After launching a feature, coworkers often ask me, “Did you A/B test it?” While the question is well-meaning, A/B testing isn’t the only way, or even the best way, of making data-informed decisions in product development. In this post, I’ll explain why, and provide other ways of validating hypotheses to assure your coworkers that a feature was worth building.
Implied Development Process
My coworker’s simple question implies a development process that looks like this:
You have an idea for a new feature
You build the new feature
You A/B test it to prove its success
Profit! High fives! Release party!
While this looks reasonable on the surface, it has a few flaws.
Flaw 1: What metric are you measuring?
The A/B test in step 3 implies that you’re comparing a version of the product with the new feature to a version without the new feature. But a key part of running an A/B test is choosing a metric to call the winner, which is where things get tricky. Your instinct is probably to measure usage of the new feature. But this doesn’t work because the control lacks the feature, so it loses before the test even begins.
There are, however, higher-level metrics you care about. These could range from broad business metrics, like revenue or time in product, to more narrow metrics, like completing a specific task (such as successfully booking a place to stay in the case of AirBnB). Generally speaking, broader metrics are slower to move and influenced by more factors, so narrow metrics are better.
Even so, this type of experiment isn't what A/B testing excels at. At its core, A/B testing is a hill-climbing technique. This means it's good at telling you whether small, incremental changes are an improvement (in other words, each test is a step up a hill). Launching a feature is more like exploring a new hill. You're giving users the ability to do something they couldn't do before. A/B testing isn't good at comparing hills to each other, nor will it help you find new hills.
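To make the "calling a winner" step concrete, here's a minimal sketch of the statistics that typically sit behind it: a two-proportion z-test comparing conversion rates between control and variation. This is a generic illustration, not Optimizely's actual implementation; the function name and the numbers are made up.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z-score, two-sided p-value).

    conv_* are conversion counts, n_* are visitor counts per arm.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 5.0% vs 5.6% conversion, 20,000 users per arm
z, p = ab_test_z(1000, 20000, 1120, 20000)
significant = p < 0.05  # the variation "wins" at the 95% confidence level
```

Note that this only tells you whether the variation moved the chosen metric, which is exactly why metric choice matters so much: a significant result on a narrow metric says nothing by itself about whether the feature was worth building.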
Flaw 2: What if the new feature loses?
Let’s say you have good metrics to measure, and enough traffic to run the test in a reasonable timeframe. But the results come back, and the unthinkable happened: your new feature lost. There’s no profit, high fives, or launch party. Now what do you do?
Because of sunk costs, your instinct is going to be to try to improve the feature until it wins. But an A/B test doesn’t tell you why it lost. Maybe there was a minor usability problem, or maybe it’s fundamentally flawed. Whatever the problem may be, an A/B test won’t tell you what it is, which doesn’t help you improve it.
The worst-case scenario is that the feature doesn’t solve a real problem, in which case you should remove it. But this is an expensive option because you spent the time to design, build, and launch it before learning it wasn’t worth building. Ideally you’d discover this earlier.
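A quick aside on the "enough traffic" caveat: the sample size needed to detect a lift grows quadratically as the lift shrinks, so small improvements take a long time to validate. Here's a rough back-of-the-envelope sketch using the standard normal-approximation formula, with 95% confidence and 80% power hard-coded; the numbers are illustrative, not from any real product.

```python
import math

def sample_size_per_arm(base_rate, relative_lift):
    """Approximate visitors needed per arm to detect `relative_lift`
    (e.g. 0.05 for +5%) over `base_rate`, in a two-sided test.

    Uses z quantiles for alpha=0.05 (1.96) and 80% power (0.84).
    """
    z_alpha, z_beta = 1.96, 0.84
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / ((p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 5% relative lift on a 4% conversion rate takes
# well over 100,000 users per arm; a 20% lift needs far fewer.
small_lift = sample_size_per_arm(0.04, 0.05)
big_lift = sample_size_per_arm(0.04, 0.20)
```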
Revised Development Process
When our well-meaning coworker asked if we A/B tested the new feature, what they really wanted to know was whether we had data to back up that it was worth building. To them, an A/B test is the only way they know of answering that question. But as user experience professionals, we know there are plenty of methods for gathering data to guide our designs. Let's revise our product development process from above:
You have an idea for a new feature.
You scope the problem the feature is supposed to solve by interviewing users, sending out surveys, analyzing product usage, or using other research methods.
You create prototypes and show them to users.
You refine the design based on user feedback.
You repeat steps 3 and 4 until you’re confident the design solves the problem you set out to solve.
You build the feature.
You do user testing to find and fix usability flaws.
You release the feature via a phased rollout (or a private/public/opt-in beta) and measure your key metrics to make sure they’re within normal parameters.
This can be run as an A/B test, but doesn’t need to be.
Once you’re confident the feature is working as expected, fully launch it to everyone.
Profit! High fives! Release party!
Optimize the feature by A/B testing incremental changes.
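The guardrail check in the phased-rollout step ("measure your key metrics to make sure they're within normal parameters") can be sketched as a simple gate. Everything here is hypothetical: the metric names, baselines, and 5% tolerance are made-up illustrations, not a real rollout system.

```python
def within_normal_range(value, baseline, tolerance=0.05):
    """True if `value` is within +/- `tolerance` (as a fraction) of baseline."""
    return abs(value - baseline) <= tolerance * baseline

def rollout_decision(metrics, baselines):
    """Advance the rollout only if every guardrail metric looks normal."""
    for name, value in metrics.items():
        if not within_normal_range(value, baselines[name]):
            return f"halt: {name} outside normal range"
    return "advance to next rollout stage"

# Hypothetical baselines from before the rollout began
baselines = {"task_completion_rate": 0.62, "weekly_revenue_per_user": 3.10}
observed = {"task_completion_rate": 0.61, "weekly_revenue_per_user": 3.05}
decision = rollout_decision(observed, baselines)
```

In practice you'd check these metrics at each stage of the rollout (1%, 10%, 50%, …) and halt or roll back if anything drifts out of range.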
In this revised development process (commonly called user-centered design), you’re gathering data every step of the way. Rather than building a feature and “validating” it at the end with an A/B test, you’re continually refining what you’re building based on user feedback. By the time you release it, you’ve iterated countless times and are confident it’s solving a real problem. And once it’s built, you can use A/B testing to do what A/B testing does best — optimization.
A longer process? Yes. A more confident, higher quality launch? Also yes.
Now when your coworkers ask if you A/B tested your feature, you can reply, “No, but we made data-informed decisions that told us users really want this feature. Let me show you all of our data!” By using research and A/B testing appropriately, you’ll build features that your users and your bottom line will love.
If you'd like to learn how other companies incorporate A/B testing into their development process, or about user-centered design in general, there are plenty of great articles on both topics.
Google Ventures invited design leaders from Twitter, Uber, and GoPro to discuss the topic of hiring designers. What follows are my aggregated and summarized notes.
Everyone agrees: finding designers is hard. They're in high demand, and the best ones are never on the market for long (if at all). As one speaker put it, "If the job is good enough, everyone is available." There are a few pieces of advice for finding them, though:
If you’re having trouble getting a full-time designer, start with contractors. If they’re good, you can try to woo them into joining full-time. Some designers like the freedom of contracting and don’t think they want to be full-time anywhere, but if you can show them how awesome your team and culture and product are, you can lure them over.
Look for people who are finishing up a big project, or have been at the same place for 2+ years. These people might be looking for a new challenge, and you can nab them before they’re officially on the market.
Dedicate hours each day to sourcing and recruiting. Work closely with your recruiters (if you have any) to train them on what to look for in portfolios and CVs. Include them in interview debriefs so they can understand what was good and bad about candidates, and tune who they reach out to accordingly. In other words, iterate on your hiring process. We've done this a lot at Optimizely.
Even better is to have dedicated design recruiter(s) who understand the market and designers.
If you have no recruiters, you could consider outsourcing recruiting to an agency.
When reaching out to designers, get creative. Use common connections, use info from their site or blog posts, follow people on Twitter, etc.
Typically you’ll have the highest chance for success if you, as the hiring manager, reach out, rather than a recruiter.
As a designer, this is what hiring managers will be looking for:
Have a high word-to-picture ratio. Product design is all about communication: understanding the problem, the solutions, and the context. If you can't clearly communicate those things, you aren't a good designer.
An exception is visual designers, who can get away with more visually-oriented portfolios.
What about your design is exceptional? Why should I care? Make sure to make this clear when writing about your work.
When looking at a portfolio, hiring managers will be wondering, “What’s the complexity of the problem being solved? Can they tell a story? Are they self critical? What would they do differently or what could be better?” Write about all of these things in your portfolio; don’t just have pictures of the final result.
An exception to the above is high demand designers, who don’t have time for a portfolio because they don’t need one to get work. Hiring these people is all based on reputation.
Don’t have spelling errors. Spelling errors are an automatic no-go. Designers need to be sticklers for details, and have “pride of ownership.”
One million percent agree
On Interviewing Designers
Pretty much everyone has a portfolio presentation, followed by 3–6 one-on-one interviews. Everyone must be a “Yes” for an offer to be made. (Optimizely is the same.)
Look for curiosity in designers. Designers should be motivated to learn, grow, read blogs/industry news, and use apps/products just to see what the UX and design is like. They should have a mental inventory of patterns and how they’re used.
In portfolio reviews, designers should NEVER play the victim. Don't blame the PM, the organization, engineering, etc. (even if the blame is deserved). Don't talk shit about the constraints; design is all about constraints. Instead, talk about how you worked within them (e.g. "there was a limited budget, therefore…").
On Design Exercises
People were pretty mixed about whether design exercises are useful during the interview process or not. Arguments against them include:
They can be ethically wrong if you’re having candidates do spec work for the company. You’re asking people to work for free, and you open yourself up to lawsuits.
I wholeheartedly agree
They don’t mimic the way people actually work. Designers aren’t usually at a board being forced to create UIs and make design decisions.
I disagree with this sentiment. A lot of work I do with our designers is at whiteboards. Decisions and final designs aren’t always being made, but we’re exploring ideas and thinking through our options. Doing this in an interview simulates what it’s like to work with someone, and how they approach design. It isn’t about the final whiteboarded designs, it’s about their process, questions they ask, solutions they propose, how they think about those solutions, etc. Plus, you get to experience what they’re like to interact with.
Take home exercises aren’t recommended. People are too busy for them, and senior candidates won’t do them.
The exception to this is junior designers, who don't have much of a portfolio yet, so a take-home exercise lets you see how they actually design UIs.
All of this has been true in my experience, as well.
Arguments for design exercises:
You get to see how candidates approach a problem and explore solutions
You get a sense of what it’s like to work with them
You hear them evaluate ideas, which tells you how self-critical they are and how well they know best practices
Personally, I find design exercises very useful. They tell me a lot about how a candidate thinks, and what they’re like to work with. The key is to find a good exercise that isn’t spec work. GV wrote a great article on this topic.
On Making a Hiring Decision
It’s easy when candidates are great or awful — the yes and no decisions are easy. The hard ones are when people are mixed. Typically this means you shouldn’t extend an offer, but there are reasons to give them a second chance:
They were nervous
English is their second language
They were stressed from interviewing
In these cases, try bringing the person back in a more relaxed environment; for example, have lunch or coffee together.
Some people have great work, but some sort of personality flaw (e.g. they don’t make eye contact with women). These people are a “no” — remember, “No assholes, no delicate geniuses”, and avoid drama at all costs.
When making an offer, you’ll sometimes have to sell them on the company, team, product, and challenges. One technique is to explain why they’ll be a great fit on the team (you’ll flatter them while simultaneously demonstrating the challenges they’ll face and impact they’ll have). If you have a big company and team, you can explain all the growth and learning opportunities a large team provides. And you don’t need to be small to move fast and make impactful decisions.
On Design Managers
Hiring design managers is hard. They’re hard to find, hard to attract, and most designers want to continue making cool shit rather than manage people. But if you’re searching for one, your best bet is to promote a senior designer to manager. They already understand the company, market, culture, and team, so they’re an easy fit. The art of management is often custom to the team and company.
If that isn’t an option, go through your network to find folks. You aren’t likely to have good luck from randos applying via the company website, or sourcing strangers.
Great managers are like great coaches — they're ex-players who worked really hard to learn the game, and thus can teach it to others. Naturally gifted players, like Michael Jordan, often aren't good coaches because they didn't have to work hard to understand the game — it came naturally to them.
I feel like I fit this description. I worked hard to learn a lot of the skills that go into design. It took me a long time to feel comfortable calling myself a “designer”; it didn’t come naturally.
Management is a mix of creative direction, people management, and process. A manager should be able to partner with a senior designer to ship great product. Managers shouldn't evaluate designers based on outcomes or impact: people can't always control which project they're on, some projects get cancelled, not all projects are equal, etc. Instead, reward behavior and process (e.g. an "A" for effort).
There are 4 things to look for in good managers:
They Get Shit Done
They improve the team, e.g. via recruiting, events, coaching/mentoring
They have, or can build, good relationships in the organization
They have hard design skills, empathy, and vision
On Generalists vs Specialists and Team Formation
The consensus is to hire 80/20 designers, i.e. generalists who have deep skills in one area (e.g. visual design, UX, etc.). They help teams move faster, and can work with specialists (e.g. content strategists) to ship high quality products quickly. Good ones will know what they don’t know, and seek help when they need it (e.g. getting input from visual designers if that isn’t their strength). “No assholes, no delicate geniuses”. Avoid drama at all costs.
This is the type of person we seek to hire as well. I’ve also seen firsthand that good designers are self-aware enough to know what their weaknesses are, and to seek help when necessary.
Cross-functional teams should be as small as possible while covering the breadth of skills needed to ship features. More people means more complexity and extra communication overhead. (I have certainly seen this mistake made at Optimizely.)
Having designers on a separate team (e.g. Comm/marketing designers on marketing) makes for sad designers. They become isolated, disgruntled, and unhappy. Ideally, they shouldn’t be on marketing. If they are separate, make bridges for the teams to communicate. Include them in larger design team meetings and crits and stuff so they feel included.
I totally agree. At Optimizely, we fought hard to keep our Communication Designers on the Design team for all the reasons listed here (Marketing wanted to hire their own designers). Our Marketing department ended up hiring their own developers to build and maintain our website, but earlier this year they moved over to the Design team so they could be closer to other developers and the Communication Designers working on the website. So far, they’re much happier on Design.
Should designers code?
People were somewhat mixed on this question. It was mostly agreed that it’s probably not a good use of their time, but it’s always a trade-off depending on what a specific team needs to launch high quality product. A potential danger is that they may only design what’s easy to code, or what they know they can build. That is, it’s a conflict of interest that leads to them artificially limiting themselves and the design.
As a designer who codes, I only partially agree with what was said here. It’s true that you can fall into the trap of designing what’s easy to build, but it doesn’t have to be that way. I overcame this by focusing on explicitly splitting out the ideation/exploration phase from the evaluation/convergence phase (something that good designers should be doing anyway). When designing, I explore as many ideas as I can without thinking at all about implementation, then I evaluate which idea(s) are best. One of those criteria (among many) is implementation cost and whether it used existing UI components we’ve already built. I’ve found this to be effective at not limiting myself to only what I know is easy to build, but it took a lot of work to compartmentalize my thinking this way.
Artificially constraining the solution space is also a trap any designer can fall into, regardless of whether or not you know how to code. I’ve heard designers object to ideas with, “But that will be hard to build!”, or, “This idea re-uses an existing frontend component!” Whenever I hear that, I always tell them that they’re in the ideation phase, and they shouldn’t limit their thinking. Any idea is a good idea at this point. Once you’ve explored enough ideas, then you can start evaluating them and thinking about implementation costs. And if you have a great idea that’s hard to implement, you can argue for why it’s worth building.
On Designer-to-Engineer Ratios
How many designers does a team need? It depends on the work, and what the frontend or implementation challenges are. For example, apps with lots of complex interactions will need more engineers to build. A common ratio is about 1 designer for every 10 engineers.
More important than the specific ratio is to not form teams without a designer. Those teams get into bad habits, won’t ship quality product, and will dig a hole of design debt that a future designer will have to climb out of. (I’ve been through this, and it takes a lot of time and effort to correct broken processes of teams that lack design resources).
One way of knowing if you don’t have enough designers is if engineering complains about design being a bottleneck, although this is typically a lagging indicator. A great response to this was that the phrase “Blocked on design” is terrible. Design is a necessary creative endeavor! Why don’t we say that engineering is blocking product from being released? (In fact, for the first time ever, we have been saying this at Optimizely, since we need more engineers to implement some finished designs. Interested in joining the Engineering team at Optimizely? Drop me a line @jlzych).
Another good quote: “There’s nothing more dangerous than an idle designer.” An idle designer can go off the deep end redesigning things, and eventually get frustrated when their work isn’t getting used. So there should always be a bit more work than available people to do it. True dat.
This was a great event with fun speakers, good attendees, and excellent advice. The most interesting discussion topic for me was on design managers, since we’re actively searching for a manager now (let me know if you’re interested!) Overall, Optimizely’s hiring practices are in line with the best practices recommended here, so it’s nice to know we’re in good company.
On 11/18/14, Optimizely officially launched A/B testing for iOS apps. This was a big launch because our product had been in beta for months, but none of us felt proud to publicly launch it. To get us over the finish line, we focused our efforts on building out an MVPP — a Minimum Viable Product we're Proud of (which I wrote about previously). A core part of the MVPP was redesigning our editing experience from scratch. In this post, I'll walk you through the design process, show you the sketches and prototypes that led up to the final design, and share the lessons learned along the way, all from my perspective as the Lead Designer.
A video of the final product
To provide context, our product enables mobile app developers to run A/B tests in their app, without needing to write any code or resubmit to the App Store for approval. By connecting your app to our editor, you can select elements, like buttons and headlines, and change their properties, like colors and text. Our beta product was functional in this regard, but not particularly easy or delightful to use. The biggest problem was that we didn’t show you your app, so you had to select elements by searching through a list of your app’s views (a process akin to navigating your computer’s folder hierarchy to find a file). This made the product cumbersome to use, and not visually engaging (see screenshot below).
Optimizely’s original iOS editor.
Designing the WYSIWYG Editor
To make this a product we’re proud to launch, it was obvious we’d need to build a What-You-See-Is-What-You-Get (WYSIWYG) editor. This means we’d show the app in the browser, and let users directly select and edit their app’s content. This method is more visually engaging, faster, and easier to use (especially for non-developers). We’ve had great success with web A/B testing because of our WYSIWYG editor, and we wanted to replicate that success on mobile.
This is an easy design decision to make, but hard to actually build. For this to work, it had to be performant and reliable. A slow or buggy implementation would have been frustrating and a step backwards. So we locked a product designer and two engineers in a room to brainstorm ideas and build functional prototypes together. By the end of the week, they had a prototype that cleared the technical hurdles and proved we could build a delightful editing experience. This was a great accomplishment, and a reminder that any challenge can be solved by giving a group of smart, talented individuals space to work on a seemingly intractable problem.
Creating the Conceptual Model
With the app front and center, I needed an interface for how users change the properties of elements (text, color, position, etc.). Additionally, there are two other major features the editor needs to expose: Live Variables and Code Blocks. Live Variables are native Objective-C variables that can be changed on the fly through Optimizely (such as the price of items). Code Blocks let users choose code paths to execute (for example, a checkout flow that has 2 steps instead of 3).
Before jumping into sketches or anything visual, I had to get organized. What are all the features I need to expose in the UI? What types of elements can users edit? What properties can they change? Which of those are useful for A/B tests? I wrote down all the functionality I could think of. Additionally, I needed to make sure the UI would accommodate new features to prevent having to redesign the editor 3 months down the line, so I wrote out potential future functionality alongside current functionality.
I took all this functionality and clustered them into separate groups. This helped me form a sound conceptual model on which to build the UI. A good model makes it easier for users to form an accurate mental model of the product, thus making it easier to use (and more extensible for future features). This exercise made it clear to me that there are variation-level features, like Code Blocks and Live Variables, that should be separate from element-level features that act on specific elements (like changing a button’s text). This seems like an obvious organizing principle in retrospect, but at the time it was a big shift in thinking.
After forming the conceptual model, I curated the element properties we let users edit. The beta product exposed every property we could find, with no thought as to whether or not we should let users edit it. More properties sounds better and makes our product more powerful, but it comes at the cost of ease of use. Plus, a lot of the properties we let people change don’t make sense for our use case of creating A/B tests, and don’t make sense to non-developers (e.g. “Autoresizing mask” isn’t understandable to non-technical folks, or something that needs to be changed for an A/B test).
I was ruthless about cutting properties. I went through every single one and asked two questions: first, is this understandable to non-developers (my definition of “understandable” being would a person recognize it from common programs they use everyday, like MS Office or Gmail); and second, why is this necessary for creating an A/B test? If I was unsure about an attribute, I defaulted to cutting it. My reasoning was it’s easy to add features to a product, but hard to take them away. And if we’re missing any essential properties, we’ll hear about it from our customers and can add it back.
My lo-fi Google Doc to organize features
Let the Sketching Begin!
With my thoughts organized, I finally started sketching a bunch of editor concepts (pictured below). I had two big questions to answer: after selecting an element, how does a user change its properties? And, how are variation-level features (such as Code Blocks) exposed? My top options were:
Use a context menu of options after selecting an element (like our web editor)
When an element is selected, pop up an inline property pane (ala Medium’s and Wordpress’s editors)
Have a toolbar of properties below the variation bar
Show the properties in a drawer next to the app
A sketch of the toolbar concept
A messy sketch of inline formatting options (specifically text)
One of the many drawer sketches
Each approach had pros and cons, but organizing element properties in a drawer showed the most promise because it’s a common interaction paradigm, it fit easily into the editor, and was the most extensible for future features we might add. The other options were generally constraining and better suited to limited functionality (like simple text formatting).
Because I wanted to maximize space for showing the app, my original plan was to show variation-level features (e.g. Code Blocks; Live Variables) in the drawer when no element was selected, and then replace those with element-level features when an element was selected. Features at each level could be separated into their own panes (e.g. Code Blocks would have its own pane). Thus the drawer would be contextual, and all features would be in the same spot (though not at the same time). This left plenty of space for showing an app, and kept the editor uncluttered.
A sketch told me that layout-wise this plan was viable, but would it make sense to select an element one place, and edit its properties in another? Would it be jarring to see features come and go depending on whether an element was selected or not? How will you navigate between different panes in the drawer? To answer these questions, an interactive prototype was my best course of action (HTML/CSS/JS being my weapon of choice).
An early drawer prototype. Pretend there’s an app in that big empty white space.
I prototyped dozens of versions of the drawer, and shopped them around to the team and fellow designers. Responses overall were very positive, but the main concern was that the tab buttons (“Text”, “Layout”, etc., in the image above) in the drawer won’t scale. Once there are more than about 4, the text gets really squeezed (especially in other languages), stunting our ability to add new features. One idea to alleviate this, suggested by another designer, was to use an accordion instead of tab buttons to reveal content. A long debate ensued about which approach was better. I felt the tab buttons were a more common approach (accordions were for static content, not interactive forms that users will be frequently interacting with), whereas he felt the accordion was more scalable by allowing room for adding more panes, and accommodates full text labels (see picture below).
Drawer with accordion prototype. Pretend that website is an iOS app.
To help break this tie, I built another prototype. After playing around with both for awhile, and gathering feedback from various members of the team, I realized we were both wrong.
After weeks of prototyping and zeroing in on a solution, I realized it was the wrong solution. And the attempt to fix it (accordions) was in fact just an iteration of the original concept that didn't address the real problem. I needed a new idea that would be superior to all previous ideas. So I hit reset and went back to the drawing board (literally). I reviewed my initial organizing work and all required functionality. Clearly delineating variation-level properties from element-level properties was a sound organizing principle, but the drawer was getting overloaded by having everything in it. So I explored ways of more cleanly separating variation-level properties from element-level properties.
After reviewing my feature groupings, I realized there aren’t a lot of element properties. They can all be placed in one panel without needing to navigate between them with tabs or accordions at all (one problem solved!).
The variation properties were the real issue, and had the majority of potential new features to account for. Two new thoughts became apparent as I reviewed these properties: first, variation-level changes are typically quick and infrequent; and second, variation-level changes don’t typically visually affect the app content. Realizing this, I hit upon an idea to have a second drawer that would slide out over the app, and go away after you made your change.
To see how this would feel to use, I made yet another interactive prototype. This new UI was clean, obviated the need for tab buttons or accordions, was quick and easy to interact with, and put all features just a click or two away. In short, this new design direction was a lot better, and everyone quickly agreed it made more sense than my previous approach.
Reflecting back on this, I realize I had made design decisions based on edge cases, rather than focusing on the 80% use case. Starting the design process over from first principles helped me see this much more clearly. I only wish I had caught it sooner!
Admitting this design was not the right solution, after a couple months of work, and after engineers already began building it, was difficult. The thought of going in front of everyone (engineers, managers, PMs, designers, etc.) and saying we needed to change direction was not something I was looking forward to. I was also worried about the amount of time it would take me to flesh out a completely new design. Not to mention that I needed to thoroughly vet it to make sure that it didn’t have any major drawbacks (I wouldn’t have another opportunity to start over).
Luckily, once I started fleshing out this new design, those fears mostly melted away. I could tell this new direction was stronger, which made me feel good about restarting, which made it easier to sell this idea to the whole team. I also learned that even though I was starting over from the beginning, I wasn’t starting with nothing. I had learned a lot from my previous iterations, which informed my decision making this second time through.
Build and Ship!
With a solid design direction finally in place, we were able to pour on the engineering resources to build out this new editor. Having put a lot of thought into both the UI and technical challenges before writing production code, we were able to rapidly build out the actual product, and ended up shipping a week ahead of our self-imposed deadline!
The finished mobile editor
Create a clear conceptual model on which to build the UI. A UI that accurately represents the system’s conceptual model will make it easy for users to form a correct mental model of your product, thus making it easier to use. To create the system model, write down all the features, content, and use cases you need to design for before jumping into sketches or prototypes. Group them together and map out how they relate to each other. From this process, the conceptual model should become clear. Read more about mental models on UX Magazine.
Don’t be afraid to start over. It’s scary, and hard, and feels like you wasted a bunch of time, but the final design will come out better. And the time you spent on the earlier designs wasn’t wasted effort — it broadened your knowledge of both the problem and solution spaces, which will help you make better design decisions in your new designs.
Design for the core use case, not edge cases. Designing for edge cases can clutter a UI and get in the way of the core use case that people do 80% of the time. In the case of the drawer, it led to overloading it with functionality.
Any challenge can be solved by giving a group of smart, talented individuals space to work on seemingly intractable problems. We weren’t sure a WYSIWYG editor would be technically feasible, but we made a concerted effort to overcome the technical hurdles, and it paid off. I’ve experienced this time and time again, and this was yet another reminder of that lesson.
On 11/18/14, the team was proud to announce Optimizely’s mobile A/B testing product to the world. Week-over-week usage has been steadily rising, and customer feedback has been positive, with people saying the new editor is much easier and faster to use. This was a difficult product to design, for both technical and user experience reasons, but I had a great time doing it and learned a ton along the way. And this is only the beginning — we have a lot more work to do before we’re truly the best mobile A/B testing product on the planet.
On November 18th, 2014, we publicly released Optimizely’s iOS editor. This was a big release for us because it marked the end of a months-long public beta in which we received a ton of customer feedback and built a lot of missing features. But before we launched, there was one problem the whole team rallied behind to fix: we weren’t proud of the product. To fix this issue, we went beyond a Minimum Viable Product (MVP) to an MVPP — the Minimum Viable Product we’re Proud of.
What follows is the story of how we pulled this off, what we learned along the way, and product development tips to help you ship great products, from the perspective of someone who just did it.
The finished iOS editor.
Genesis of the MVPP
We released a public beta of Optimizely’s iOS editor in June 2014. At that time, the product wasn’t complete yet, but it was important for us to get real customer feedback to inform its growth and find bugs. So after months of incorporating user feedback, the beta product felt complete enough to publicly launch. There was just one problem: the entire team wasn’t proud of the product. It didn’t meet our quality bar; it felt like a bunch of features bolted together without a holistic vision. To fix this, we decided to overhaul the user experience, an ambiguous goal that could easily go on forever, never reaching a clear “done” state.
We did two things to be more directed in the overhaul. First, we committed to a deadline to prevent us from endlessly polishing the UI. Second, we took inspiration from the Lean Startup methodology and chose a set of features that made up a Minimum Viable Product (MVP). An MVP makes it clear that we’ll cut scope to make the deadline, but it says nothing about quality. So to make it explicit that we were focusing on quality and wanted the whole team to be proud of the final product, we added an extra “P” to MVP. And thus, the Minimum Viable Product we’re Proud of — our MVPP — was born.
Create the vision
Once we had agreed on a feature set for the MVPP, a fellow Product Designer and I locked ourselves in a war room for the better part of a week to flesh out the user experience. We mapped out user flows and created rough mockups that we could use to communicate our vision to the larger development team. Fortunately, we had some pre-existing usability test findings to inform our design decisions.
Sketches, mockups, and user flows from our war room.
These mockups were immensely helpful in planning the engineering and design work ahead. Instead of talking about ideas in the abstract, we had concrete features and visuals to point to. For example, everyone knew what we meant when we said “Improved Onboarding Flow.” With mockups in hand, communication between team members became much more concrete and people felt inspired to work hard to achieve our vision.
Put 6 weeks on the clock… and go!
We had 3 sprints (6 weeks) to complete the MVPP (most teams at Optimizely work in 2 week cycles called “sprints”). It was an aggressive timeline, but it felt achievable — exactly where a good deadline should be.
In the first sprint, the team made amazing progress. All the major pieces had been built, without any major re-scoping or redesigns. There were still bugs to fix, polish to apply, and edge cases to consider, but the big pieces core to our vision were in place.
That momentum carried over into the second sprint, which we spent fixing the biggest bugs, filling functional holes, and polishing the UI.
For the third and final sprint, we gave ourselves a new goal: ship a week early. We were already focused on launching the MVPP, but at this point we became laser focused. During daily standups, we looked at our JIRA board and asked, “If we were launching tomorrow, what would we work on today?”
We were ruthless about prioritizing tasks and moved a lot of items that were important, but not launch-critical, to the backlog.
During the first week of sprint 3, we also did end-to-end product walkthroughs after every standup to ensure the team was proud of the new iOS editor. We all got to experience the product from the customer’s perspective, and caught user experience bugs that were degrading the quality of our work. We also found and fixed a lot of functional bugs during this time. By the end of the week, everyone was proud of the final product and felt confident launching.
The adrenaline rush & benefit of an early release
On 11/10, we quietly released our MVPP to the world — a full week early! Not only did shipping early feel great, it also gave us breathing room to further polish the design, fix bugs, and give the rest of the company time to prepare everything needed to launch the MVPP.
Product teams don’t launch products alone; it takes full collaboration between marketing, sales, and success to create materials to promote it, sell it, and enable our customers to use it. By the time the public announcement on 11/18 rolled around, the whole company was extremely proud of the final result.
While writing this post and reflecting on the project as a whole, a number of techniques became clear to me that can help any team ensure a high quality, on-time launch:
Add a “P” to “MVP” to make quality a launch requirement: Referring to the project as the “Minimum Viable Product we’re Proud of” made sure everyone on the team approached the product with quality in mind. Every project has trade-offs between the ship date, quality, and scope. It’s very hard to do all three. Realistically, you can do two. By calling our project an MVPP, we were explicit that quality would not be sacrificed.
Set a deadline: Having a deadline focused everyone’s efforts, preventing designers from endlessly polishing interfaces and developers from spinning their wheels imagining every possible edge case. Make it aggressive, yet realistic, to instill a sense of urgency in the team.
Focus on the smallest set of features that provide the largest customer impact: We were explicit about what features needed to be redesigned, and just as importantly, which were off limits. This prevented scope-creep, and increased the team’s focus.
Make mockups before starting development: This is well-known in the industry, but it’s worth repeating. Creating tangible user flows and mockups ahead of time keeps planning discussions on track, removes ambiguity, and quickly explains the product vision. It also inspires the team by rallying them to achieve a concrete goal.
Do daily product walkthroughs: Our product walkthroughs had two key benefits. First, numerous design and code bugs were discovered and fixed. And second, they ensured we lived up to the extra “P” in “MVPP.” Everyone had a place to verbally agree that they were proud of the final product and confident launching. Although these walkthroughs made our standups ~30 minutes longer, it was worth the cost.
Ask: “If we were shipping tomorrow, what would you work on today?”: When the launch date is approaching, asking this question separates the critical, pre-launch tasks from the post-launch tasks.
Lather, Rinse, and Repeat
By going beyond an MVP to a Minimum Viable Product we’re Proud of, we made quality a requirement for launching. And by using a deadline, we stayed focused only on the tasks that were absolutely critical to shipping. With a well-scoped vision, mockups, and a date not too far in the future, you too can rally teams to create product experiences they’re proud of. And then do it again.