At the start of 2015, I officially became a Design Manager at Optimizely. I transitioned from an individual contributor, product design role into a leadership role. For the entirety of my career up to this point, I had never planned on going into management. Doing hands-on, nitty-gritty design and coding work seemed so much more exciting and fulfilling. Management sounded more like a necessary evil than an interesting challenge. But since then, my thinking has changed. So why did I become a Design Manager?
When I joined Optimizely in September of 2012, the design team was just 2 people. I made it 3 in early 2013 when my role moved from engineering to design. And at the end of 2014, we were 16. (As of this writing we’re at 21!) So we’ve seen tremendous growth, and I’ve been present for all of it. And throughout this time, there had only been one manager for the entire team, which was not healthy for my manager or the team. The possibility of managing had been floated to me in mid-2014, but I wasn’t interested. I had been a Product Designer for less than a year and wasn’t ready to move on yet. I felt like I had just started hitting my stride as a designer, and wanted to continue honing my craft. I recognized the need for another manager, but I didn’t want it to be me.
As more designers joined Optimizely, I began taking on more managerial tasks. I also saw more issues arising within the team that our manager didn’t always have time to address. In short, the importance of this role became more apparent, and its day-to-day work became more real to me.
But the real turning point came when my manager went on vacation. In his absence, I was the go-to person for all of the team’s needs. I suspended most of my design work for this period, and really got a taste of what it would be like to work as a full-time manager. I started asking myself, “What if this was my full-time role? Would I enjoy it?” I went back and forth in my head quite a bit. The idea of leaving behind design and code was both scary and saddening. I had so much I was still looking forward to building! Plus, as we all know, change is hard.
But the team had more needs than our lone manager could handle. And I care deeply about the team, so for the greater good I decided it was time to step up. I realized that by helping my team be as great as possible, I would have a bigger impact on the company. And by working more closely with engineering managers and PMs, I would have a bigger impact on the product. I’d be getting out of the weeds of day-to-day design to work on the product from a higher perspective, across individuals and teams. The impact is less direct, but broader. All of this sounded tremendously exciting to me, and more impactful than individual contributor work.
I also realized the things I love about design (problem solving, ideating, etc.) would still be present in my new role. But instead of applying those skills to concrete visual interfaces, I would apply them to abstract team and personnel issues. I’d be using the design process to solve a different set of problems.
So when my manager got back from vacation, I told him my decision and we started transitioning me into a managerial role. As of the start of 2015, I’ve been managing full time and loving it. I’m still a bit sad to leave design work behind, and worry about my skills atrophying, but I look forward to the new challenges that await. It was a difficult decision that took me a long time to come around to, but I’m excited to make the team, product, and company as great as possible.
This year’s Warm Gun conference was great, just like last year’s. This year generally seemed to be about using design to generate and validate product insights, e.g. through MVPs, prototyping, researching, etc.
Kickoff (Jared Spool)
Jared Spool’s opening talk focused mostly on MVPs and designing delight into products. To achieve this, he recommended the Kano Model and Dana Chisnell’s Three Approaches to Delight: adding pleasure (e.g. through humorous copy), improving the flow, and creating meaning (e.g. belief in a company’s mission, which is the hardest to achieve).
You’re Hired! Strategies for Finding the Perfect Fit (Kim Goodwin)
This was a great talk about how to hire a great design team, which is certainly no easy task (as I’ve seen at Optimizely).
Hiring is like dating on a deadline – after a couple of dates, you have to decide whether or not to marry the person!
You should worry more about missing the right opportunity than about avoiding the wrong choice
5 Lessons She’s Learned
Hire with a long-term growth plan in mind
Waiting until you need someone to start looking is too late (it takes months to find the right person)
What kind of designers will you need? Do you want generalists (can do everything, but not great at any one thing; typically needed by small startups) or specialists (great at one narrow thing, like visual design; typically needed by larger companies)?
Grow (i.e. mentor junior people) or buy talent? Training junior people isn’t cheap - it takes a senior person’s time.
A healthy team has a mix of skill levels (she recommends a ratio of 1 senior:4 mid:2 junior). (Optimizely isn’t far off – we mostly lack a senior member!)
A big mistake she sees a lot of startups make is to hire too junior of a person too early
Understand the Market
The market has 5 cohorts: low skill junior folks (think design is knowing tools); spotty/developing skills; skilled specialists; generalists; and team leads
Senior == able to teach others, NOT a title (there’s lots of title inflation in the startup industry). Years of experience alone do NOT make a person senior. Many people with “senior” in their title have holes in their skills (especially if they’ve only worked on small teams at startups)
5 years experience only at a design consultancy == somewhat valuable (lots of mentorship opportunities from senior folks, but they lack experience working continuously/iteratively on a single product)
5 years experience mixed at design consultancy and on an in-house team == best mix of skills (worked with senior folks, and on a single product continuously)
5 years only on small startup teams == less valuable than the other two; it’s a red flag. There are often holes in their skills. They’re often “lone rangers” who haven’t worked with senior designers, or on a team, and have probably developed bad habits. They often have an inflated self-assessment of their skills and don’t know how to take feedback. (uh-oh! I’m kind of in this group)
It takes craft to assess craft
Alternate between hiring leads and those who can learn from the leads (i.e. a mix of skill levels)
Education matters - school can fill in gaps of knowledge. Schools differ in the types of designers they produce (HCI, visual, etc.). (yay for me!)
Make 2 lists before posting the job
First, list the responsibilities a person will have (what will they do?)
Second, list the skills they need to achieve the first list.
Turn these 2 lists into a job posting (avoid listing tools in the hiring criteria - that is dumb)
DON’T look for someone who has experience designing your exact product in your industry already. The best designers can quickly adapt to different contexts (better to hire a skilled designer with no mobile experience than a junior/mid designer with ONLY mobile experience - there’s ramp-up time, but that’s negligible for a skilled designer)
Junior to senior folks progress through this:
Knows concepts -> Can design effectively w/coach -> Can design solo -> Can teach others
On small/inexperienced teams, watch out for the “Similar to me” effect. Designers new to hiring/interviewing will evaluate people against themselves, rather than evaluate their actual skills or potential. (Can ask, “Where were you relative to this person when you were hired?” to combat this).
Evaluate Based on Skills You Need
Resumes == bozo filter
Look at the goals, process, role, results, lessons learned, things they’d do differently (we’re pretty good at this at Optimizely!)
Do “behavioral interviewing”, i.e. focus on specifics of actual behavior. Look at their actual work, do design exercises, ask “Tell me about a time when…”. It’s a conversation, not an interrogation. (Optimizely is also good at this!)
Be Real to Close the Deal
Be honest about what you’re looking for
If you have doubts about a person, discuss them directly with the candidate to try overcome them (or confirm them). (We need to do this more at Optimizely)
Product Strategy in a Growing Company (Des Traynor)
This was one of my favorite talks. Product strategy is hard, and it’s really easy to say “Yes” to every idea and feature request. One of my favorite bits: you need to say “No” because something’s not in the product vision. If you never say this, then you have no product vision. (This has been a challenge at Optimizely at times).
Software is eating the world!
We’re the ones who control the software that’s available to everyone.
His niece asked, “Where do products come from?” There are 5 reasons a product is built:
Customer-focused (built to solve pain point(s))
Auteur (art + business)
Scratching your own itch (solving a problem you have yourself)
Spotting a gap in the market (you see a need in the marketplace)
Copy a pattern (e.g. “Uber for X!”)
(Optimizely started as scratching own itch, but is adapting to customer-focused)
Scope: scalpel vs. swiss army knife
When first starting, build a scalpel (it’s the only way to gain market share when starting out)
Gall’s Law: complex systems evolve from simple ones (like WWW. Optimizely is also evolving towards complexity!). You can’t set out to design a complex system from scratch (think Google Wave)
A simple product !== making a product simple (i.e. an easy to use product isn’t necessarily simple [difference between designing a whole UX vs. polishing a specific UI]).
Simplify a product by removing steps. Watch out for Scopi-locks (i.e. scope the problem just right – not too big, not too small). You don’t want to solve steps of a flow that are already solved by a market leader, or when there are multiple solutions already in use by people (e.g. don’t rebuild email, there’s already Gmail and Outlook and Mailbox, etc.)
How to fight feature creep
Say “No” by default
Have a checklist new ideas must go through before you build them, e.g. (this is a subset):
Does it benefit all customers?
Is the value worth the effort?
Does it improve existing features? Does it increase engagement across the system, or divide it?
If a feature takes off, can we afford it? (E.g. if you have a contractor build an Android app, how will you respond to customer feedback and bugs?)
Is it low effort for the customer to use, and does it deliver high value? (E.g. Circles in G+ fails this criterion - no one wants to manage friends like this)
It’s been talked about forever; it’s easy to build; we’ve put a lot of effort in already == all bad reasons to ship a new feature
(Optimizely has failed at this a couple of times. E.g. Preview As. On the other hand, Audiences met these criteria)
Once you ship something, it’s really hard to take back. Even if customers complain about it, there is always a minority that will be really angry if you take it away.
Hunches, Instincts, and Trusting Your Gut (Leah Buley)
This was probably my least favorite talk. The gist of it is that as a designer, there are times you need to be an expert and just make a call using your gut (colleagues and upper management need you to be this person sometimes). We have design processes to follow, but there are always points at which you need to make a leap of faith and trust your gut. I agree with those main points, but this talk lost me by focusing only on visual design. She barely mentioned anything about user goals or UX design.
Her talk was mainly about “The Gut Test”, which is a way of looking at a UI (or print piece, or anything that has visual design) and assessing your gut reaction to it. This is useful for visual design, but won’t find issues like, “Can the user accomplish their goal? Is the product/feature easy to use?” (Something can be easy to use, but fail her gut test). It’s fine that she didn’t tackle these issues, but I wish she would have acknowledged more explicitly that the talk was only addressing a thin slice of product design.
In the first 50ms of seeing something, we have an immediate, visceral reaction
Exposure effect: the more we see something, the more we like it (we lose our gut reaction to it)
To combat this, close your eyes for 5 seconds, then look at a UI and ask these questions:
What do you notice first?
How does it make you feel (if anything)? What words would you use to describe it?
Is it prototypical? (i.e. does it conform to known patterns). Non-conformity == dislike and distrust
Then figure out what you can do to address any issues discovered.
Real Life Trust Online (Mara Zepeda)
This talk was interesting, but not directly applicable to my everyday job at Optimizely. The gist of it was: how do we increase trust in the world, and not just in the specific product or company we’re using? For example, when you buy or sell something successfully on Craigslist, your faith in humanity increases a little bit. But reviews on Amazon, for example, increase your trust in that product and in Amazon, but not necessarily in your fellow human beings.
Before trust is earned, there’s a moment of vulnerability and an optimism about the future.
Trust gets built up in specific companies (e.g. Craigslist - there’s no reason to trust the company or site, but trust in humans and universe increases when a transaction is successful).
Social networks don’t create networks of trust in the real world
Switchboard MVP was a phone hotline
LinkedIn: ask for job connections, no one responds. But if you call via Switchboard, people are more likely to respond (there’s a real human connection)
They’re trying to create a trust network online
To build trust:
Humanize the transaction (e.g. make it person to person)
Give a victory lap (i.e. celebrate when the system works)
Provide allies / mentors along the journey (i.e. people who are invested in the journey being successful, and can celebrate the win)
She brought up the USDA’s “Hay Net” as an example of this. It connects those who have Hay with those who need Hay (and vice versa). UI had two options: “Have Hay” and “Need Hay”, which I find hilarious and amazing.
Designing for Unmet Needs (Steve Portigal)
Steve Portigal’s talk was solid, but it didn’t really tell me anything I didn’t already know. The gist of it was there are different types of research (generative v. evaluative), you need to know which is appropriate for your goals (although it’s a spectrum, not a dichotomy), and there are ways around anything you think is stopping you (e.g. no resources; no users; no product; etc.). The two most interesting points to me were:
Create provocative examples/prototypes/mocks to show people and gather responses (he likened this to a scientist measuring reactions to new stimuli). Create a vision of the future and see what people think of it, find needs, iterate, adapt. Go beyond iterative improvements to an existing product or UI (we’re starting to explore this technique at Optimizely now).
Build an infrastructure for ongoing research. This is something that’s been on my mind for a while, since we’re very reactive in our research needs at Optimizely. I’d like us to have more continual, ongoing research that’s constantly informing product decisions.
Redesigning with Confidence (Jessica Harllee)
This was a cool talk that described the process Etsy went through to redesign the seller onboarding experience, and how they used data to be confident in the final result. The primary goal was increasing the shop open rate, while maintaining the products listed per seller. They a/b tested a new design that increased the open rate, but had fewer products listed per seller. They made some tweaks, a/b tested again, and found a design that increased the shop open rate while maintaining the number of products listed per seller. Which means more products are listed on Etsy overall!
I didn’t learn much new from this talk, but enjoy hearing these stories. It also got me thinking about how we don’t a/b test much in the product at Optimizely. A big reason is because it takes too long to get significant results (as Jessica mentioned in her talk, they had to run both tests for months, and the overall project took over a year). Another reason is that when developing new features, there aren’t any direct metrics to compare. Since Jessica’s example was a redesign, they could directly compare behavior of the redesign to the original.
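To make the “it takes too long to get significant results” point concrete, here’s a rough back-of-the-envelope estimate of how many visitors an A/B test needs. This is a standard two-proportion power approximation, not anything from Jessica’s talk, and the baseline rate and lift are made-up numbers:

```python
import math

def sample_size_per_variation(p_base, rel_lift, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variation to detect a relative lift
    in a conversion rate at 5% significance and 80% power."""
    p_new = p_base * (1 + rel_lift)
    delta = p_new - p_base
    # Sum of the binomial variances of the two observed proportions
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((alpha_z + power_z) ** 2 * variance / delta ** 2)

# Detecting a 5% relative lift on a hypothetical 10% baseline conversion rate
print(sample_size_per_variation(0.10, 0.05))  # tens of thousands per variation
```

At low traffic, accumulating that many visitors per variation can easily take months, which matches the timeline Jessica described.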
Designing for Startup Problems (Braden Kowitz)
Braden’s talk was solid, as usual, but since I’ve seen him talk before and read his blog I didn’t get much new out of it. His talk was about how Design (and therefore, designers) can help build a great company (beyond just UIs). Most companies think of design at the “surface level”, i.e. visual design, logos, etc., but at its core design is about product and process and problem solving. Design can help at the highest levels.
4 Skills Designers Need:
Know where to dig
Map the problem
Identify the riskiest part (e.g. does anyone need this product or feature at all?)
Design to answer that question. Find the cheapest/simplest/fastest thing you can create to answer this question (fake as much as you can to avoid building a fully working artifact)
Prototype as quickly as possible (colors, polish, etc., aren’t important)
Focus on the most important part, e.g. the user flow, layout, copy, etc. Use templates/libraries to save time
Use deadlines (it’s really easy to polish a prototype forever)
Pump in fresh data
Your brain fills in gaps in data, so have regular research and testing (reinforces Portigal’s points nicely)
Take big leaps
Combine the above 3 steps to generate innovative solutions to problems
Accomplish Big Goals with Objective & Key Results (Christina Wodtke)
This was an illuminating talk about the right way to create and evaluate OKRs. I didn’t hear much I hadn’t already heard (we use OKRs at Optimizely and have discussed best practices). But to recap:
Objective == Your Dream, Your Goal. It’s hard. It’s qualitative.
Key Results == How you know you reached your goal. They’re quantitative. They’re measurable. They’re not tasks (it’s something you can put on a dashboard and measure over time, e.g. sales numbers, adoption, etc.).
Focus: only have ONE Objective at a time, and measure it with 3 Key Results. (She didn’t talk about how to scale this as companies get bigger. I wish she did).
Measure it throughout the quarter so you can know how you’re tracking. Don’t wait until the end of the quarter.
Thought Experiments for Data-Driven Design (Aviva Rosenstein)
This was an illuminating talk about the right way to incorporate data into the decision making process. You need to find a balance between researching/measuring to death, and making a decision. She used DocuSign’s feedback button as a good example of this.
Don’t research to death — try something and measure the result (but make an educated guess).
DocuSign tried to roll their own “Feedback” button (rather than using an existing service). They gave the user a text box to enter feedback, and submitting it sent it to an email alias (not stored anywhere; not categorized at all).
This approach became a data deluge
There was no owner of the feedback
Users entered all kinds of stuff in that box that shouldn’t have gone there (e.g. asking for product help). People use the path of least resistance to get what they want. (I experienced this with the feedback button in the Preview tool)
Data should lead to insight (via analysis and interpretation)
Collecting feedback by itself has no ROI (it can even be negative: if people feel their feedback is being ignored, they get upset)
Aviva’s goal: find a feedback mechanism that’s actually useful
Other feedback approaches:
Phone/email conversation (inefficient, hard to track)
Social media (same as above; biased)
Ad hoc survey/focus groups (not systematic; creating good surveys is time consuming)
A useful feedback mechanism is:
Valid: trustworthy and unbiased
Delivery: goes to the right person/people
Retention: increase overall company knowledge; make it available when needed
Usable: can’t pollute customer feedback
Scalable: easy to implement
Contextual: gather feedback in context of use
They changed their feedback mechanism slightly by asking users to bucket the feedback first (e.g. “Billing problems”, “Positive feedback”, etc.), then emailed it to different places. This made it more actionable.
Doesn’t need to be “Ready -> Fire -> Aim”: we can use data and the double diamond approach to inform the problem, and make our best guess.
This limits collateral damage from not aiming. A poorly aimed guess can mar the user experience, which users don’t easily forget.
Growing Your Userbase with Better Onboarding (Samuel Hulick)
This was one of my favorite talks of the day (and not only because Samuel gave Optimizely a sweet shout out). I didn’t learn a ton new from it, but Samuel is an entertaining speaker. His pitch is basically that the first run experience is important, and needs to be thought about at the start of developing a product (not tacked on right before launch).
“Onboarding” is often just overlaying a UI with coach’s marks. But there’s very little utility in this.
Product design tends to focus on the “flying” state, once someone is using a system. Empty states, and new user experiences, are often tacked on.
You need to start design with where the users start
Show a single, action-oriented tooltip at a time (Optimizely was his example of choice here!)
Ask for signup when there’s something to lose (e.g. after you’ve already created an experiment)
Assume guided tours will be skipped, i.e. don’t rely on them to make your product usable
Use completion meters to get people fully using a product
Keep in mind that users don’t buy products, they buy better versions of themselves (Mario + fire flower), and use this as the driving force to get people fully engaged with your product
Provide positive reinforcement when they complete steps! (Emails can help push them along)
Fostering Effective Collaboration in a Global Environment (PJ McCormick)
PJ’s talk was just as good this year as it was last year. He gave lots of great tips for increasing collaboration and trust among teams (especially the engineering and design teams), which is also a topic that has been on my mind recently.
His UX team designs retail pages (e.g. discover new music page). In one case, he presented work to the stakeholders and dev team, who then essentially killed the project. What went wrong? Essentially, it was a breakdown of communication and he didn’t include the dev team early enough.
Steps to increasing collaboration:
Be accessible and transparent
Put work up on public walls so everyone can see progress (this is something I want to do more of)
Get comfortable showing work in progress
Demystify the black box of design
Listen to stakeholders’ and outside team members’ opinions and feedback (you don’t have to incorporate it, but make sure they know they’re being heard)
Be a Person
On this project, the communication was primarily through email or bug tracking, which lacks tone of voice, body language, etc.
There was no real dialog. Talk more face to face, or over phone. (I have learned this again and again, and regularly walk over to someone to hash things out at the first hint of contention in an email chain. It’s both faster to resolve and maintains good relations among team members)
Work with people, not at them
He should have included stakeholders and outside team members in the design process.
Show them the wall; show UX studies; show work in progress
Help teach people what design is (this is hard. I want to get better at this)
A question came up about distributed teams, since much of his advice hinges on face to face communication. I’ve been struggling with this (actually, the whole company has), and his recommendations are in line with what we’ve been trying: use a webcam + video chat to show walls (awkward; not as effective as in person), and take pictures/digitize artifacts to share with people (has the side effect of being archived for future, but introduces the problem of discoverability).
And that’s all! (Actually, I missed the last talk…). Overall, a great conference that I intend to go back to next year.
On November 18th, 2014, Optimizely officially released version 1.0 of our iOS SDK and a new mobile editing experience. As the lead designer of this project, I’m extremely proud of the progress we’ve made. This is just the beginning — there’s a lot more work to come! Check out the product video below:
Stay tuned for future posts about the design process.
When asked, most of us would say we’d prefer more options to choose from, rather than fewer. We want to make the best possible choice, so more options should increase the likelihood we’ll choose correctly. But in actuality, research shows that more choice usually leads to worse decisions, or the abandonment of choice altogether. In this post, I will describe how we can use this knowledge to generate A/B test ideas.
More choices are more mentally taxing to compare and evaluate, leading to cognitive overload and a decrease in decision making skills. Anecdotally, it’s the experience of walking into a supermarket to buy toothpaste, only to be confronted by an endless wall of brands and specialized types that all seem roughly the same. You’re quickly overwhelmed, and with no distinguishing characteristics to help you choose, you just grab whatever you bought last time and get the hell out of there.
This common experience was formally studied by Iyengar and Lepper (pdf) (2000), who compared buying rates when shoppers were presented 24 jams to sample, versus just 6. They found when 24 jams were available, only 3% of people bought a jar. But when only 6 jams were available, 30% bought a jar. By providing fewer jams to sample, it was easier for shoppers to compare them to each other and make a decision.
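As a rough sketch of why the jam result is so convincing, here’s a two-proportion z-test on hypothetical visitor counts (100 shoppers per condition is my assumption for illustration, not the paper’s actual sample sizes):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-proportion z-test: returns the z statistic for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts in the spirit of the jam study:
# 30 of 100 shoppers bought with 6 jams vs. 3 of 100 with 24 jams.
z = two_prop_ztest(30, 100, 3, 100)
print(round(z, 2))  # far beyond the 1.96 threshold for 5% significance
```

Even with these modest made-up sample sizes, a 30% vs. 3% split is wildly unlikely to be noise, which is why the study is cited so often.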
Generating Test Ideas
From these findings you can apply a simple rule to your site or mobile app to generate test ideas: any time a user has to make a choice (e.g. deciding which product to buy; clicking a navigation link; etc.), reduce the number of available options. Here are some examples:
Have just one call-to-action. If you have “Sign Up” and “Learn More” buttons, for example, try removing the “Learn More” button. (See below for an example).
Remove navigation items. For example, Amazon has been continually simplifying its homepage by hiding its store categories in favor of search. Shoppers don’t need to think about which category might have their desired item; rather, they just search for it. (For help simplifying your navigation, check out this series of articles on Smashing Magazine).
Try offering fewer products. See if hiding unpopular or similar products increases purchases of the few that remain.
If removing products isn’t feasible, try asking people to make a series of simple choices to narrow down their options. Returning to the toothpaste example, you could ask people to choose a flavor, then a type (whitening, no-fluoride, baking soda, etc.), and present only the toothpastes that match those choices. The key is to make sure your customers understand each facet, and that the answers are distinct and not too numerous (i.e. fewer than 6).
Break up checkout forms into discrete steps.
Remove navigation from checkout funnels. Many eCommerce sites (like Crate&Barrel and Amazon) do this because it leaves one option to the user — completing the purchase (see below).
By removing the main navigation from their checkout flow, Crate&Barrel increased average order value by 2% and revenue per visitor by 1%.
By removing extraneous calls-to-action (“Free Sign Up” and “Go Pro! Free Trial”), SeeClickFix (a service for reporting neighborhood issues) focused users’ attention on the search bar and increased engagement by 8%.
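The faceted-narrowing idea from the list above (choose a flavor, then a type) can be sketched as a simple filter. The catalog, product names, and facet values here are all hypothetical:

```python
# Hypothetical catalog; names and facets are illustrative only.
toothpastes = [
    {"name": "BrightMint", "flavor": "mint", "type": "whitening"},
    {"name": "KidsBerry", "flavor": "berry", "type": "no-fluoride"},
    {"name": "PureMint", "flavor": "mint", "type": "baking soda"},
    {"name": "MintShield", "flavor": "mint", "type": "whitening"},
]

def narrow(products, **facets):
    """Keep only products matching every facet the shopper has chosen."""
    return [p for p in products
            if all(p.get(k) == v for k, v in facets.items())]

# After two simple questions, the shopper compares 2 options instead of 4.
choices = narrow(toothpastes, flavor="mint", type="whitening")
print([p["name"] for p in choices])  # ['BrightMint', 'MintShield']
```

Each question is cheap to answer on its own, so the shopper never has to compare the full wall of options at once.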
Know your audience
Of course, there are times when more choice is better. Broadly speaking, experts typically know what they’re looking for, and are able to evaluate many options because they understand all the distinguishing minutiae. For example, professional tennis players can rapidly narrow down the choice of thousands of racquets to just a few because they understand the difference between different materials, weights, head sizes, lengths, and so on. If you don’t offer what they’re looking for, or make it easy to get to what they want, they’ll look elsewhere. For this reason, it’s important that you understand your audience and cater to their buying habits.
We’re trained from an early age to believe that more choice is always better. But in actuality, more choices are mentally taxing, and lead to poor decision making or the abandonment of choice altogether. By testing the removal or simplification of options, you can increase sales, conversions, and overall customer satisfaction.