The document discusses how to ship imperfect products by taking a scientific approach of continuous testing, feedback gathering, and validation. It recommends shipping a minimum viable product (MVP) early instead of waiting for perfection. It then outlines steps to be a better scientist when building products, including testing products you didn't build, testing ideas as early as possible, gathering feedback objectively without leading questions, and validating changes each time something is built. The overall message is that taking a scientific approach of continuous improvement leads to better products than only shipping something once it's perfect.
Thank you!
Lauren Gilchrist @lgilchrist
Lovingly illustrated by Linda Joy @ljoy
Editor's notes
Hi, I’m Lauren
Let’s get started.
I’m here today to talk to you about how to get comfortable shipping imperfect products.
But before I talk about how to get comfortable, I want to first talk about why we need to get comfortable at all
Let’s start with a universal truth.
Shipping a product is terrifying.
No matter how many times you have done it before, releasing a new product into the world is a nerve-wracking experience.
But what makes it so scary?
Well, for one thing, it means our reputation is in the spotlight.
When we build a product, we do it behind closed doors. We tinker, we adjust. But no one can see what we’re doing.
But once we release that product into the world, we are not behind closed doors anymore.
By releasing the product, we’re allowing it to be judged.
But by extension, that also means that our reputation is on the line, and our reputation can be judged.
That can be a tough pill to swallow.
Furthermore... it’s even harder to ship a product when you’ve got an established brand.
Because let’s face it, when an established brand launches a new product, the stakes are pretty high.
The new product represents a statement about the future of the brand.
But what if that product jeopardizes years of your brand’s reputation?
But while putting our personal or brand reputation on the line is intimidating, the root of the matter is that we’re scared to get it wrong.
This fear makes us add just one more feature to our MVP or one more concession to our users.
This fear makes us delay shipping by a week, a month, or even longer.
This fear turns our MVP from a six week product into a six year product.
When we launch a product, we are really scared of getting it wrong.
We feel compelled to get it right the first time.
Because that’s what experts do. Experts get it right the first time. That’s what we hear on TechCrunch, anyway.
We are all here today because we are experts in something.
But we are also here today because we want to learn how to get products to market faster, reduce risk, and validate our findings with real users.
That’s quite the juxtaposition, when you think about it.
How do you reconcile being an expert, and needing to be right, with being lean and shipping things that aren’t perfect?
Well, I can tell you how I’ve learned how to get more comfortable.
And that was learning how to be a scientist, instead of an expert.
As an expert, you need to rely on your experience and your intuition.
But most importantly, you need to be right. After all, your expertise, your reputation, and your brand are all on the line.
Scientists, on the other hand, don’t have to be right the first time.
In fact, they don’t ever need to be right.
Instead, scientists need to prove objectively that their hypothesis is true or false.
Success, for a scientist, is getting a concrete answer, regardless if that answer is true or false.
Because whether the experiment is true, or false, scientists have learned something.
I’d like to tell you a story of how being a scientist helped me get over my fear of shipping an MVP that wasn’t perfect.
After that, I’ll give you some tips on how you can be a better scientist, and get more comfortable shipping imperfect products.
But first, the story of how being a scientist helped me get over my fear of shipping an MVP that wasn’t perfect.
I’ll also tell you how science helped improve our product after we launched.
I got tasked to build a content management system for a team of video producers.
These producers create content that gets overlaid on a streaming video. Their content is complementary, so if the video segment is about Derek Jeter, then the content would be a tweet about his retirement, or an article about the new Yankees shortstop.
The content and the video ended up in an app, where you could stream live video and see related content at the same time. Pretty cool stuff.
So let’s get back to the content management piece.
The first thing we did was interview our users.
In this case it was the video producers who were going to be using this system.
We learned they were very anxious about the system we were building.
When they create content, they need to stay “ahead” of the video, because the video program is filming live.
They were very afraid that the system we were building would be too slow, or too cumbersome, and that they would not be able to stay ahead of the live broadcast.
We captured that sentiment in our notes, headed back to the office, and got down to work.
We knew that our users, the producers, needed to be able to create, edit, and delete content.
We decided these were our must-have features for the producers to be able to do their jobs.
So we began to build working software.
Our MVP was a very basic application that let you create, edit, and delete content.
We had very minimal design.
And, maybe most notably, it didn’t have features that helped a user create content faster.
We built it in about five weeks. We were ecstatic that we had built it so fast.
And just around week four, something happened.
I panicked.
I knew that the producers needed to stay ahead of the show
But our MVP had no specific features that would help them create content faster.
When we had interviewed them, they had suggested things like drag and drop, and keyboard shortcuts to help them save time when creating content.
But we didn’t build any of that. We just built a simple, very basic app. It was an MVP in the truest sense.
What if the producers couldn’t stay ahead of the show using this MVP?
What would that mean for my reputation as a consultant, and as a product manager?
What would they think of me if they couldn’t do their jobs using what I had built?
What if I didn’t get it right?
Does this sound familiar?
I had let the expert take over.
I realized that, while the producers had expressed a fear that our MVP wouldn’t let them stay ahead of the show, we hadn’t actually proved that that was the case.
We hadn’t proved that the MVP didn’t meet their needs.
In fact, we hadn’t proved anything. They hadn’t used the software yet!
It was the expert inside me who was scared that our MVP wasn’t enough.
It was the expert inside me who was scared that the producers couldn’t get the job done unless we built them more features.
This expert made me solve the problem before it existed.
And just as we set out to build more features, I realized something.
The expert in me had taken over.
But I needed to be a scientist.
I needed to validate, objectively, whether or not the producers could stay ahead of the show using the MVP we had already built.
I needed to prove, objectively, whether our hypothesis was true or false.
So we gathered the producers in a room and asked them to create a full eight hours worth of content using our MVP.
And guess what?
The producers were able to stay ahead of the show the entire time.
They popped a bottle of champagne at the end of the day. They were ecstatic.
We were ecstatic too. We had proved, objectively, that our MVP was good enough for them to get the job done.
We had scientifically proven that we had solved for need-to-have.
But we still had work to do.
When the producers used our MVP for the first time, we actually stuck a video camera in the room with them.
We recorded how long it took them to complete specific tasks.
We recorded the problems they encountered, and how frequently they encountered them.
By the end of the session, we had a list of the biggest and most frequent problems in our MVP.
We had a benchmark for improvement.
This was a great starting point. This meant we could start solving nice-to-have, scientifically.
So we set to work to build features that improved the speed at which users could create content.
We delivered these features in about 10 days.
Then, we asked the users to create another 8 hours of content while we recorded their actions.
Not only did our new features save them a full hour a day in creation time, but saving that time allowed them to delay hiring another producer for the rest of the year.
So let’s recap.
I shipped an imperfect product.
I got scared that the product wasn’t perfect.
I then reminded myself to be a scientist.
Being a scientist let me objectively prove that our MVP was successful, and that the improvements we made added value.
Being a scientist let me objectively prove that I had solved for need-to-have, and that I had solved for the most important nice-to-have.
So that’s how I learned that being a scientist was how I could get comfortable shipping an imperfect product.
Next, I’d like to share some tips on how you can be a better scientist.
Scientists are not scared to ship imperfect products.
Good scientists practice a number of habits to make sure they can prove that their product is solving a problem.
I’m going to teach you some tips on how to be a better scientist.
The first tip is to test, test, test.
I’ve never met a boss who doesn’t want to reduce risk.
Testing is the way to reduce risk.
Testing smaller pieces of your product before you ship it reduces the unvalidated assumptions in your product.
If you have tested along the way, by the time you ship, there won’t be any uncertainty left.
And therefore you won’t be scared to ship.
So how do you make sure you’re testing well, like a scientific boss?
User testing and user interviews are not skills that come naturally.
It can be very awkward to interview someone about a product you have invested time or money in.
You tend to apologize or make excuses for what’s not there.
Oh sorry, that’s supposed to do this! Oh don’t click there, sorry, click here!
So to make sure you can conduct objective testing on your own product, you need to practice first on something you’re not emotionally invested in.
Demo Wikipedia to a friend. Do a usability test on Uber.
Whatever it is, practice the craft on something you’re not emotionally invested in.
That way, when you need to test your own product, you’ll already be an expert tester.
Experts are hesitant to show anything to anyone until it is absolutely perfect.
Scientists test as soon as they can communicate an idea.
Put your unfinished work in front of people to get feedback faster.
If you have a napkin sketch, test if someone can explain it to their friend.
If you have a wireframe, test to see if the workflow is sane.
If you have a clickable mockup, test to see if it’s usable.
In any case, testing work before it’s perfect lets you catch problems faster.
So be a good scientist, and test as soon as you can communicate your idea.
Scientists test regularly.
Testing regularly actually enforces testing as soon as you can communicate an idea.
At Pivotal Labs, where I work as a Product Manager, we host something called Think Aloud Thursdays.
We bring in 3 users for 30 minutes every single Thursday.
We source participants on Craigslist and schedule them via TaskRabbit.
Anyone in the office can sign up to get fresh eyes on their product.
This gives us a huge advantage -- it means only 4 business days can go by between testing sessions.
It gets us into the habit of saying, I don’t know, but I can find out really quickly.
So make sure you are testing regularly.
So let’s recap -
Scientists practice testing on things that aren’t theirs
They test with incomplete and imperfect work
And they test regularly
And they do all of this to pay down risk before they ship a product
Now that you know how and when to test, let’s talk about the mechanics of capturing results of those tests.
Scientists need to gather feedback objectively when they do user testing or validation.
Capturing data objectively can be hard, especially with a product you are invested in.
So here are a few ways to remember to be a better scientist and gather feedback objectively.
The first tip is to not pre-empt your answers with leading questions.
There’s a great book called The Mom Test by Rob Fitzpatrick.
In it, Rob talks about the difference between two scenarios.
Hey mom, I just spent 6 months building an iPad app that’s a cookbook. Would you pay $20 for it?
Of course she’ll say yes, she’s your mom. She loves you. You worked hard on this!
But contrast that with this scenario.
Hey mom, what was the last cookbook app you bought for your iPad? Oh, you’ve never bought one? Can you tell me why?
The point is that in the second scenario, your mom just invalidated six months of your work. She insulted you. But she didn’t know it. Because you didn’t ask a leading question.
Rob goes on to suggest that people are much more honest when discussing their past actions and what motivated those actions than they are discussing hypothetical situations.
Turns out, no one wants to call your baby ugly.
So make sure you don’t ask leading questions when you do user testing or user interviews.
Which brings me to my next point...
Make sure you record users’ reactions verbatim.
There’s a huge difference between the emotion a user expresses the first time they encounter a problem in your product
and your impression of that emotion three weeks after the fact.
Make sure you’re capturing feedback verbatim.
Because it keeps you honest and keeps you objective.
I like to record each piece of feedback on an individual Post-it note. I put them all on a big whiteboard and I group them by theme.
This exercise is called affinity mapping, and it brings me to my next point.
When we do user testing, it’s very tempting to solve the most recent problem you’ve heard.
You’ve just hung up with Bobby, and he had a really hard time finding the save button. So you run back to your team and say: stop everything, we need to fix the save button.
But scientists need to be more objective than that.
One of the advantages to methods like affinity mapping is it literally lets you see problematic parts of your product by post-it note volume.
You then have a clear list of the biggest priorities.
Another way to prioritize more objectively is to make a list of problems you think you are likely to run into when doing user testing.
You can then count the number of unique instances those problems occur during user testing.
I guarantee you’ll find a few surprises!
This will let you say, for example, that 7 of 8 people missed the save button, but only 1 out of 8 people had trouble getting back to the homepage.
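Counting unique instances per participant can be sketched in a few lines. This is just an illustrative sketch, not part of the talk: the observations, participant IDs, and problem labels are all hypothetical, and it assumes you’ve captured feedback as one (participant, problem) pair per observation.

```python
from collections import Counter

# Hypothetical verbatim feedback from a testing session:
# one (participant, problem) pair per observation.
observations = [
    ("p1", "missed save button"), ("p2", "missed save button"),
    ("p3", "missed save button"), ("p4", "missed save button"),
    ("p5", "missed save button"), ("p6", "missed save button"),
    ("p7", "missed save button"),
    ("p3", "couldn't find homepage"),
]

# Deduplicate first, so each participant counts at most once per problem
# and one chatty participant can't skew the priorities.
by_problem = Counter(problem for _participant, problem in set(observations))

# most_common() gives the priority list, biggest problem first.
for problem, count in by_problem.most_common():
    print(f"{count} of 8 participants: {problem}")
```

Sorting by count is the same prioritization the Post-it wall gives you, just with the tally done for you.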
So by not asking leading questions,
Recording data objectively
And prioritizing by frequency
You can scientifically determine the most troublesome and frequent problems that users are having with your product.
So now you’ve learned how to use science to objectively test your product, record feedback, and determine what to work on next.
Your next step is to go back to the drawing board and build something that fixes your most important or most frequent problems.
So my last tip deals with making sure that what you’re building actually fixes the problem.
Experts want to build to solve problems that don’t yet exist.
Scientists build to solve the most important and most frequent problems.
But after they build, scientists need to prove that they have actually solved the problem.
This is why it is SO important to validate every time you build.
Experts tend to follow this pattern – they learn about a problem, and build, and build, and build until they feel they have a perfect solution.
It’s natural. Their expertise lies in their ability to anticipate user problems and solve them before they happen.
But experts are often solving problems pre-emptively. They build, and build, and build, but they forget to validate.
And that just increases the risk that your product is accumulating unvalidated assumptions.
This was how it was with my MVP. I wanted to keep building features to help the users go faster, when really I needed to just learn if the MVP was good enough.
Scientists, on the other hand, validate every time they build.
So don’t break the build-measure-learn cycle.
Don’t skip the learning.
And here’s another reason not to skip the learning when you build:
Designers often talk about products as having different stages -- functional, usable, and delightful.
But you can’t validate that something is delightful unless you have validated that it is functional.
Furthermore, when you try to solve for delightful at the same time as solving for functional, you often confuse users as to what the function is in the first place.
So remember that you can’t validate it all at once.
This is why scientists believe in the build measure learn cycle.
We can scientifically determine, piecemeal, whether or not we have solved for functional, then usable, then delightful.
Which brings me to my last point
When I shipped my MVP, I thought I knew that users were going to say creating content was slow.
And when we tested our MVP with them, that was their biggest complaint - that it was slow to create content.
But there’s an important lesson in here.
Learning what I thought I was going to learn isn’t failure.
It’s validation.
It gave me objective and scientific proof that speed was the top problem I needed to solve next for my users after we’d built the MVP.
And that’s the kind of proof I’d like to build my reputation on.
So even if you think you know what you’re going to hear, test anyway. It’s the scientific thing to do.
So let’s recap.
Scientists validate every time they build.
They don’t skip the learning
They don’t validate it all at once
And they test even if they think they know the answers.
That’s all the tips I have for you today.
Let’s go over them one more time:
Remember, you can be a better scientist by:
Testing like a boss
Gathering feedback objectively
And validating every time you build
If there’s one thing I’d like you to take away today, it’s that we can all be scientists.
Scientists aren’t scared to ship an imperfect product.
In taking steps to be a better scientist, you can pay down the risk of your product before you ship it, and prove objectively that you are adding value for your users and solving their problems.
We can all be scientists.
And if you feel the need to let that expert creep in, well, maybe you should become an expert in applying science to your product development process.
Thank you.