
4. Simplest Useful Thing

Paul Rissen, Middlesex, UK

In the previous chapter, we covered a set of principles that should be in the back of your mind during the development of a digital product: the importance of shared understanding among team members; having opinions, but being prepared to adapt them in the face of evidence; embracing uncertainty; and being data informed, not data driven.

But the greatest of these principles is the one we didn’t yet cover, because it deserves its own chapter. The principle of the Simplest Useful Thing. In this chapter, we’ll
  • Compare “Simplest Useful Thing” to its famous cousin, “Minimum Viable Product” (MVP)

  • Identify some of the different ways people understand the MVP concept, allowing you to recognize and prevent time-wasting arguments over a definition

  • Deconstruct the idea of Simplest Useful Thing in order to examine its different aspects

What do we mean by MVP?

Many people reading this chapter will probably be familiar with the concept of “Minimum Viable Product” (MVP). However, if you asked ten people working in a business that developed digital products and services to define what an MVP was, you’d probably end up with at least five, if not more, different definitions or opinions over what it actually means.

Different disciplines tend to have different understandings of the concept, and frankly, if you’re spending time at the beginning (or indeed at any stage) of product development furiously debating among team members whether what has been built, or worse, what might be built, qualifies as an MVP (or even whether an MVP is something you should be aiming to build at all), you’re doing it wrong.

Deconstructing the MVP

The sensible thing to do, therefore, would be to go back to the source material, the original definition of MVP, and examine that, in order to reach a better understanding of what it means. But before we do that, let’s look a little closer at four ways in which people interpret the concept. In doing so, you’ll be better equipped to spot where theoretical debates are in danger of causing time-wasting black holes of disagreement among your team and stakeholders.

Minimum and Viable

In this interpretation, the first two words of the phrase are emphasized. That is, whatever you build merely needs to be functional: something that you can release to users.

On the plus side, it’s good to know that this school of thought would not want to release something that is broken—something that isn’t viable. At least we all agree that giving the user something that doesn’t work is a bad thing.

The downside, however, is that what constitutes “minimum viable” isn’t clear. What might appear functional and obviously working to the team may not be clear, understandable, or usable by the public. If you set your standards that low, where does it end?

Often, I’ve seen this surface in internal debates between user experience specialists, product managers, and developers. The concern is that too much emphasis on delivery ignores the need to research and design solutions which will be acceptable to and used by people in the world. This is valid if the emphasis is on delivery for delivery’s sake—but it’s a key reason why concentrating on the deliverable—the prototype, the code, or the sketch—is misguided.

Concentrate on what you want to learn—get something, anything out there in front of real users, in some form, so that you can gain knowledge. Delivery should be the means by which you research and learn, not the end in itself.

Focusing purely on delivery and functionality also ignores the importance of form and user experience, particularly if no one is thinking about the bigger picture—you might end up releasing hundreds of minimum viable features or products which by themselves do the job but are painful to use and, taken together, add up to a product no one would choose to use.

A “Viable Product”

In this second interpretation, the latter two words are emphasized, concentrating on what constitutes something that will “work,” in the sense that people will choose to use it and, if it’s a commercial product, buy it.

The upside here is clear—by taking the time to think up front about the market that you’re entering, competitor products, user needs, gaps in the market, and so on, you can have a pretty good idea beforehand about what it is that your product needs to do, as a baseline, to at least sell a respectable number.

It’s clearly important to do your homework before, and during, product development. At the end of the day, however, you’ll never know for sure whether something will succeed with users until it’s actually in their hands. Only in real-life circumstances, with real data coursing through its veins, does “market fit” come into its own.

The danger here is that it’s very easy to get stuck in analysis paralysis, endlessly analyzing market trends, debating, and worrying about whether the thing you’re building will be the “killer app” for your market. Nobody knows. There are no guaranteed successes. Treat your market research as an experiment—focus on the question, work out the simplest useful thing you could do to answer the question, and proceed from there.

“Viable” means “Acceptable”—but to whom?

The third interpretation seeks to clarify what is meant by “viable”—and particularly, frames that in terms of what should be acceptable, both to the team and to users. It becomes a question of professional pride, as well as user satisfaction—the former giving this a particularly volatile sting when it comes to team discussions, as it can appear to be attacking the reputation of team members themselves.

This all stems, of course, from a desire to look out for the user—to deliver something that isn’t just functional, but is well designed and fits in to their lives. It contends that there should be certain minimum standards of design which must be met before something can be released.

A noble goal, for sure, though agreeing and clearly defining these standards, let alone the rules around exceptions and edge cases, can be a huge task in and of itself. Should they include accessibility standards? Performance considerations? Almost certainly all of this needs to be considered, but it is best done as a project running alongside product development. Agree what you mean by “viable”—your minimum acceptable principles—as soon as you can.

Product—The Maker’s Privilege

The final interpretation rests upon the word “product” within MVP. Again, the use of this word stems from good intentions—make something, and get it out there into the real world, somehow, so you can gather real feedback. However, the assumption implicit in this is that you must make something.

The insistence that you can only ever learn from designing and making a solution, regardless of fidelity, I’d argue, sets us back into the mindset of solutions and deliverables, over and above asking and answering questions. Sure, a good MVP should help you learn, but I’ve seen too many teams leap to designing a solution, such that any knowledge that arises after the fact is either coincidental or open to so much interpretation that it doesn’t make the decision on future direction any clearer.

Where does this leave us?

It’s clear that “Minimum Viable Product” is a loaded and hotly contested term. The upshot of all this is that while each interpretation (and I’m sure there are plenty of others not outlined here) has valid concerns and interests, they’re also each problematic and, worse still, can distract from figuring out the most useful thing you could do to answer a specific question.

The OG MVP

The term “Minimum Viable Product” appears to have been coined by Frank Robinson, and later popularized along with the idea of the Lean Startup by Eric Ries.1

Robinson defines it in a very economical sense—an MVP is

...that unique product that maximizes return on risk for both the vendor and the customer.

—Frank Robinson, A Proven Methodology to Maximize Return on Risk2

Ries, in contrast, frames it in terms of a team learning:

...the minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

—Eric Ries, Minimum Viable Product: a guide3

Gothelf and Seiden consider it from a designer’s perspective:

…We defined the Minimum Viable Product as the smallest thing you can make to learn whether your hypothesis is valid.

—Jeff Gothelf and Josh Seiden, Lean UX, 2nd Edition4

Note that the first two definitions emphasize maximum return for minimum risk or effort. The definition of “return” is still subjective—you could interpret Robinson’s definition in terms of purely economic return, whereas Ries brings the knowledge being gained to the fore. The third definition reinforces my point that it’s the word “product” which is doing the most damage here, because it concentrates the mind on a “finished” thing, rather than on what’s actually useful, regardless of form, to the team, the stakeholders, or ultimately, the user.

The problem here is that, particularly in an “agile” development world and especially in the realm of experiment-driven product development, it’s too easy to either get stuck in endless debate over what constitutes the minimum viable product or to use the idea of an MVP as a safety net to continue thinking only of delivery, not learning.

Admittedly, this is a worst-case scenario—and there are some minimum standards such as accessibility and performance that I’d want to defend wholeheartedly—but if you’re setting a team up to approach product development in an experiment-driven way, it’s almost as if you need to set the whole idea of MVP aside.

This doesn’t give you an excuse to chuck inaccessible, badly designed, ill-conceived ideas out to the market that have no hope of surviving. Deciding on what you, as a team, hold as your minimum acceptable level of work, as well as making sure you do your homework and understand your users and the market, is a worthwhile and important thing to do.

It’s not that MVP is a terrible idea. But if you approach product development in a humble way, acknowledging that even with your own acceptable standards of work you don’t know what will succeed, or indeed what you will learn as you proceed, then it’s simply not a helpful way of framing things.

Which leads us on to…

Defining Simplest Useful Thing

Just like MVP, Simplest Useful Thing boils down to three elements, and defining what we mean by each is crucial. When undertaking a piece of work, it’s important to reach agreement among the team, and to communicate clearly to your stakeholders, how you’re defining each element for that particular piece of work. So, let’s take each word in turn.

Simplest, rather than minimum

Wherever possible, you want to ensure that what you do, in terms of the amount of effort and complexity, both for the team and for users, is simple. During the development process, you should always be asking—is this the simplest thing we could do and learn from? If you are building something, then ask—is this the simplest thing for a user to understand?

Simple doesn’t always mean the first solution that comes to mind—that can imply a lack of care in the work produced. What’s important is that the way you test a hypothesis or answer a question gives you a valid answer you can learn from, without overcomplicating things.

The minute you find it’s taking weeks just to get an experiment up and running—be that organizing user research or building a feature to put in front of users—you really have to ask: is this really the simplest thing we could do and learn from?

Sometimes, rather than building a prototype of a feature for an A/B test, it would be quicker, cheaper, and simpler to go and do some user research, or, if you have a dataset already, analyze what you have. Any activity that will give you a useful answer, one that you can learn from, is fine. Remember, experiments are just a way of asking and answering questions—they don’t always have to involve A/B or multivariate tests. Consider what you already have to hand, be that access to APIs, datasets, or tooling: what can you reuse to help you answer the question?
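For instance, if your product analytics already logs feature usage, a few lines of analysis may answer a question before anything new gets built. Here’s a minimal sketch in Python with pandas; the file name and column names are hypothetical stand-ins for whatever your own tooling produces.

# A minimal sketch: reusing an existing analytics export rather than
# building anything new. File and column names are illustrative.
import pandas as pd

events = pd.read_csv("analytics_export.csv")

# Question: which features do users actually touch, and how many users each?
usage = (
    events.groupby("feature_name")["user_id"]
          .nunique()                      # distinct users per feature
          .sort_values(ascending=False)
)
print(usage)

If the data you already hold answers the question, the experiment is done before it has even started.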

If it does turn out that building a version of a feature is the simplest way that you could get a useful answer, then it’s still worth asking, what is the simplest version of this feature we could build, in order to get a useful answer?

Let’s take an example from a project I consulted on. A newsletter product might wonder whether their audience would like to receive the newsletter at different times of the day, to suit their tastes. Perhaps they have users around the world, and the current setup sends the emails out at 9am UK time, regardless of each user’s local time.

One way of testing this might be to build a feature that sent out the newsletters at different times of day and see which got the most engagement, using measures like open rate and/or click-through to content from the email itself. But building a system that takes into account time zones, or randomly sends emails out, isn’t exactly a simple thing to do.

Instead, think about the simplest way you could answer the question. How might we ascertain whether there is even any demand for such a feature? Before you spend time and effort building the functionality, why not put a link in the emails as they stand, saying something like “Want the newsletter delivered at a different time?”, and see how many people click it? You wouldn’t even need a huge sample size to get an initial idea of whether it was a good idea or not.
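Reading off the result doesn’t need anything elaborate either. Here’s a minimal sketch in Python, using statsmodels to put a confidence interval around the click-through rate; the counts below are made-up numbers, purely for illustration.

# A minimal sketch of interpreting the link test. The counts are hypothetical.
from statsmodels.stats.proportion import proportion_confint

recipients = 5000   # emails sent containing the link (made-up)
clicks = 240        # recipients who clicked it (made-up)

rate = clicks / recipients
low, high = proportion_confint(clicks, recipients, alpha=0.05, method="wilson")
print(f"Click-through: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")

# If even the lower bound of that interval looks like real demand, you have
# a useful answer without having built any scheduling functionality at all.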

There are parallels here to the “minimum” in MVP, but the key difference is that “minimum” suggests an agreed threshold of acceptability, which, as we’ve discussed, is often more complex to define than it first appears, and can leave you mired in weeks of endless debate and development as you struggle to get sign-off from all parties. Your standards for acceptability should be agreed at a team level, rather than per experiment.

In contrast, “simple” keeps you focused on the method rather than the effort or the final deliverable. It’s still important to ensure what you do will give you a valid answer from which to learn. It’s also important to ensure that whatever you do, the potential for causing harm to your current users, or product more generally, is minimized as far as possible. Keeping things simple keeps the risk small—but still allows you to take risks.

Remember!

By simple, we mean the simplest way of answering the question—which won’t necessarily be the first thing that comes to mind.

Useful, rather than viable

As we’ve seen, “viable” is another of those contentious words. “Useful,” however, is anything but. You don’t know, up front, whether an idea is going to be viable. What you can ensure is that the thing you do is useful. It can be useful for you as a team—useful to learn from, useful to clarify, useful to gain knowledge—or it can be useful for the user. But it has to be at least one of these.

Knowing ahead of time that something is going to be useful for a user is just as tricky as knowing whether something is going to be viable. This is why it’s so important to reframe the work of product development in terms of questions and premises: you may believe something will or won’t be useful for a user, so go and find out. Even if it doesn’t turn out to be useful for the user, running the experiment and answering the question will still have been useful for the team as a whole.

What’s important here is to explicitly connect the work you do to how it is useful—be that for the team or for the user. Stating clearly what a piece of work will allow you to learn, and, after the experiment, what you did in fact learn, will help both those inside and outside the team understand the utility of the work you’re doing. In turn, this will help you avoid wasting time on pieces of work that are black holes, and inspire new questions and new experiments to run.

From a user-focused point of view, it’s worth thinking, too, about simplicity and usefulness together. What is the simplest, most useful thing that we could provide to users—something that would solve a problem, or do a job for them, with the minimum of fuss? When we talk about innovation, as Jared Spool has noted, really we’re talking about providing users with something that genuinely helps them—even a small tool can help them dramatically.5 If it fills a need, solves a problem, or saves time, it can be as simple as you like.

Finally, the concept of “utility” here also includes how useful the answer to an experiment can be. This really boils down to how meaningful the answer is—what you can extrapolate from it—and this, in turn, comes down to the questions of scale which we’ll be discussing in Chapter 7.

Given the size of the population you have access to, be that in terms of advertising, data analysis, user research, or actual users of your product or service, how many people would need to be included in the experiment for you to get a useful answer, one that can be relied upon to inform your next decision? And if you’d need more people than you could reach in a reasonable amount of time, working out the certainty of the answer the experiment can give you will help you decide whether it’s worth running in the first place.
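To make this concrete, here’s a minimal sketch of the kind of back-of-the-envelope calculation involved, using the standard normal approximation for comparing two proportions; the baseline and uplift figures are hypothetical.

# A minimal sketch of a sample size check for a two-group experiment,
# using the standard normal approximation. All figures are hypothetical.
from scipy.stats import norm

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """People needed in each group to detect a change from rate p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2) + 1

# e.g. detecting an open-rate change from 20% to 22%
print(sample_size_per_group(0.20, 0.22), "people per group")

If that number exceeds the audience you could reach in a reasonable time, the experiment as designed cannot give you a useful answer; better to know that before you run it.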

Remember!

An experiment can be useful to the team, useful to the user, or both. The former means that the answer has to be legitimate enough to provide a way forward. The latter means that whatever results from the experiment should have utility for users.

Thing, not product

An MVP focuses a product development team on getting something out the door. This isn’t a bad thing, by any stretch of the imagination, but unless you’re making something physical, even the word “product” has a very woolly definition. Does it mean you can’t release a single feature, that it has to be a whole “product” release? Where do user research and data analysis fit in here, never mind all the other activities that a product development team undertakes? Why does everything have to be focused on building something?

Now, don’t get me wrong, “shipping” something (and I don’t mean fan fiction!) is a worthy cause—getting something into the hands of your users, so they can benefit from your hard work, and you can learn from how people actually use the thing you’ve made, is definitely a good thing. But the obsession with “product” means that the hard work of the other disciplines, aside from software development, can be ignored, shunted into some form of “sprint zero,” or worse, treated in a waterfall manner—do it once, then forget about it.

At the end of the day, as a product manager, I don’t really care whether we’ve shipped something or not—I care that we’ve produced something, that we’ve completed something, or rather, that we’ve learned something. There’s going to be days where everyone is in the middle of something, and there isn’t a clear output yet, and that’s fine, but that work should always be guided by an end goal—a state by which you’d be able to say “as a result of doing X, we learned Y.”6

Equally, you should acknowledge that the developers on your team, whether they are building prototypes or parts of the “real” product, are just as much engaged in the practice of experimental research as anyone else. Not everyone has to express themselves in sticky notes and felt tips.

We’ll cover more of how you can clearly communicate your end goals, and achievements, to stakeholders in the final section of this book, but for now, the key takeaway here is that despite the name, a product development team doesn’t only develop products, and it’s time to make that official, and accepted.

Remember!

Don’t get hung up on the need to deliver a solution for every experiment. Instead, do the simplest thing that can get a meaningful answer, ideally by interacting with real users.

Summary

In this chapter, we’ve examined the concept of “Simplest Useful Thing” and compared it to the more often discussed concept of “Minimum Viable Product.”

Minimum viable product may be something that you’re building toward. If you’re going to use that terminology, make sure everyone agrees on the criteria for “minimum” and “viable,” and bear in mind that, once released, what you’ve built may not turn out to be viable after all.

In the process of building toward the MVP, however, I’d propose it’s more useful to use Simplest Useful Thing as your guiding light on a day-to-day basis. Always be asking: what is the simplest useful thing we could do next that would allow us to learn, take us closer to where we think we want to go, and, ideally, be simple and useful for our users, no matter what form it takes?

Most importantly, remember
  • Concentrate on gaining knowledge—testing your premises, answering questions, rather than delivering product and feature ideas for the sake of it.

  • Always be asking—is this the simplest thing we could do that would answer our question?

  • Is this the simplest, most useful thing we could provide for our users?

  • Do we have anything to hand already, which will allow us to answer this question in a simple, useful way?

  • How useful will the answer be, given the constraints of the population available to us?

And that’s it! We’ve now covered all the principles and core tenets of philosophy which form the bedrock of the experiment-driven product development approach. Armed with all of this, we’re ready to get started.

In Part 2 of the book, we’ll cover the process—how to generate ideas for experiments, how to define them, and what to bear in mind when setting them up to run. And in the final part, we’ll go over what to do in the aftermath of experiments and how to communicate progress.
