© Paul Rissen 2019
P. Rissen, Experiment-Driven Product Development, https://doi.org/10.1007/978-1-4842-5528-5_1

1. What Is Experiment-Driven Product Development?

Paul Rissen, Middlesex, UK

Experiment-driven product development (XDPD) is a way of approaching the product design and development process so that research, discovery, and learning—asking questions and getting useful, reliable answers—are prioritized over designing, and then validating, solutions.

In the XDPD framework, we use the concept of an experiment to frame almost everything the team does. All kinds of activities, from data analysis, through user research, to writing software, are seen through the lens of research. In this chapter, we’ll cover
  • What’s involved in the process

  • What we mean by “an experiment”

  • How XDPD relates to the “Lean” approach

  • Why you should consider trying the XDPD approach

  • How to get the best out of XDPD in a multi-team environment

How does XDPD work?

Asking questions, and using structured methods and tools to answer them, is not a revolutionary idea in and of itself. Scientists and researchers have been using what’s known as “the scientific method” for centuries, as a way of seeking knowledge. What is at least slightly novel, however, is the application of this approach to encompass the majority of activities a modern digital product design and development team engages in.

Let’s take a look at the basic process that a team using the XDPD framework would go through (Figure 1-1).
Figure 1-1. The basic process of experiment-driven product development

We kick off by uncovering premises. These premises can be based on prior knowledge, assumptions, feature ideas, or claims being made about the product or its users. These provide our “fuel” for experiments, and the next step is to take this fuel and turn premises into questions.

Questions form the absolute bedrock of the XDPD approach, and pretty much everything else stems from them. Once we have a question we want to answer, we can formulate a hypothesis—a statement that sums up what we believe the answer might be—and proceed to design our experiment in a way which will test this hypothesis.

We choose a method, or form in which the experiment will manifest, run the experiment, and then dig into the results. By analyzing the results, we should uncover an answer, along with other interesting pieces of knowledge that can inform future experiments. Finally, the answer that is revealed should help us make a decision: what to do next, whether to invest more time in something, or whether to abandon that line of inquiry.

Parts 2 and 3 of this book will go into much more detail on each of the stages outlined above, but you should now have a fair idea of how the basic process works. Perhaps this is completely new to you, perhaps not. The crucial thing to consider here is how you might apply this to the different activities that a team takes part in. That brings us on to the next question—what do we mean by “an experiment,” anyway?

What do we mean by “an experiment”?

What do you think of when you hear the term “experiment”? It’s likely to be one of two things.

Misconception #1: Experiment = A/B test

Firstly, most people in the realm of technology think of A/B tests—where one set of users is exposed to the existing, “control” version of a product and another set of users is exposed to a slightly different, experimental version. Their reactions are measured, usually according to some predefined success criteria; the two results are compared, and a winner emerges.
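As a rough illustration of how the “winner” is typically judged in such a test, here is a minimal sketch in Python (my own, with entirely hypothetical numbers, not an example from this book): a two-proportion z-test comparing the conversion rates of the control and the variant.

from math import sqrt, erfc

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Compare the conversion rates of a control (A) and a variant (B).
    Returns the z statistic and a two-sided p-value."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # chance of seeing a difference this large by luck
    return z, p_value

# Hypothetical figures: 120 of 2,400 control users converted vs. 156 of 2,400 variant users
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p (say, below 0.05) would favor the variant

The point is not the statistics themselves, but that the success criteria, and the evidence needed, are decided before the test runs.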

There are more advanced forms of this kind of experimentation. Multivariate testing doesn’t just expose users to one “new” version alongside the existing version of a product—it exposes them to several different variations, all competing with each other. At the extreme end, this can result in the infamous example of Google testing 41 different shades of blue—a great example of how not to apply an experiment-driven approach.1

Some teams even incorporate an aspect of machine learning into multivariate testing, using a technique called multiarmed bandits. In this format, several different options are tried; after a while, the “worst” options are dropped, and the experiment continues until there is a clear winner.
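To make the idea concrete, here is a minimal sketch of one simple bandit strategy, epsilon-greedy (my own illustration in Python, not code from this book): most of the time it serves whichever variant has the best observed success rate, and occasionally it explores the others.

import random

class EpsilonGreedyBandit:
    """A minimal epsilon-greedy multiarmed bandit over named variants."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shown = {v: 0 for v in variants}      # how often each variant was served
        self.successes = {v: 0 for v in variants}  # e.g. clicks or conversions

    def choose(self):
        if random.random() < self.epsilon:         # explore: try a random variant
            return random.choice(list(self.shown))
        # exploit: serve the variant with the best observed success rate so far
        return max(self.shown, key=lambda v: self.successes[v] / max(self.shown[v], 1))

    def record(self, variant, success):
        self.shown[variant] += 1
        self.successes[variant] += int(success)

# Hypothetical usage: pick one of three signup-button labels and record the outcome
bandit = EpsilonGreedyBandit(["Sign up", "Get started", "Try it free"])
variant = bandit.choose()
bandit.record(variant, success=True)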

I’d be lying if I said that this book wasn’t going to cover, or indeed take many principles from, the concepts behind A/B and multivariate testing. Indeed, when I first joined the team that helped put together the experiment-driven approach, every experiment was essentially a simple A/B test.

Over time, however, we realized not only that we had a lot to learn about how to run A/B tests, let alone multivariate tests, properly, but also, more importantly, that the idea of an experiment was far more useful than our constrained definition.

What we came to realize is crucial to this whole book. Once we understood the importance of designing the experiment we were planning to run, we began to see that deciding the method up front—regardless of whether it was an A/B test, a piece of user research, or something else entirely—was a mistake.

It didn’t matter what exactly we were going to do—we needed to clarify the same points, over and over again:
  • What question are we trying to answer?

  • Why is it important to us?

  • What do we think the answer might be?

  • How much evidence will we need to gather in order to get a useful, reliable answer?

  • What are we going to do to answer the question?

  • What was the answer?

  • What should we conclude from this?

Experiments thus became a structured way of asking, and answering, questions. They are a way of constructing a question which helps guide what you can do to discover an answer and a set of guidelines which you can use to ensure that the answer is reliable and useful.
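One way to make that structure tangible is to capture each experiment as a simple record whose fields mirror the questions above. This is a sketch of my own in Python; the field names and the example content are illustrative, not a template from the book.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One experiment, with fields mirroring the checklist above."""
    question: str                     # what are we trying to answer?
    rationale: str                    # why is it important to us?
    hypothesis: str                   # what do we think the answer might be?
    evidence_needed: str              # how much evidence makes the answer reliable?
    method: str                       # A/B test, user research, data analysis, ...
    answer: Optional[str] = None      # what was the answer?
    conclusion: Optional[str] = None  # what should we conclude, and decide, from this?

example = Experiment(
    question="Do first-time visitors understand what the product does?",
    rationale="Drop-off is highest on the landing page.",
    hypothesis="A plainer headline will reduce drop-off.",
    evidence_needed="Five moderated research sessions, plus two weeks of funnel data.",
    method="User research, followed by an A/B test if the signal looks strong.",
)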

Most importantly, any activity undertaken by the team could be in service to answering the question. It’s not a question of having your UX team members run research sessions, while your developers are building software. User research is a way of answering a question. Running an A/B test is, too. Releasing a feature out into the world, no matter its form—a paper prototype or live code—should be framed around asking and answering the questions that will help you progress and build a better product.2

Remember!

Not every experiment has to be an A/B test. Think of experiments as a structured way of asking questions. The exact method you use to answer the question can differ from experiment to experiment.

This is also nothing new. It’s a way of thinking that has served scientists, among others, pretty well for the past 300 or so years. That leads us on to the second thing that might come to mind when you first hear the term “experiment.”

Misconception #2: Experiments don’t need purpose—they’re “innovation”

“Experiment” brings to mind the image of the mad scientist, or perhaps in the context of digital product development, the research and development team. When companies want to get ahead of the curve, they often spin up a separate “innovation” team, away from the day-to-day work of running the business, as a way to scout out opportunities for future growth.

This itself is not a bad thing—provided that the innovation is guided in some way. It’s not enough to tell a team to start innovating or experimenting. Experimenting for the sake of it—running experiments in a haphazard way, with no particular purpose or direction—will not get you far.

Instead, experiments need to be driven by a sense of why they are being run in the first place. The emphasis in the long run shouldn’t necessarily be on the “experiment” at all—that’s just a means to an end. The key thing is the question, and answering it.

Ultimately, there needs to be a reason, a justification, for running an experiment—something you want to know the answer to. The beauty of the XDPD approach is that by framing product development in terms of research, discovery, and learning, teams are free to do anything and everything to answer the questions. They’re not held to specific features or deliverables. The most important thing is to gain knowledge which will ultimately help you achieve a wider objective.

How does the XDPD approach relate to “Lean”?

Starting with Eric Ries’ The Lean Startup and evolving through books such as Jeff Gothelf and Josh Seiden’s Lean UX, the last ten years or so have seen a massive shift in the way we think about software, product, and user experience development.3,4 The “Lean” approach has become the go-to standard for any organization wanting to rapidly develop products at relatively low risk.

Experiment-driven product development comes from the same philosophical school of thought as Lean, and shares many similar principles, as you’ll discover in this book. The importance of discovery, of doing the simplest thing in order to learn, and of seeking to test assumptions will be familiar to anyone who has read from the Lean canon.

XDPD is an evolution of the Lean ethos rather than a totally different approach. In some ways, it’s one of many possible iterations upon it. XDPD takes the idea of a hypothesis-driven, experimental approach to product development and goes both wider and deeper. By shifting the focus from assumptions to the step beforehand—questions—and going into more depth on how to design effective experiments, XDPD seeks to reinforce the basic tenets of Lean while bringing rigor and a different perspective to the approach.

Why should I consider trying this approach?

By this point, you’re probably thinking, “oh no, yet another methodology for lean/agile digital product development.” So why do I believe that it’s worth being open to learning about this approach, and considering how you might apply it in your day-to-day practice? Let’s list out the reasons, then I’ll go through them, one by one.

Why experiment-driven product development? Because
  • It forces you to challenge the premises underlying your work

  • It lowers the cost of failure

  • It helps you uncover new ideas through focusing on questions

  • It enables you to treat your users as an equal stakeholder

  • It helps you focus on things that actually make a difference

  • It’s fun!

It forces you to challenge the premises underlying your work

Firstly, as you’ll see in the course of this book, most experiments are formed around premises—beliefs that you have around what might, and might not, be successful. Now, of course, some premises are helpful. I’m not suggesting you spend weeks and months testing the basic principles of web design or user experience.

However, some premises can also be dangerous—particularly when it comes to assumed knowledge about the audience, or assumed knowledge about what the “right” or “best” way of improving a product, or reaching your targets, might be. Some of those premises, most likely based on experience, may well be correct—but without hard evidence to back them up, there’s no way of knowing whether it was your knowledge, experience, wisdom, and insight which made the product successful, or blind luck.

Your objective, as a product development team, should be to always be learning—learning about what actually works and, just as importantly (if not more so), learning when you are wrong. Experiments are a great way of challenging the premises that underlie your work and gathering proof—to a reasonable degree of confidence—as to their validity or otherwise.

Lowering the cost of failure

And if the premise you were working with turns out to be misguided? Well, that’s good! Better to know earlier, with a small but meaningful change made to the product, than to spend weeks and months developing something that you think is going to be a killer feature and turns out to just scupper your chances of surviving the next budgeting round.

Adopting an experiment-driven approach means being humble—being prepared to accept the evidence that your premises were wrong—but it also means that in doing so, you reduce the cost of failure. You may fail; you may make mistakes, make bets on ideas for features that may not earn you adoration from your users, but at least you’ll do so in a way that minimizes the risk to the overall health of your product.

The sooner you know that something is wrong, with as small an intervention as possible, the quicker you can put it right and learn what not to do again. This means that as a team, you have to accept that you will make mistakes and that some of your ideas will be wrong. But that doesn’t mean that you, or your team, are a failure—as long as you’ve gathered evidence and learned, you’ve actually increased your chances of success in the long term.

Remember!

Discovering that a premise you were working upon was wrong is a good thing—but it’s important to test these premises cheaply, so that if you are wrong, the impact on users is smaller.

Uncover new ideas through focusing on questions

Indeed, this can lead to one of the pleasantly surprising aspects of experiment-driven product development. Every experiment should revolve around a question that you want to answer. Sometimes, this will be a case of questioning a previous belief. Sometimes, though, an experiment can start from the opposite—a distinct lack of knowledge about something.

Answering those questions through experiments can inspire new breakthroughs, new ideas, and new innovations, which otherwise would only have been able to emerge through potentially misleading premises and prior assumptions. Questions are the key to the XDPD approach.

Treat your users as an equal stakeholder

Business stakeholders. Love ’em or hate ’em, they are the people you work with day in, day out. Despite most people’s best intentions to put the user first, often, when push comes to shove, it’s the people you work with every day who tend to have the loudest voice on what happens with your product. Experiment-driven product development gives users a seat at the table, and a powerful voice when it comes to decisions, by giving you the ability to treat data—evidence of how they use your product and how it fits into the context of their lives—as a stakeholder.

More specifically, running experiments, and gathering evidence of what your users do in certain circumstances, enables you to make more informed decisions over whether to move forward with an idea.

Evidence of in situ reactions of users, as they go about their daily lives, gives you a set of facts upon which you can make a decision, taking you out of the realm of the mythological product manager who uses pure intuition, experience, and, frankly, clairvoyance, to make the “right” choice.

If the various opinions or political seniority of your business stakeholders are all you have to base your decisions on, then the user is often squeezed out. If, on the other hand, you can point to evidence that users in the real world just don’t behave in the way someone assumes, discussions move from an argument over opinions to action, grounded in facts.

It’s important, of course, that this evidence is gathered under the right conditions—you want to ensure, as far as possible, that the decisions you make are based on evidence which is representative and reliable. That’s where this book can help you.

Remember!

Until you’ve tried something via a well-designed experiment, you just don’t know whose voice to listen to. Use experiments as a way to gather evidence directly from your users and make a data-informed decision about how to proceed.

This is where the process of designing your experiments comes in—fortunately, this is what lies at the heart of this book. By carefully designing your experiments, you inevitably end up listening to your users at the right scale to make an informed decision, giving your users a representative voice in decision-making.

No matter what method you end up using in an experiment, taking into account the scale needed to achieve a reliable, useful answer is crucial. In doing so, you allow the actual behavior, thoughts, and feelings of your users to influence things just as much as the people higher up the organization chart.
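For quantitative methods, “the right scale” often comes down to a sample-size estimate made before the experiment runs. The sketch below is my own, with hypothetical numbers rather than a formula from the book; it uses the standard approximation for detecting an absolute lift in a conversion rate at roughly 95% confidence and 80% power. Qualitative methods such as user research have their own, usually much smaller, notions of “enough.”

from math import ceil, sqrt

def sample_size_per_variant(base_rate, absolute_lift, z_alpha=1.96, z_power=0.84):
    """Rough per-variant sample size needed to detect an absolute lift in a
    conversion rate, at ~95% confidence and ~80% power (the default z values)."""
    new_rate = base_rate + absolute_lift
    avg_rate = (base_rate + new_rate) / 2
    spread = (z_alpha * sqrt(2 * avg_rate * (1 - avg_rate))
              + z_power * sqrt(base_rate * (1 - base_rate) + new_rate * (1 - new_rate)))
    return ceil((spread / absolute_lift) ** 2)

# Hypothetical: a 5% baseline conversion rate, and we care about a 1-point lift
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000 users in each variant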

Focus on things that make a difference

Designing experiments forces you to think about what change you would like to see in the world. It gives form and structure to your hunches about what might be a good idea to reach a particular target, or achieve a particular objective.

The benefit of adopting an experiment-driven product development approach is that almost every experiment should clearly state what kind of change or difference you expect to see as a result.

In an experiment, you define a hypothesis—by doing “X,” we believe we will see result “Y”—and then measure a number of key metrics, in order to determine whether your hypothesis is indeed correct.

Using this approach as much as possible in the product development context means that you reduce the amount of effort spent on work that doesn’t relate directly to what you’re ultimately trying to achieve.

If someone is lobbying for your team to spend time on their idea, and, through creating a hypothesis as part of the experiment design phase, it’s proving impossible to describe how implementing that idea might lead to a change or difference in your key success metrics, then arguably you shouldn’t be prioritizing that work.

It’s fun!

Hopefully by now I’ve managed to convince you that experiment-driven product development is something worth exploring. But there’s one final reason I think you should care. It’s fun.

Yes, it can be scary. Yes, it requires thought and care when designing experiments. Yes, it will need some practice before you get comfortable and confident in designing and running experiments. But, as a break from the shopping list of features that the team has been asked to work on, I believe experiment-driven product development is, well, more exciting.

It allows you to embrace not only uncertainty but also curiosity—to explore possibilities and answer those burning questions in a way which is structured and has value. It’s about coming up with ideas, and not getting too attached to them if they don’t work out.

More importantly, it helps you, and your team, take a step back from churning out endless deliverables, and coming into work every day stressing about things which really aren’t life or death decisions. It puts things into context, involves the whole team, and overall, leads to a happier, more focused, and in the long term, more successful, team.

So, there you have it, the reasons why I believe adopting this approach can help you improve your product management practice. Now, let’s close out the chapter by looking at this in the wider context of an organization.

Getting the best out of XDPD in a multi-team environment

We’ve talked so far about experiment-driven product development in the context of a single team. Unless you’re working in a startup, however, it’s more than likely that the organization you’re a part of has several product development teams, working alongside each other.

Perhaps you’re not the leader of a single team—instead you’re trying to work out, from a higher level, how this approach can work in your organization. How can you get the best out of experiment-driven product development?

Size matters

First of all, you need to consider the size of a team that adopts this approach. As with any “agile” approach, having too big a team tends to lead to inefficiencies, miscommunication, and general unwieldiness.

At the same time, too small a team leaves you no room to maneuver. The ups and downs of day-to-day reality—simply keeping the basic systems running while one or two people are off sick or on annual leave, never mind that moment when everything goes wrong and you need all hands on deck—mean that you need to try to hit a sweet spot in the size of your team.

The “two-pizza” team size5 is often a good rule—I’d advocate having at least a couple of pairs of developers, a UX pair, and at least one, if not two, data specialists (those with an aptitude for data science and data analysis).

Your mileage may vary, of course. The key thing is to try and have a team that can weather the storms while still dedicating time to the design, development, and analysis of experiments.

Avoid treating an experiment team as “special”

Secondly, I’d argue against having a dedicated “experiment” team in the long run. By this, I mean designing your organizational structure around a single team who are the only ones who run experiments, on behalf of everyone else.

This is perhaps a place to start. It was the approach being used at the time I joined my first experiment-led team in 2016. The intention was to develop a deliberately siloed team, so that they could experiment with freedom, unburdened by having to maintain, or deeply integrate with, other existing systems.

However, in practice, the rest of the teams began to see our team as “the innovation team”—the ones who get to have all the fun, without any of the responsibility. Additionally, senior stakeholders sometimes viewed our team as the go-to place for any “innovative” project, which, while being a compliment, became at best a distraction and at worst a bottleneck to the company’s success.

The trick is to start with a team who work in the experiment-driven way, help them to become expert practitioners, but make sure they share their knowledge with the other teams, with the aim of eventually having as many teams comfortable with this approach as possible. Make this team a “center of excellence” for experiments, and give them time and space to help coach other teams. Demonstrate by doing, but don’t fall into the trap of assuming there are no responsibilities here.

Sharing what you learn

Thirdly, if you have several teams operating in this way, you need a way of sharing knowledge between them—both in terms of the results of experiments (“we tried this, so you don’t have to”) and in terms of coordinating the running of experiments, so that they don’t clash and conflict.

This can take the form of regular stand-up-style updates, or perhaps a large wall display, so that everyone can see what’s happening. Regular product team showcases, sharebacks, and demos can also help here—we’ll discuss this in much more detail in the final chapter of the book, where I’ll go through an approach to these team sessions which we developed to help share what we were working on.

Knowledge, power, and responsibility

Finally, you have to make a choice. Part of the beauty of adopting an experiment-driven product development approach is the ability to reduce the cost of failure—to spend less time on each “thing,” and prioritize finding answers which allow you to move on. But this can lead to an issue when it comes to the experiments that center around rapid prototyping—technical debt.

This has a couple of dimensions. If you’re concentrating on developing the simplest useful thing in order to drive meaningful results from an experiment, you may find yourselves building things fast but in an unsustainable way. The first question is: do you spend all your time just designing, building, and running experiments, forever generating useful results but piling up the technical debt, or do you run an experiment, gather results, and then, if the experiment was successful, pay down the debt in order to make it stable and sustainable?

I’d advocate a healthy balance—run a small series of experiments, focused on gathering results, answering questions, and producing knowledge, and then take a step back. Which were the most successful variations? Which were the most valuable pieces of knowledge we gained? Allow yourself some time to take the best from what you’ve learned, and apply it in a “production-ready” way that benefits everyone.

Managing expectations is the other element to this—if you are a team focused solely on experiments, with no expectation that you’ll maintain the successful features that emerge, then make that clear from the start, and keep reiterating it wherever possible. As I’ve mentioned, having such an “innovation” team has its pros and cons, but the key thing here is to make sure everyone is clear on what is expected of the team.

If you’re adopting an experiment-driven product development approach within a product team that does have commitments to maintaining something, however, then that also needs to be clearly understood. It will mean that some time will be needed to pay down technical debt, and that there may be periods where experiments have to take a back seat to simply keeping the lights on.

Equally, particularly with small teams, if the expectation is that the team is responsible for maintaining a stable product or service, then everyone needs to understand that in those moments where, for whatever reason, the thing you’re running is about to fall over entirely, the existing thing comes first.

Remember!

Make sure you have an agreement in place between the team and business stakeholders as to your priorities, and keep reinforcing this message of choices and trade-offs. Be prepared to share your reasons for prioritizing tech debt over experimentation.

Summary

In this chapter, we’ve explored in more detail the ins and outs of what we mean, and don’t mean, by experiment-driven product development.

Experimentation is a very particular approach, and is most effectively used when it has clear direction. The exact methods by which you run your experiments—be it data analysis, multivariate testing, user research, or something entirely different—are very much secondary to the process of designing effective experiments.

This doesn’t mean that you need to labor and agonize over every single aspect of the experiment design. Once you’re up and running, it should become more like second nature—but simply taking some time to think about what makes sense for an experiment to be effective is important.

We’ve covered a number of reasons why experiment-driven product development is a worthwhile approach to consider—ranging from the creativity it affords to the importance of challenging your assumptions so that you can avoid wasting your time on things doomed to fail.

Finally, we discussed the wider organizational context that surrounds any team wanting to take a more experiment-driven approach. Sharing knowledge, and getting agreement all round, as to the scope, limits, and responsibilities of the team, is crucial—as is the need to be constantly reiterating this and making the choices involved in prioritization explicit.

With this all in mind, then, we can move on to look at how experiment-driven product development translates to some of the common activities and methods a team might use in their daily work.
