User testing

User testing is the act of working with users to determine whether certain experiences are optimal (or at least good) and whether they match what the user actually wants to do. These sessions are often called user interviews. The idea is that you ask a user questions to see how they do, or would, interact with a piece of software. To create a user test, you need to pick an experience you want to investigate, come up with questions and tasks, and find people to test.

In most of my examples, I am talking about software that is already built. Just remember throughout this chapter that you can do user testing with paper diagrams, frontends whose buttons don't do anything, or even a slide deck. The more ephemeral the thing your users are interacting with, though, the more you need to set up the test to explain what is going on.

You can use these tests to design user stories. A user story is a description of a type of action a user will take. User stories often combine three pieces of information: user type, user goal, and user reason. A user type describes a group of users. A user goal is something someone from that group might want to do. A user reason is why someone from that group would want to achieve that goal. Each of these stories is an experience you might want to test during user testing.
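If you want to keep a backlog of stories to test, it can help to capture them in a consistent structure. Here is a minimal sketch in Python; the field names and the example story are my own illustration of the user type/goal/reason split, not part of any particular tool.

```python
from dataclasses import dataclass


@dataclass
class UserStory:
    """One user story: who the user is, what they want, and why."""
    user_type: str  # a group of users, e.g. "service owner"
    goal: str       # something someone from that group wants to do
    reason: str     # why they want to achieve that goal

    def as_sentence(self) -> str:
        # The conventional "As a ..., I want ..., so that ..." phrasing.
        return (f"As a {self.user_type}, I want to {self.goal}, "
                f"so that {self.reason}.")


# A hypothetical story that could become an experience to test.
rollback_story = UserStory(
    user_type="service owner",
    goal="roll back a bad deploy of my service",
    reason="I can restore the service quickly",
)
print(rollback_story.as_sentence())
```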

Picking an experience

There are lots of experiences in your service you could test, ranging from very broad flows to very specific interactions. If you want to test an entire tool, come up with user flows you want to test. An example flow might be the rollback flow: given a service, how does the user roll back that service? Another flow might be launching an application locally. You could also test more traditional flows, such as the checkout flow: how does a user with something in their cart actually buy it?

Before you start talking to users, you can also instrument the flows using tracing or user tracking tools. Products such as Google Analytics and New Relic can show you how a user goes from one page to another by giving you a graph of user flows. You could also collect metrics about how mobile applications or CLIs are used. This will let you see how users branch out and give you specific paths that you want to test or compare.

Figure 8: An example user flow graph for a website using Google Analytics
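Web pages usually get this kind of tracking from products like Google Analytics, but for a CLI you may need to add the instrumentation yourself. Below is a minimal sketch of recording which subcommand a user ran so that common paths can be reconstructed later; the log location, event format, and `record_invocation` helper are assumptions made for illustration, not a real telemetry API, and a real version should be opt-in.

```python
import json
import sys
import time
from pathlib import Path

# Hypothetical location for usage events; a real tool would make this
# configurable and ask users to opt in before recording anything.
USAGE_LOG = Path.home() / ".mytool" / "usage.jsonl"


def record_invocation(argv: list[str]) -> None:
    """Append one usage event (timestamp plus subcommand) to a local log."""
    USAGE_LOG.parent.mkdir(parents=True, exist_ok=True)
    event = {
        "ts": time.time(),
        "subcommand": argv[1] if len(argv) > 1 else None,
    }
    with USAGE_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")


if __name__ == "__main__":
    # Call this at the top of the CLI's entry point, then aggregate the
    # events later to see which flows (e.g. rollback, deploy) are common.
    record_invocation(sys.argv)
```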

While deciding on flows to compare, try to come up with the goals of the test. Do you want to know what users notice while using the tool? Do you want to know what users find confusing?

Designing the test

Once you have a flow, or set of flows, you would like to test, put together some questions to ask people. Usually, a user test works by sitting a user in front of a computer with the preconditions set up the way you would expect the user to encounter the service. You then ask them to perform an action and record what they do. After they are done, you ask them questions about their experience and record the answers. This is then repeated for multiple users.
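To keep sessions consistent from user to user, it helps to write the preconditions, prompts, and follow-up questions down ahead of time. The sketch below shows one possible shape for such a facilitator script in Python; the task text and the `run_session` helper are hypothetical and only illustrate the set-up/observe/ask loop described above.

```python
# A minimal facilitator script: each task pairs a precondition to set up,
# a prompt to read to the user, and follow-up questions to ask afterward.
def run_session(tasks, participant):
    """Walk through the tasks with one participant and collect notes."""
    notes = {"participant": participant, "tasks": []}
    for task in tasks:
        print("Set up:", task["precondition"])
        print("Read to the user:", task["prompt"])
        observation = input("What did the user do? ")
        answers = {q: input(q + " ") for q in task["questions"]}
        notes["tasks"].append(
            {"prompt": task["prompt"],
             "observation": observation,
             "answers": answers}
        )
    return notes


if __name__ == "__main__":
    # Placeholder task, not a real test plan.
    tasks = [
        {
            "precondition": "The service is running a release that must be rolled back",
            "prompt": "You are here to answer an alert requesting that you "
                      "roll back the service. How would you do that?",
            "questions": [
                "How did you find that experience?",
                "Did you feel that you completed the rollback successfully?",
            ],
        },
    ]
    print(run_session(tasks, participant="participant-1"))
```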

The goal is to let users do most of the talking and not try to force them down paths or toward certain answers. You also want to avoid open-ended questions, as they tend to be harder to compare between users.

Some example questions:

  • You are here to answer an alert requesting that you roll back the service, so how would you do that?
  • What button would you click to silence the alert?
  • On a scale of one to five, where one is trivial and five is very complicated, how did you find that experience?
  • Did you feel that you completed the rollback successfully?
  • You need to write a monitoring config that stores the 95th percentile of request latency. How would you begin doing that?

You should record all answers and compare them across the users who take the test.
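If every participant's answers are recorded in the same shape, comparing them is mostly a matter of lining them up by question. Here is a minimal sketch, assuming answers were stored per participant as a mapping from question to answer; the participant names and responses are made up for illustration.

```python
from collections import defaultdict
from statistics import mean

# Made-up responses from three participants to the same questions.
responses = {
    "alice": {"difficulty (1-5)": 2, "completed rollback?": "yes"},
    "bob":   {"difficulty (1-5)": 4, "completed rollback?": "not sure"},
    "carol": {"difficulty (1-5)": 5, "completed rollback?": "no"},
}

# Group answers by question so each one can be compared across users.
by_question = defaultdict(dict)
for user, answers in responses.items():
    for question, answer in answers.items():
        by_question[question][user] = answer

for question, answers in by_question.items():
    print(question, answers)

# Numeric scales can also be summarized, e.g. average reported difficulty.
print("mean difficulty:",
      mean(r["difficulty (1-5)"] for r in responses.values()))
```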

Finding people to test

The best users to test are people who actually use the software. Often, you can get a few people to go through a test just by asking. If you have lots of people using your software, you can use tools that do heat map tracking on your pages to see where people actually click, so you don't even need to recruit people.

If you are testing people, make sure to record data about them before the test, such as their experience with the tool, their role, and their time in that role.

There are also organizations that will put together pools of people to be part of a test if your product is something used outside of your organization. If it's an internal tool with lots of users, you could send out an email and offer something like cool merchandise in exchange for people participating in a user study. Currently, the trend is free socks with logos, but I have received cups, t-shirts, sweatshirts, hats, USB drives, and even flashlights in exchange for helping people with internal projects.

Don't forget to thank people for their time. It is also useful to share your findings with the folks who take the test, so that they feel their time was well spent and helped the project.

After testing out user flows, it is important to focus on the developer's experience of working on your tools.
