Chapter 12. After product delivery: problems and revisions

This chapter covers

  • Diagnosing problems with the product after it’s delivered to the customer
  • Finding remedies to product problems
  • Getting and using feedback from customers
  • Revising the product based on known problems and feedback

Figure 12.1 shows where we are in the data science process: revising the product after initial feedback. The previous chapter covered the delivery of a product to your customer. Once the customer begins using the product, there’s the potential for a whole new set of problems and issues to pop up. In this chapter, I discuss some of these problem types and how to deal with them, and I talk about customer feedback and revising the product based on the problems you encounter and the feedback you receive.

Figure 12.1. The second step of the finishing phase of the data science process: revising the product after initial delivery to the customer

12.1. Problems with the product and its use

Despite your best efforts, you may not have anticipated every aspect of the way your customers will use (or try to use) your product. Even if the product does the things it’s supposed to do, your customers and users may not be doing those things, or doing them efficiently. In this section, I discuss a few reasons why the product may not be as effective as you hoped, and I give some suggestions for how to recognize and remedy them.

12.1.1. Customers not using the product correctly

Customers tend to use new products in every way except the way that was intended. Sometimes they misunderstand which text to type where and which buttons to press, either because they haven’t put much effort into figuring out the correct way or because the correct way is hard to figure out. Sometimes customers try to use a product to do something it wasn’t intended to do, usually either because they think the product is supposed to do that thing or because they’re trying to be clever and twist the product’s capabilities into doing an extra thing it shouldn’t be doing.

In either case, and in others, customers not using the product in the intended ways is problematic because it can lead to false or misleading results or no results at all. Misleading results are probably worse, because the customer might gain false confidence and act on those results, potentially leading to poor business decisions.

If the customer can’t get any results at all, then hopefully they’ll come back to you and ask for help, which is far better than the alternative: giving up and abandoning the product altogether.

How to recognize it

If you deliver a product to a customer and forget about it, you’ll never know how it worked out for them. Maybe it was great; maybe it wasn’t. Therefore, the first step in recognizing improper use of your product is to talk to the customer about it. You talked to the customer at the time of product delivery, and presumably you provided instructions for use, but, like the product itself, instructions may be used incorrectly, partially, or not at all. At least one follow-up is usually in order.

It’s usually best to give the customer a bit of time to try out the product before you pester them with questions, but before that it’s OK to reaffirm that you’re there to help if they need it. When you do follow up with questions, you should ask things like this:

  • Are you getting the results you expected from the product?
  • Have you been able to use the product to address the intended business needs?
  • Is the product falling short in any way?
  • How often and by how many people is the product used?
  • Can you describe the typical use of the product?

If they describe any problems, bad results, or something unexpected, it would be good to explore these further and try to find out what the trouble is.

In addition to finding out if the customer is experiencing problems, it’s important to notice if the customer isn’t able to give a complete answer to any of your questions. This can be a sign that the customer you’re talking to isn’t the main person who is using the product, isn’t being completely informed by the person using the product, or that no one is using the product at all. Your role as a data scientist may not be to force the customer to use your product, but it is (usually) part of your job to help them use it properly. If they’re not using it properly because they can’t figure it out, a little encouragement may be good.

Sometimes talking with the customer isn’t enough. If you want to get to know how they’re using the product, you may need to spend some time with the customer while they’re working. As they work with the product, it might be enough to observe—you certainly don’t want to interfere with how they normally use the product—but it can be good to ask them questions as they go along. Particularly if they do something unexpected, you might want to ask them why they performed that action, but it’s important not to direct them or otherwise show them how to do anything until you fully understand why they’ve performed the action they did. With understanding, you may be able to fix a root cause of the misuse instead of a symptom.

For instance, if the customer clicks the wrong button in the product, instead of saying, “I think you clicked the wrong button,” it would be better to ask, “Why did you click that button?” and try to understand their reasoning. Let’s say the customer is clicking a button labeled SUBMIT in order to make changes to a name-address pair inside a database, when you think they should be clicking a button labeled UPDATE. If you ask them why they clicked SUBMIT instead of UPDATE, they might say that they thought UPDATE was for the special case when they wanted to change only the address of an existing entry but leave the name unchanged.

You designed the application so SUBMIT adds a new entry and UPDATE changes an existing one. Because the customer wanted to change an existing entry, they should have clicked UPDATE instead of SUBMIT. If you had instructed them to click the other button, you would have learned little, but now you know the reasoning behind their actions. You might then explain that SUBMIT is for creating new name-address entries, whereas UPDATE is for modifying existing entries no matter which part of the entry they want to change. By clicking SUBMIT they’ve created a new, correct entry, but the old, incorrect entry still exists. More important, you’ve learned that the words SUBMIT and UPDATE aren’t clear enough to the customers, so it might be worth changing them or further educating the customers about the differences between the two buttons.
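To make that failure mode concrete, here’s a minimal Python sketch of the two actions. The dict-backed store and the function names are invented for illustration; the real product’s internals would differ.

```python
# Hypothetical sketch of the SUBMIT and UPDATE actions described above.
# The dict-backed store and the function names are invented for illustration.

directory = {"Ann": "12 Oak St"}  # name -> address

def submit(name, address):
    """SUBMIT creates a new entry; it never removes or edits an old one."""
    directory[name] = address

def update(name, address):
    """UPDATE modifies an existing entry, whichever part of it changed."""
    if name not in directory:
        raise KeyError(f"{name!r} not found; use submit() to create it")
    directory[name] = address

# The customer meant to correct Ann's entry but clicked SUBMIT with a
# corrected name, so the stale entry survives alongside the new one:
submit("Ann Smith", "34 Elm St")
assert "Ann" in directory and "Ann Smith" in directory
```

A guard like the one in `update()`, or simply clearer button labels, makes the wrong action harder to take by accident.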

How to remedy it

Once you’ve recognized and diagnosed the root causes of improper use of the product, there are two ways to remedy them:

  • Educate and encourage the customers to use the product in the correct way.
  • Modify the product so that it is clearer which actions are correct.

Further educating customers might include writing new documentation and delivering it to them. It might also include holding seminars, workshops, or one-on-one sessions in which you personally guide them through the proper protocols for product use. Which approach is more effective depends highly on the situation.

Modifying the product to encourage proper use is a question of user experience (UX), which I cover next. A UX problem might be causing improper product use, but there’s always a gray area between “the UX isn’t very good” and “the users are using it wrong.” In the end, the goal is to make the product effective, so UX changes that enable that can be a good idea, even if they seem wrong for other reasons. When in doubt, consult a UX expert.

12.1.2. UX problems

Beyond the case of customers using the product incorrectly in some way, they may be using it inefficiently. They can use the product to do the things they need to do, and they can solve problems with it, but it’s taking more time or effort than it should. Addressing the user experience directly can help eliminate these inefficiencies.

A product can be inefficient in a few ways. One such way is having extraneous steps in a workflow. Let’s say that your customer typically performs a query within the product and then always sorts the results in a particular way. If the results weren’t sorted that way by default, then the customer would need to perform the sort step in addition to the query. The extra time and effort can add up if the customer performs the query tens or hundreds of times per day. Changing the default sort method could make the product more efficient.
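A sketch of that kind of fix, with the habitual sort baked into the query’s defaults (the records and field names here are hypothetical):

```python
# Hypothetical sketch: the customer always sorts query results by revenue,
# descending, so that sort becomes the query's default and the extra step
# disappears. Records and field names are invented for illustration.

RECORDS = [
    {"name": "widget", "revenue": 120},
    {"name": "gadget", "revenue": 450},
    {"name": "gizmo",  "revenue": 300},
]

def query(keyword="", sort_key="revenue", descending=True):
    """Return matching records, pre-sorted the way users almost always want."""
    hits = [r for r in RECORDS if keyword in r["name"]]
    return sorted(hits, key=lambda r: r[sort_key], reverse=descending)

top = query("g")  # no separate sort step needed anymore
```

The defaults still allow the occasional different sort without adding work to the common case.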

The product could also be inefficient, in some sense, if it was designed to do many things, but the customer uses only a couple of those things often. For example, perhaps you performed some complex genetic analyses of several stages of fruit fly development from embryo to adult, and you delivered to the customer a report as well as a multisheet spreadsheet containing results from each of those stages. You then find out that 95% of the time the customer finds a specific entry in the adult data and then cross-references it with the other stages of development. It wasn’t clear to you before you designed the spreadsheet that the customer would be spending so much time specifically on the adult stage, but now it’s clear that you could save the customer some time by adding an additional page that contains all the cross-references the customer would typically do. Adding even more pages for more cross-referencing might also make sense, but only if the customer might use them and there are few enough to keep the spreadsheet a manageable size.
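With tabular results like these, the cross-reference page can often be precomputed with a few joins. Here’s a sketch using pandas, with invented gene names and expression columns standing in for the real data:

```python
import pandas as pd

# Hypothetical miniature of the multisheet spreadsheet: one table per stage.
adult = pd.DataFrame({"gene": ["dpp", "wg"], "adult_expr": [5.2, 1.1]})
embryo = pd.DataFrame({"gene": ["dpp", "wg"], "embryo_expr": [2.4, 3.3]})
larva = pd.DataFrame({"gene": ["dpp", "wg"], "larva_expr": [4.0, 0.9]})

# One merged sheet saves the customer the repeated manual lookups:
# find the gene in the adult tab, then flip to each other stage's tab.
crossref = adult.merge(embryo, on="gene").merge(larva, on="gene")
```

Written as an extra sheet, a table like this answers the 95% case in a single lookup.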

The appearance or arrangement of a product and its interface might also lead to inefficiencies. These are some of the most straightforward questions in UX and UI design:

  • Is the text easy to read and well formatted?
  • Are the buttons and text boxes in places that make them easy to use?
  • Are keyboard shortcuts available for commonly performed actions?
  • Is everything easy to find and easy to use?

UX and UI design extend far beyond these questions in both scope and detail, though the last question is something of a catchall in looking for UX issues. Beyond the most obvious cases, it’s often best to consult a UX expert.

How to recognize them

It can be easy to miss some UX problems, particularly if you’ve designed and built the product yourself. You may have to give the product to someone else before you realize that the interface has significant flaws.

One strategy is to diagnose UX flaws through the customer. You might ask them questions such as the following:

  • Have you found the product easy to use?
  • Is there anything in the product you found difficult to do?
  • What are the most common tasks you find yourself doing?
  • How often do you use each part of the product?

If, given the answers you receive, you still suspect that there may be some UX issues, it might be good to spend some time with the customer observing how they work, as I described previously.

For a more thorough treatment of the UX and your customer’s workflows, you can bring in a UX professional. Their education and experience lie somewhere in the areas of technology, design, and psychology, and they’d likely want to work directly with the customer to learn everything they can about product use and workflows before making recommendations about what is good, what is bad, and what improvements might be made.

How to remedy them

As with most product-related problems, you can either deal with a UX problem or change the product. Assuming the problem doesn’t make the product unusable, educating customers about how to work around the issue is a viable option, or they may be able to figure it out for themselves. The other option, changing the product, presumably takes more of your time for redesign and development. Choosing a remedy means weighing the time and effort of changing the product against the customer’s time and effort saved compared to the workaround. See section 12.3, later in this chapter, for more on what that process entails.

12.1.3. Software bugs

A bug is something that’s wrong with the software. This is in contrast with a UX problem, which causes a product to be unintuitive but not necessarily wrong. Examples include the following:

  • Incorrect numerical results from calculations
  • Error messages or crashes in the software
  • Improperly displayed text, tables, images, and the like
  • Buttons or other action-based objects that do nothing

This is obviously not an exhaustive list, but it should be illustrative for those who have little experience dealing with software bugs.

How to recognize them

Many software development teams, before they deliver software to their customers, conduct a bug bash in order to find and later fix bugs in the software. In a bug bash, several people—perhaps the whole development team plus some product-oriented people not on the team—take an hour (or some fixed amount of time) to bash on the software, trying to break it. Bug bashers might click every button in the software, enter crazy values into text boxes (negative numbers, absurdly large numbers, words where numbers should be, and so on) and see if they can make the software crash, give an unwanted error message, or otherwise do something it’s not supposed to do. Each time they succeed in finding a bug, they record the details so that it can be fixed before the software is released to the customer.
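You can automate a small slice of a bug bash with a script that feeds deliberately unreasonable inputs to a routine and records which ones break it. The `parse_quantity` function below is a made-up stand-in for any input handler in your product:

```python
# Bug-bash-style check: throw crazy values at an input handler and record
# which ones break it. parse_quantity is a hypothetical stand-in.

def parse_quantity(text):
    value = int(text)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

crazy_inputs = ["10", "-3", "9" * 100, "ten", "", "1.5", None]
failures = []
for raw in crazy_inputs:
    try:
        parse_quantity(raw)
    except Exception as exc:
        failures.append((raw, type(exc).__name__))

# Each entry in failures is a bug report waiting to be triaged: decide
# which cases should succeed, which should fail gracefully, and fix the rest.
```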

If you and your colleagues don’t find bugs yourself, your customers will probably do it for you. They’re good at that. By that, I mean that because they haven’t been involved with the product from design through development, they have far fewer preconceptions and ingrained habits than you do. They’ll therefore try to do things with the software that you didn’t foresee, widening the range of activities in which bugs might be discovered.

How to remedy them

Because bugs, by definition, are mistakes in software, you usually have little choice but to fix them. On the other hand, if a bug is hard to fix and customers rarely encounter it, you might be able to justify not fixing it. This is more common than I’d like to admit. If you have a favorite piece of widely used software that has a public bug-reporting system (Ubuntu and Bitbucket are good examples), you can find some examples of bugs that are years old, because they’re reasonably hard to fix and they affect few users. Though bugs are generally taken more seriously than UX flaws, the decision about whether to fix a bug comes down to comparing the time and effort to fix it with the cost to the customer of leaving it in.

12.1.4. The product doesn’t solve real problems

This might be one of the worst outcomes for a product: you’ve spent time designing and developing a product that meets all the customer’s descriptions of what they do, what they want, and what the problems are, but for some reason the product that you made isn’t helping the customer achieve their goals. It’s not a good feeling to have this happen. Assuming that the mismatch between the product and the goals isn’t caused by incorrect use, UX problems, or software bugs, the product must fall short in some more profound way.

For example, while working at a startup analyzing organizational communications within financial firms subject to SEC regulation, my team developed statistical methods for detecting risky behaviors and incorporated them into the software product. Our customers wanted to find bad or suspicious employees who may be involved in illegal trafficking of privileged information, internally or externally, such as that which might be used for insider trading of securities. We built a framework for creating behavioral profiles for combinations of suspicious activity that compliance officers might be looking for in a potential lawbreaker. These profiles could be translated into the risky behaviors that the statistical models could detect, and a list of the most suspicious employees was generated for each behavioral profile.

We thought this would answer many questions that financial regulatory compliance departments had, and all potential-customer feedback to that point seemed in agreement with that assumption. But when the initial set of compliance officers got their hands on it, they were underwhelmed with the behavioral profiling and detection of risky behaviors. Many of them continued to use their old tools for compliance, which mostly comprised scanning a random sample of internal communications looking for suspicious words and phrases. There didn’t seem to be any significant flaws in UX or any major bugs, but the customers didn’t feel the product answered their main questions.

In retrospect, I’m convinced that the product made it most of the way to the answers the customers wanted but fell short of satisfactory in the minds of the compliance officers. It seems that they wanted one of two things:

  • Statistical methods that tell them, without much doubt, who the bad guys are
  • A better way to search and filter communications and employees

Unfortunately, our statistical methods weren’t yet good enough to fulfill the first item, and having complex statistical methods power a better filter, as in the second, seemed to create cognitive dissonance between trusting those statistical methods and expecting deterministic behavior from searches and filters. Said another way, if the customers couldn’t trust the statistical methods completely, they would rather use methods they understood completely, such as simple searches and filters. This wasn’t entirely obvious to anyone involved with the project until after the product had been built and delivered.

The good news—if it can be called that—is that we learned a lot by delivering the product and getting reactions from the customers. The reactions, and the fact that more than one customer had the same reaction, taught us more about our customers and the industry we were selling to than any other interactions with customers that we’d had before.

All wasn’t lost; large parts of the software product were usable even without the behavioral profiling, in particular the pieces that managed, stored, and queried data. But with respect to the product interface, the development team had to perform one of the fabled pivots for which startups are so famous. The UI, which had been built around behavior profiles, had to be redesigned to something the customers were interested in using. Thankfully, we knew a lot more about our customers at this point than we had only a few months before, due mainly to the delivery of the product and the subsequent rejection of it.

Each case like this, in which a product fails to achieve its main goals, can be quite different from the others. A large number of variables come into play. Sometimes you can bridge the gap by educating the customers and convincing them that—as in my example—the statistical models are trustworthy for filtering and sorting. Sometimes you have to make dramatic changes in the product. But assuming that customers have started using the product, recognizing that there is a problem isn’t difficult.

How to recognize it

This one is easy: when the customer starts using the product, if there’s a problem, they will either complain or stop using the product. Once you’ve talked to the customer long enough to rule out improper use, UX problems, and significant bugs, you’ll soon know whether your product has missed its goals.

How to remedy it

This one is tough. Every case is different, but you can use the newfound knowledge about what the customer wants to make your most informed product decisions yet. Options usually include further education of the customer, changing the product, building a new product, and finding a different kind of customer for the product you already have. Depending on your situation, some of these may make more sense than others, but one thing is certain: you should consider all options thoroughly before making a decision.

12.2. Feedback

Getting feedback is hard, and I mean that in both senses of the phrase. On the one hand, it’s often difficult to get constructive feedback from customers, users, or anyone else. On the other hand, it can be hard to listen to feedback and criticism without considering it an attack on—or a misunderstanding of—the product that you’ve spent a lot of time and effort building. In this section, I discuss how any feedback, except in rare cases, is a good thing, and I share some advice about making the most of the feedback that you get.

12.2.1. Feedback means someone is using your product

If you’re getting feedback on the product, someone is using the product, which is good. It’s surprising to me how many times in my life I’ve seen someone build a product that doesn’t get used. The reasons it wasn’t used vary, but they include the following:

  • Customers didn’t have the time to learn to use the product.
  • The customer who asked for the product wasn’t a user, and the users resisted integrating the product into their workflow or weren’t satisfied by it.
  • The product addressed a moving goal, and by the time it was developed, the goal had moved too far for the product to be useful.

Whatever the reasons why customers might not use the product, if you’re receiving feedback from someone, those reasons don’t apply, which is good.

12.2.2. Feedback is not disapproval

Unless a person is being unconstructive or downright mean, their feedback should not be taken as an insult to you or your product. Most people on this Earth love to make suggestions for how other people can do things better; in my experience, this is particularly true of engineers, software or otherwise. They don’t mean any harm, and some of their suggestions might be useful.

This probably has more to do with business etiquette than it does data science, but you can learn much useful information if you can listen to feedback without taking it personally. It’s difficult, though, to spend months building something and then hear someone tear it down after using it for 10 minutes, and I’ve seen more than one developer or data scientist vehemently defend a product against the smallest of suggestions; that’s why I’m mentioning this here. In these situations, I stand by the maxim “Cooler heads prevail.”

You don’t have to take every comment and every suggestion to heart, but it’s usually worth it to listen. Afterward, you might consider the person’s knowledge and expertise before deciding whether to take their feedback seriously. I probably wouldn’t take the CEO’s advice on statistical methods unless they had a statistics degree, and likewise I wouldn’t take a statistician’s advice in business matters unless they had some relevant experience. Anyone and everyone might have some suggestions that could improve the product, and someone who can listen and consider calmly has a much better chance of achieving product greatness than those who can’t.

12.2.3. Read between the lines

While you’re busy listening to anyone and everyone’s feedback, there’s a sort of post-processing step you can use to gain even more information. It’s not completely precise, but it can be boiled down to a few concepts that should be considered together:

  • What people say isn’t always precisely what they mean.
  • What people say is a reflection of who they are and what experiences they have.
  • What people say about a situation is based on their own perceptions and not necessarily on reality.

These are nebulous statements, I know, and they don’t apply strictly to data science, but they are worth considering. Any time you’re considering taking someone’s suggestion to change a product, it can be valuable to consider who they are and where they’re coming from before devoting time and effort to the change. I explain these three points in the following subsections.

What they mean

If someone uses your web application and they say, “You should add a tab called Search and it should search all the available data,” taking their suggestion literally may not be the best idea. Probably they mean to say, “I would like to be able to search all the data,” but they may not be concerned about the search functionality coming in the form of a tab, a page, or a page header. Taking the person’s suggestion literally without considering which parts of the suggestion are important may be a mistake. They may not have considered all potential UX solutions before suggesting an extra tab, even if the suggestion of a search capability is a good one.

Similarly, when I was working at the startup developing a product analyzing organizational communications within financial firms, discussed earlier in this chapter, we often heard the suggestion that the users shouldn’t have to define behavioral profiles for the suspicious types they were looking for. The need for the user to define these profiles was a hindrance to the use of the product, and we received requests for a set of generic profiles that could be used without user input. We came to understand after a short time that it was too much effort for users to construct profiles, despite the fact that the typical users of the product knew far more about the suspicious types they were looking for than we did as data scientists and software developers. We began interpreting “You should include a set of generic behavioral profiles” as “We don’t want to put time and effort into developing behavioral profiles.” The latter statement is more definitive and direct, and understanding it that way prevented us from developing the requested generic profiles that we knew wouldn’t be specific enough to give acceptable results.

It’s more art than science, but being able to distinguish what a customer says from what they mean can save significant time and effort in product revisions.

Who they are

It may be obvious that a person’s identity and experience affect the things they say. In software-related industries, I’ve noticed a few patterns in how people in certain roles think. Certainly not everyone conforms to these descriptions, but for whatever reason—common basis of education, experience, objectives, and so on—people in the following roles generally tend to share a few opinions and biases:

  • Salespeople and business developers want everything to be easier and more clear-cut. Their feedback might seem to trivialize statistics and software engineering in favor of making everything easier to understand. They want the product to solve the customer’s problem directly, in one step (which is rarely possible).
  • Software engineers want to make everything more efficient, and they want to add every capability. Their feedback might include phrases like “You could make it faster if...,” “If you just __________, it could also do...,” and “Does it do __________? Why not?” They want the product to handle all of the edge cases of every potential user and with exabytes of data.
  • Other data scientists want to make the analytics smarter and the data-driven graphics impossibly informative. Their feedback might include “Which statistical methods did you use? Oh.” and “You could probably visualize that by....” They want the product to give academic-quality results for every use case and to have enough flexibility for powerful statistical analysis.
  • Subject matter experts mainly want their problem solved; they have little experience with software or data science but are willing to learn what they need, to an extent. Their feedback is usually reserved and inquisitive, such as “Can I do ________?” and “How do I ________?” Because they want their problem solved, they’re open to almost any strategy or method unless they realize the product can’t solve their problem.

I hope that these are useful lenses through which you can consider feedback from people in various business roles, even though they obviously aren’t perfect. All feedback can be important, but the question of whether to act on it depends heavily on whether your goals align with the giver of the feedback.

Their perception

It’s important to consider what someone observed about the product and the addressable problem before they gave their feedback.

While working with a startup recently, an adviser suggested that, for the complex analytic component that I built, we should stop using the file system for intermediate storage and instead put intermediate and final results into a database. I understood his thinking: clearly, databases are faster than a file system for all but the simplest reads and writes, so it would be an obvious improvement to switch to a database, regardless of which type. In addition to that, we were working with many terabytes of data, which was a lot in 2015. This wasn’t the first time that I’d received this advice, so I had experience dealing with the arguments on both sides. Given my experience, I wasn’t tempted to follow it. My reasons for not following the advice at the time were these:

  • A database is a software component that must be configured and managed on every environment, whether during testing or during a live customer deployment. That’s a significant amount of work and maintenance.
  • I was the only person who was developing or maintaining the code for this analytic software component, and I didn’t have a lot of experience with multiple deployments of databases.
  • The file system storage was working quite efficiently.
  • I performed some code profiling and found that the majority of the time was spent on calculations, not on storage. Switching to a database would have yielded, at best, less than a 2x efficiency improvement.

The startup adviser who had recommended that we switch to a database—any database—didn’t know any of these facts. As the data scientist and developer, I was privy to them, and I knew he wasn’t, so I could weigh his comments in the context of what he knew and what he didn’t. I knew that switching to a database would mean a lot more work for me during each deployment, and we didn’t have the money to hire anyone else, so it was more sustainable to go without. Many software engineers might disagree with me, but I was laser-focused on getting correct results from the software and less concerned with whether it took 1.5x longer than it could.
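The profiling check in the list above can be sketched with Python’s built-in cProfile. The `compute` and `save` functions here are stand-ins for the real analytic component and its file-system storage:

```python
import cProfile
import io
import math
import os
import pstats
import tempfile

def compute(n=100_000):
    # Stand-in for the analytic work: dominated by calculation, not I/O.
    return [math.sqrt(i) * math.sin(i + 1) for i in range(n)]

def save(results):
    # Stand-in for file-system storage of intermediate results.
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
        f.write("\n".join(f"{x:.6f}" for x in results))
    os.unlink(f.name)  # clean up the scratch file

profiler = cProfile.Profile()
profiler.enable()
save(compute())
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()
# The report shows whether compute() or save() dominates. Only when storage
# dominates would a faster backend repay its operational cost.
```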

It would be incredibly difficult to characterize how every person’s perception might differ from yours, but I’d like to stress that every person’s perception differs from yours, at least a little. Their feedback should be viewed through the lens of what they know about the product and the situation.

12.2.4. Ask for feedback if you must

Some data scientists deliver products and forget about them. Some data scientists deliver products and wait for customers to give feedback. Some data scientists deliver products and bug those customers constantly. I don’t want to imply that the last category is the best, but it’s often a good idea to follow up with your customers to make sure that the product you delivered addresses some of the problems that it was intended to address. There are two main reasons why you might want to do this.

Reputation

The first reason to follow up with your customer is that, presumably, successful projects and satisfied customers are good for your reputation and that of your team and employer (if any). Certainly, it’s no fun to spend weeks or months building something that isn’t useful. But more important than that, the project was a crucial part of your work, and it reflects the quality of work you do in general and what customers can expect in the future. Following up with customers allows you to address any concerns, in particular the easy-to-solve ones, and greatly improve customer satisfaction.

Learning opportunity

In asking for feedback from your customers—playing the long game—you can improve yourself and your skills in data science. After delivering a product, there are few ways to figure out where you went wrong and where you succeeded without talking to the customer. Asking them how things went—thoroughly, as described throughout this chapter—provides one of the only insights into the success or failure of the product you designed and built.

12.3. Product revisions

Every product developer wishes they could deliver their product to the customer, have it solve all of the customer’s problems, and live happily ever after. That rarely happens. As with every step of the data science process, the initial phase of product usage is subject to uncertainty. Usually there are problems. Sometimes, as discussed earlier in this chapter, those problems can be fixed or alleviated by working with the customer to help them use the product more effectively. But in other cases, problems arise that are either difficult or impossible to solve by training the customer to work around them. Revising or changing the product in some way then becomes the best—or only—option.

Earlier in this chapter, I discussed different types of problems with the product that might occur and how they might be fixed. In some cases, I suggested that the problems could be fixed by changing or revising the product itself. Making product revisions can be tricky, and finding an appropriate solution and implementation strategy depends on the type of problem you’ve encountered and what you have to change to fix it. In this section, I discuss product revisions as a direct result of prior uncertainty about some aspects of the project, the processes of designing and engineering revisions, and the decision of which product revisions to make in order to maximize the project’s chance of success.

12.3.1. Uncertainty can make revisions necessary

If, throughout the project, you’ve maintained awareness of uncertainty and of the many possible outcomes at every step along the way, it’s probably not surprising that you find yourself now confronting an outcome different from the one you previously expected. But that same awareness can virtually guarantee that you’re at least close to a solution that works. Practically speaking, that means you never expected to get everything 100% correct the first time through, so of course there are problems. But if you’ve been diligent, the problems are small and the fixes are relatively easy.

It might even be the case that you knew that there was some chance that these exact problems would arise, and you may have even planned for that possible outcome. If this is true, you must have maintained incredible awareness throughout the project, and now you’re in an excellent position to find and enact tenable solutions to the problems. If you didn’t foresee these problems, you’re with the vast majority; knowing that there is uncertainty isn’t the same as knowing which outcomes will come to be. But now that you know which problems (and which successes) popped up after product delivery, you’re in a better, more informed position than ever from which to achieve the project’s main goals. It will, however, take further effort from you to design and build the necessary product revisions.

12.3.2. Designing revisions

By designing revisions I mean the process of figuring out what product revisions are necessary and what form they should take. The design process conceptually identifies and imagines product changes without making them.

Designing a revision is largely the same as designing a product in the first place, except that there’s already a product in existence, hopefully most of which you can still use. But although you may be able to continue to use large parts of the existing product, it may be best not to. To borrow a term from finance, the time and effort spent on anything that you’ve already built are sunk costs, so the costs of building the existing product shouldn’t be considered when making future decisions. You can consider the existing product as an asset at your disposal when making those decisions.

Earlier in this chapter, I talked about recognizing and finding remedies for various types of problems with a product. At this point you should know generally what the problem is and how it might be fixed, but I haven’t yet given many specifics about the remedies themselves. Here I’ll say a bit more about designing these remedies in the form of product revisions for three types of problems.

Bugs

From a design perspective, fixing a bug is trivial. Within a certain context, the software does something when it’s obvious it should be doing something else. The design of the revision is to make the software stop doing the former and do the latter. The engineering of the revision, however, may not be so trivial.

UX problems

UX problems are all about design. Tricky ones may require the help of a UX expert. Given that the first version of the product contained a UX problem, unless the problem and the specific solution are now obvious (which they sometimes are), it’s entirely likely that a second try at a UX design will also be problematic. For nontrivial problems, it’s probably worth putting more effort into a UX redesign than was put into the original design, and this usually requires the help of someone with UX experience. At the least, it’s worth spending some time with the users to get a good impression of how they’re using the product and how the problems arise.

Problems with functionality

With respect to analytic software applications, I use the term functionality to mean any actions taking place behind the scenes—data collection, data flow, computation, storage, and so on—as well as the application’s general ability to deliver information and results to a user. A problem with functionality can therefore be, for example, the application taking excessive time to deliver a result to a UI, an improper statistical model being used, or a crucial piece of data not being used properly.

Making revisions to a product’s functionality often requires architectural changes, which can have consequences for many other aspects of the application. Such revisions shouldn’t be taken lightly. It’s often best to have discussions with everyone involved in the design and engineering of the product in order to run through the repercussions of making the revisions under consideration. For more complex analytic applications, it may not be entirely clear how dramatic and far-reaching any revisions might be, and it may take some investigation to find out.

When designing significant revisions in functionality, I typically gather all members of the team with key knowledge about the product and its architecture and follow these steps:

  1. Plainly state a best-case revision, with respect to the end result for a user.
  2. Get feedback from everyone present regarding the repercussions within the application.
  3. Consider whether the best-case revision can be achieved or if some aspect of it is untenable.
  4. Get estimates on time and effort required to make the desired changes.

These steps can be repeated for any number of suggested revisions. Then the revisions can be compared with each other based on time and effort required, closeness to the best case the revision was intended to achieve, and the overall impact to the product and the project’s success.
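As a rough sketch, the comparison at the end of these steps can be made explicit in a few lines of code. Everything here is hypothetical: the revision names, the effort estimates, the closeness and impact scores, and the weights in the scoring formula are all invented for illustration, not prescribed by the process above.

```python
from dataclasses import dataclass

@dataclass
class RevisionOption:
    name: str
    effort_weeks: float  # estimated time and effort (step 4)
    closeness: float     # 0-1: how close to the best-case result (steps 1 and 3)
    impact: float        # 0-1: expected impact on the project's success

    def score(self, max_effort_weeks: float) -> float:
        # Illustrative weighted score: reward closeness to the best case
        # and overall impact; penalize effort relative to the largest estimate.
        effort_penalty = self.effort_weeks / max_effort_weeks
        return 0.4 * self.closeness + 0.4 * self.impact - 0.2 * effort_penalty

# Hypothetical candidate revisions gathered from the team discussion
options = [
    RevisionOption("Rewrite data-flow layer", effort_weeks=6, closeness=0.9, impact=0.8),
    RevisionOption("Patch existing pipeline", effort_weeks=2, closeness=0.6, impact=0.5),
]

max_effort = max(o.effort_weeks for o in options)
best = max(options, key=lambda o: o.score(max_effort))
print(best.name)
```

The point of a sketch like this isn’t the arithmetic; it’s that writing down the estimates forces the team to state them plainly, which is exactly what steps 1 through 4 are for.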

12.3.3. Engineering revisions

By engineering revisions I mean the process of taking a revision design and building it into the product. For analytic applications, this means coding or incorporating new tools.

Like a product design, a product architecture can have considerable inertia, but it should be considered both a sunk cost and an asset. Many parts of the product’s architecture may still be useful, but some of them may not. Deciding when to throw out a significant chunk of code can be a tough call, but it’s sometimes the right thing to do.

On the other hand, there may be clever ways to use your current architecture to fulfill the requirements of a revision design. Use these clever solutions at your own risk. They may save you time now but may cost you significant time and effort later, should you ever need to make further revisions or additions to the product. If you find yourself considering a clever solution—a solution that cobbles together existing software components in a way that wasn’t originally intended—it can be good to ask yourself and your team these questions:

  • If I rebuilt the entire product, would I build it this way?
  • Why not?
  • Do the reasons why not apply to the current situation and indicate that I’m taking undue risk by engineering this clever solution?

A little bit of foresight here can go a long way. A single clever revision is usually OK, but a second round of revisions on top of a clever revision almost guarantees more, cleverer revisions, and the cycle of revisions can continue until the entire code base is so clever that no one knows how to fix or revise anything without unraveling the application like a ball of yarn.

Clever solutions notwithstanding, I’ll say a bit more about engineering revisions based on a few types of problems.

Bugs

The lifecycle of a software bug is finding it, diagnosing it, and fixing it. Finding it is the easy part. Diagnosing the root cause of the bug can go either way, easy or hard, depending on how deep and how interrelated the bug is with respect to the application. Fixing the bug can also be easy or hard, and this level of difficulty can be independent of whether the bug was difficult to diagnose. Some bugs are diagnosed easily, but their fixes are anything but easy, and the opposite can also be true.

In most cases, making a product revision that includes bug fixes is straightforward. You diagnose and fix the bugs. Little design is involved, and there’s barely a question of whether or how a bug should be fixed. But in some unfortunate cases, the diagnosis or fix for a bug is so incredibly complicated that it can’t be accomplished in a reasonable timeframe, if at all. In these cases, you must make important decisions about the impact of the bug and whether you can work around it in some useful way. Tolerating a bug is never fun, but some bugs are stubborn enough that you have to let them stick around for a while—or forever—if for no other reason than business pressures dictate that the project must be brought to a close. Minimizing the impact of the bug then becomes the goal of the revision.

UX problems

Revisions intended to fix UX problems are design heavy. Figuring out what to do is usually much harder than engineering the revision itself. Certainly some UX aspects require sophisticated engineering, but most don’t. Many UI frameworks exist that enable various standard behaviors, and the vast number of software applications in existence makes it likely that someone has already built a UX similar to yours. Be sure to have your UX designer inspect your revisions thoroughly before you deliver them to the customer.

Problems with functionality

Revisions addressing problems with functionality can have far-reaching consequences within your application. My comments on such revisions in section 12.3.2 also hold true here. It’s best to gather the team and run through all consequences and repercussions of making various desired revisions.

Beyond the known or foreseen repercussions, other unexpected effects might appear as well. Changing an object in your software so that it has a new capability can change application behavior in every place in the code in which that object is used. Object-oriented programming is notorious (and beloved) for relying on side effects to get things done, some of which may not be obvious on first reading the code. For complex applications, software developers rely on unit tests and integration tests (among other types of tests) to help ensure that new bugs aren’t introduced by software changes. If you’re worried about introducing bugs during a revision (or even if you’re not), it can be a good idea to include tests in your code.

Many software engineers argue (and I agree with them) that this is a late time to begin formal testing of your application code. Ideally you would have written tests as soon as you wrote any code, and you would always have 100% test coverage for your application. But being a data scientist and not a software developer, I rarely write tests as I write code for the first time. I usually adapt experimental or research code to make application code, and testing is an afterthought. It’s only when I realize that I’ll be making a series of revisions, and I worry that I’ll break something in the process, that I begin writing tests. It may not be an ideal workflow, but it’s what I tend to do. In any case, it’s better to write tests late than to write them never, and this point in the process is a good time to include them if you haven’t already. Ask an experienced software engineer or consult any popular software testing reference for more information on using testing to help prevent the introduction of bugs during development or revision of your product.
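To make the idea concrete, here’s a minimal sketch of retrofitting tests onto existing analysis code before making revisions. The `normalize` function is a hypothetical helper of the kind often found in research code that later ends up inside an application; it isn’t from this chapter’s product.

```python
def normalize(values):
    """Scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # guard against division by zero
    return [(v - lo) / (hi - lo) for v in values]

# Regression tests written late, before revising, so that any change to
# normalize() that breaks its current behavior is caught immediately.
def test_normalize_range():
    assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]

def test_normalize_constant_input():
    assert normalize([5, 5, 5]) == [0.0, 0.0, 0.0]

if __name__ == "__main__":
    test_normalize_range()
    test_normalize_constant_input()
    print("all tests passed")
```

Even a handful of tests like these pins down the behavior you depend on, so a revision elsewhere in the application that quietly changes it will fail loudly instead of silently corrupting results.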

12.3.4. Deciding which revisions to make

Once you recognize a problem with the product and figure out how it can be fixed, there remains the decision of whether to fix it. The initial inclination of some people is that every problem needs to be fixed; that isn’t necessarily true. There are reasons why you might not want to make a product revision that fixes a problem, just as there are reasons why you would. Here are some examples.

Here are some possible reasons in favor of making a revision:

  • You’ll have a better product, a more satisfied customer, and a more successful project.
  • You’re contractually obligated to make certain types of fixes or improvements.
  • Future projects, products, or revisions depend on this one, and making a revision now will improve the quality of those.
  • The customer is willing to pay more for the revision.
  • The customer is asking for a revision, and granting the request will help you maintain or improve your relationship with the customer.

And here are some possible reasons against making a revision:

  • The revision doesn’t improve the product much.
  • The revision is difficult or time consuming to make.
  • Your contractual obligations have already been fulfilled, and you feel that the product is, as they say, “good enough for government work.”
  • You suspect the customer won’t notice the problem or won’t be significantly affected by it.

Here are some variables that you should generally take into account:

  • Time and effort required to make the revision
  • The real impact of the revision on the product and its efficacy
  • Contractual obligations
  • Other possible revisions and their impacts, time, and effort
  • Conflicting obligations of the development team

The final decisions, like many decisions, boil down to weighing the pros and cons for each side and then making the call. In my experience in software development and data science, the important thing is to stop and consider the options rather than blindly fixing every problem found, which can cost a lot of time and effort.
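One way to force yourself to stop and consider is to write the pros and cons down as an explicit checklist. The toy sketch below does exactly that; the rule it uses (a contractual obligation overrides everything, otherwise simply count the reasons on each side) is an illustrative simplification, not a formula from this chapter.

```python
def decide_revision(reasons_for, reasons_against, contractually_obligated=False):
    """Return True if the revision should be made."""
    if contractually_obligated:
        return True  # a contractual fix is not optional
    # Otherwise weigh the lists of reasons against each other.
    return len(reasons_for) > len(reasons_against)

# Hypothetical decision for one candidate revision
should_fix = decide_revision(
    reasons_for=["customer asked for it", "improves product quality"],
    reasons_against=["time consuming to make"],
)
print(should_fix)  # True: more reasons for than against
```

In practice the reasons carry different weights, so the call stays a judgment call; the value of listing them is that nothing important gets weighed implicitly or forgotten.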

Exercises

Continuing with the Filthy Money Forecasting personal finance app scenario first described in chapter 2, and relating to previous chapters’ exercises, try these:

1. Suppose that you’ve finished your statistical software and integrated it with the web app, but most of the forecasts appearing in the app seem to be incorrect. List three good places to check when trying to diagnose the problem.

2. After deploying the app, the product team informs you that it’s about to send selected users a survey regarding the app. You may submit three open-ended questions that will be included in the survey. What would you submit?

Summary

  • Customers tend to find completely new ways to break your product.
  • The process of recognizing, diagnosing, and fixing problems in the product should be undertaken deliberately and carefully.
  • Getting feedback is helpful, but it shouldn’t be taken at face value.
  • Product revisions should be designed and engineered with the same level of care (or more) as when you designed and built the product itself.
  • Not every problem needs fixing.