Chapter 4. Working Product

A working product at all times should be the ultimate goal for every software team. The further a product strays from a working state, and the longer it stays that way, the greater the effort required to get it working again.

Software products need to be in a working state at all times. Such a product could be called virtually shippable: it could be shipped after a short period of verification testing and stabilization, the shorter the better. A working product is not necessarily feature complete; rather, it can be shown to or given to a customer for use with the minimum possible number of unexplainable errors. The closer a team can get to continually keeping its product in a working state, the better the chances its efforts can be sustained over time, both because of the level of discipline required and because a working product imparts greater flexibility and agility.

A working product is an underpinning of agile software development. Agile development demands the ability to get continual customer feedback, and the only way to get feedback on features and to keep customers interested in working with the product is to provide the customers a working product on a regular basis. A working product is a demonstration to customers that this is a product they can rely on. It is extremely difficult to keep customers interested in using a product if they develop a low opinion of it because it is unreliable, crashes, and has unpredictable problems and regressions (where features they depend on no longer work). Regressions are particularly dangerous, and one important aspect of having a working product is that nothing should be added to the product until the regression is fixed.

A working product also gives the product team an incredible amount of flexibility, not only in being able to change directions, but also in taking the latest version of software and shipping it at any time with the critical feature(s) the market demands. This flexibility translates into a sense of power for the development team derived from feeling in control and using that control to respond to changing situations.

The working product principle is a deliberate contrast to the traditional code-then-fix method of software development. This method, which is probably the most commonly used method of software development, is the practice of developing a feature and then debugging and altering the code until the feature works. When software is developed in short iterations of two to six weeks as in agile software development, the temptation is to believe that it is acceptable to apply code-then-fix in each iteration. However, the problem with the code-then-fix method is that teams are forced to waste time and effort debugging, effort that should be applied to solving customer problems. Wasted effort is a lost opportunity. Hence, having a working product every day and fixing defects as you go minimizes wasted effort and allows teams to focus on their customers. It also dramatically reduces the buildup of technical debt in the product, because problems are caught as early as possible. As explained in Chapter 5, this requires a change in mentality about software development: All attempts must be made to prevent defects from reaching people, whether they are testers or customers.

In order to keep a product in a working state, teams need to focus on the quality of everything they do. Achieving this level of product quality requires a great deal of discipline and focus on the part of the development team: focus on the quality of the product and the discipline to ensure that every software modification passes an extensive suite of automated tests, or it is removed from the product until it does pass the tests (see Ruthless Testing in Chapter 5). Whenever a product is in a non-working state and changes are still being made to it, the development team is literally flying blind and technical debt is accumulating; a debt that will have to be repaid later. This situation is a recipe for disaster because the cost of fixing problems increases with time, as does the probability that multiple problems will show up in the final product and be found at customer sites.

A useful conversation to have with a product team is around the question: “What would it take for us to ship our product every day to our customers?” In this case, ship could mean providing daily updates or a complete new version of the software to install. The purpose of answering this question is to get the team thinking about minimizing the overhead activities that occur after the software is declared complete and can then be shipped to customers. The answers should be enlightening, because they will highlight not only all the changes the team will have to make but also the ripple effect this would have on the team’s surrounding organization. Here are a few of the topics that might come up:

  • Quality assurance and testing become critical issues. How are you going to keep the level of quality high in your software? If you are going to ship every day, people can’t do all your testing by hand. You’re going to have to automate as much testing as possible, and this likely means designing the product and its features for testability rather than thinking about testing afterward. And where people are required for testing, you’re going to have to ensure that they are available when they are needed, not working on another project.

  • How much validation testing must take place on the final version of the product before the team and its customers have enough confidence in the product that it can be fully deployed? How many of the current validation tests could be automated and run every night or on a more frequent basis so that the testing is essentially done much earlier?

  • Are your documentation and web site going to be ready to reflect the new changes every day? Most likely you’re going to have to work out some automated way of updating the web site while ensuring that product documentation can be loaded off the Internet and not just the local hard drive (i.e., updated live).

  • What happens when a customer finds a problem? How is he going to report it to you? How are you going to prioritize customer reports so that urgent fixes are made as rapidly as possible? How are you going to minimize the chances that fixing a defect breaks something else?

  • When a customer reports a problem to you, one of the first things she is going to want to know is what the status of the fix is. Are you going to let the customers review the status of their defect reports online and see when they can expect fixes?

  • How are you going to develop features that take a long time (weeks to months) to complete? Most definitely the wrong answer is to wait until the feature is complete before merging it into the source code. You are going to have to find a way to integrate these features piecemeal while ensuring the product still works and while telling users what to expect every day, and you’re going to have to find ways to get input from customers on the new features as they progress.

  • What if a critical feature breaks in a daily release? Customers would have to be able to easily report the problem to you and back up to the previous day’s version while they wait for a fix. This is the time when you will have to demonstrate responsiveness to your customers, because the time they spend waiting for a fix or backing up to the previous version costs them time and money.

The goal in this exercise is to ship the product to customers as frequently as possible. Shipping frequently encourages teams to minimize their overhead while maximizing their productivity, so that they can ship frequently AND implement the features that their customers need. This is only possible by focusing on a working product, because doing so forces teams to be ultra-efficient at finding, fixing, and preventing failures, as explained in Chapter 5. Teams that are able to achieve the frequent ship/features balance will consistently outinnovate teams that suffer one or more delays as they try to fix the problems that prevent their product from being in a working state.

Shipping daily is the ultimate goal, but there may be factors that make it possible to ship only once a week or once a month. What you want to avoid is being able to ship only a couple of times a year. Note that it’s one thing to choose to ship twice a year and quite another to only be able to ship twice a year! Not every release needs to be installed by every possible user; it is simply important to get the software installed and used on real problems as frequently as possible. Many open source projects and a smattering of commercial products actually ship every day, or at least very frequently. A few examples are:

  • Eclipse (http://www.eclipse.org) is an open-source IDE project from IBM. A reliable update mechanism is built into the software so that updates can be made at any time.

  • Qt (http://www.trolltech.com) is a commercial GUI cross-platform toolkit.

  • Renderware (http://www.renderware.com) is a commercial game engine produced by Criterion that ships weekly.

  • Torque (http://www.torque.com) is an inexpensive but high-quality game engine that is virtually open source.

  • Microsoft regularly provides updates to the Windows operating system and Office tools. Although Microsoft makes an easy target for many people, I think it’s worth pointing out that, overall, its updates work.

  • Norton Anti-Virus is a common desktop tool that is updated whenever a new virus has been detected on the Internet.

Think of shipping every day from a purely technological standpoint. We have the technology right now in the Internet and binary update mechanisms to actually perform live updates of customer software as the developers merge their changes into the product, which would essentially entail multiple software updates every day. There are many reasons why we don’t do this; I think the biggest is simply the baggage that the software industry is carrying around: that software quality is poor and updates should be done carefully or only after some (usually long) period of verification testing. There are clearly cases where verification and extreme caution before installation of a new software version is mandatory: for software systems where a software error could cause a person or other living thing to be injured or killed, the economy to collapse, or an expensive piece of equipment to be permanently disabled or destroyed. Clearly, these mission-critical software systems shouldn’t be installed without a great deal of caution.

The problem is that the vast majority of software produced today is not mission-critical, yet the software industry has done such a good job of training its users to expect poor quality that they assume a simple software upgrade will stop them from getting their work done. Consider a commercial business whose production pipeline is made up of one or more software products. Today, the confidence such a business would have in upgrading a product while people are working on a project is nil. But what if a production pipeline could be changed during a project? And changed frequently? What is the advantage to the software team that could offer this ability to its customers? That is the power of having a working product.

All the practices recommended in this chapter help project teams concentrate on keeping their product in a working, virtually shippable, state. Having a singular focus on a working product should be a rallying cry for every project team, one that will lead to significant positive team morale and a sense of ongoing accomplishment and ability to be proactive not reactive (i.e., in control). Achieving a working product is not extra work if it is done every day!

Practice 1: No “Broken Windows”

This practice could also be thought of as “no hacks or ugly workarounds.” The authors of The Pragmatic Programmer [Hunt and Thomas 2000] use the metaphor of a house: Once a house has a broken window, its occupants tend to become careless and more prone to leaving broken windows themselves. Another metaphor is the counter space in your kitchen: All it takes is for someone to leave one unopened letter on a section of your counter, and before you know it, there will be more. Software has the same properties as a house or a kitchen countertop: It’s exceedingly easy to make changes in a sloppy, haphazard way, and once one sloppy change has been made, others will follow; this is unfortunately human nature. Therefore, never start the decay, and if you find some bad code, clean it up immediately. If the programmer who left the mess behind is still working on the code, make sure he or she knows about the cleanups you have made and why you made them; peer pressure to write good code is a good thing.

In software, once decay has started, it will continue and worsen over time. It is hard to care about leaving the code in a good state when there are obvious signs of neglect. The best way to combat this tendency is to always leave the code in a better state than you found it. Don’t leave a mess behind for someone else to clean up! Ugly workarounds cause problems and are a symptom that the product is most likely not in a working state.

Practice 2: Be Uncompromising about Defects

It is vital that you adopt an uncompromising attitude toward defects. The goal is to avoid a defect backlog, where there is a list of open defects that is too large for the team to deal with immediately. Defect backlogs are evil and must be avoided because:

  • Defect backlogs burden teams with repetitive searching, sorting, and categorizing tasks that waste time and take away from productive work.

  • A backlog, by definition, means that more defects are being carried around than can be reasonably dealt with. The number of defects makes it more difficult to find the defects that must be fixed, the defects that customers repeatedly encounter and can’t work around.

  • A defect backlog is a strong indicator that new features are being developed on top of defective code.

  • The larger the backlog, the greater the chance the backlog is never going to be eradicated. I have seen cases where teams were carrying around backlogs that would take over a year to fix, even if all they did was fix bugs and no new problems were introduced!

  • The larger the backlog, the greater the chance that defects are going to be shipped to customers and that customers will find defects that they can’t work around, so you’re going to have to send them a patch. This is expensive for you and your customer.

  • The greater the time between when a defect is introduced and when it is fixed, the greater the cost of fixing it. Carrying a backlog thus means that you are implicitly accepting higher costs of development and future change.

How to Avoid a Defect Backlog

  • Be ruthless: When a defect is reported, decide as quickly as possible to either fix it or get rid of it, preferably by marking it with a special state to indicate that it won’t be fixed along with an explanation as to why. This way you won’t lose the information.

  • Fix regressions, where a feature that used to work stops working, immediately. Regressions put the product into an unknown state where users won’t be able to trust it and developers won’t be able to make changes with confidence. To achieve sustainable development, regressions must be fixed immediately, before any other changes are made to the product, even if that means that work on some new feature must be stopped to fix the regression.

  • Once you have decided to fix a defect, do so as quickly as possible, and for as many defects as you can manage. Understand what caused them and how to prevent them in the future.

  • Fix all the known defects in a new feature before moving on to work on the next feature. Then, once you move on to the next feature, be ruthless with any newly reported defects.

Put More Effort into Defect Prevention Than Bugfixing

The most important point about being ruthless about defects is that you need to put more effort into preventing defects than into fixing them. Defect detection is still vital; it’s just that defect prevention is even more important. Being ruthless about defects and trying to avoid the accumulation of a backlog is one thing, but if your product doesn’t have safeguards in place to prevent defects from getting to your QA department or customers, then you are still going to get a backlog, no matter how fast you fix and classify your defects. Defect prevention is described in detail in Chapter 5.

Being Pragmatic about Defects

If you are starting a project with known technical debt or are concerned that your product quality may slip, there are a few things you can do:

  • Insert bug-fix-only iterations between feature development iterations. If product quality is slipping, you’d be surprised by how many users appreciate a stable product with a few focused features.

  • Set some quality-related exit criteria for your iterations. Then, set the rule that the team can’t call an iteration complete until the exit criteria are met. An example exit criterion might be “There can be no ‘stop ship’ defects in the product and no known regressions.” Another might be “There can be no known open and unresolved defects” (to reinforce the need to be ruthless). These exit criteria add some extra discipline to iterative development through having a clear goal.

  • A variant on the previous point is to let the team decide (the team includes the users or user representatives!) when the quality is good enough to move on to the next iteration.

These ideas might cause some people to balk, but I think you need to do what is best for your project. Your goal is to ship repeatedly, and with the least amount of wasted effort. I have used or seen each of the above suggestions on various projects, and they work. However, you do need to be careful, because you should not need to regularly rely on these nor use them for any extended period of time; disciplined iterative development alone should be sufficient.

Practice 3: “Barely Sufficient” Documentation

In order for teams to concentrate on a working product, they should minimize the amount of time spent writing documents that are not part of the product. Examples of this type of document are requirements and design documents[1].

Barely sufficient documentation is another important part of agile development. All too often, teams forget why they produce documentation: to produce a working product. In too many cases, documentation is required before a project can proceed to the next step. This introduces a linearity to development that is unhealthy and unrealistic. Many people lose sight of the fact that what truly matters in any development project is the process, especially the conversations and decisions required to produce a useful product. Documents are secondary to collaboration and learning and are a result of the process, not its focus.

This practice is intended to encourage teams to focus on what matters the most: their working products. If you have been involved in or observed a project where the team spent months or years trying to understand user requirements and writing documents without ever producing a product (or even writing any code at all), you’ll understand the reasoning behind this rule! Customers don’t know what they want until they see it; documents don’t help customers at all.

Some of the barely sufficient forms of key documentation are:

  • The use of index cards for feature descriptions in cycle planning. Index cards describe features, record the essential facts, and encourage team members to discuss the details. They also give team members something tangible to hold in their hands and quickly move around during planning sessions.

  • The collection of feature cards for a project should serve as adequate documentation of requirements. It may be necessary in exceptional cases to provide additional documentation, but these should be truly exceptional circumstances, not the norm.

  • Design your software collaboratively in front of a whiteboard. Great discussions almost always result, highlighting problems and alternative approaches. Then simply take a picture of the whiteboard when done and place the image on a web page.

  • An alternative to drawing a design on a whiteboard is to use a sketching tool. There are some good inexpensive tools available, such as SketchBook Pro from Alias (http://www.alias.com).

  • Put design details like relationships and code rationale with the source code (where they can be easily kept up to date) and extract them using a tool like JavaDoc or doxygen. This is a good practice because it encourages collaboration and decision making over document writing, in addition to encouraging teams to get on with developing their product.

For details on designing software in an agile development context, refer to the section Simple Design in Chapter 6 on Design Emphasis.

The examples given above of barely sufficient documentation are by necessity brief and far from exhaustive. An entire book [Ambler and Jeffries 2002] has been written on this topic, and interested readers should refer to it for further information.

Practice 4: Continuous Integration

An important team discipline is to continuously integrate changes together. Frequent integration helps to ensure that modules that must fit together will, and also that the product continues to work with all the changes. Many developers have the bad habit of checking out a number of files and not checking them in again until their work is done, often days or weeks later. Developers should integrate their work every day, or even better, many times per day. This gradual introduction of changes ensures that integration problems or regressions are caught early, in addition to naturally allowing multiple developers to work on the same section of the source code. Of course, in order to catch problems it is important that your team has automated tests in place to help catch integration problems as explained in Chapter 5.

Practice 5: Nightly Builds

In order for your team to ensure that your software is working, it should be completely rebuilt from scratch one or more times per day. The build should include as many automated tests as possible to catch integration problems early, and the result of the build should be an installable product image. Nighttime is a good time to do this because no one is working on the code, and there are plenty of idle computers available. However, if you can do it, there is a lot of benefit to building the product as many times per day as possible. If the build or tests fail, fix the problems first thing in the morning and don’t let anyone integrate any additional work until after the build succeeds again, otherwise there is a risk of multiple bad changes accumulating that will jeopardize the quality of the product.

Pay Attention to Build Times

You should always keep an eye on product build times. The faster the build, the faster the feedback, and the faster you can generate a cut of the product when you need one. Long build times encourage developers to take potentially dangerous shortcuts and to pay less attention to keeping build times under control.

Long build times are almost always an indicator that something is wrong with the structure of the product: too many interdependencies between modules, not enough modules, or worst of all, duplicated or unnecessary code. In languages like C or C++ there might also be include-file bloat, with headers containing unnecessary #include directives or declaring inline functions that should be moved to the .cpp file.

Here’s an example of the most common problem:

// File: A.h

// This class contains a member variable that is a pointer
// to an instance of another class.
//
#include "B.h"

class A {
       ...
   protected:
   class B *m_Bptr;
   ...
};

The problem with this seemingly innocent class is the #include of “B.h,” which is the file that declares “class B.” If this file that declares “class A” (A.h) is included in many places in your product, build time will suffer because the preprocessor is going to spend a great deal of time processing A.h AND B.h, when in fact B.h is not necessary because a forward declaration can be used in A.h and an include of B.h only added to the .cpp files that actually need to use “m_Bptr”:

// File: A.h

// This class contains a member variable that is a pointer
// to an instance of another class.
//
class B;    // All class A needs is a forward declaration of
            // class B.

class A {
       ...
   protected:
   class B *m_Bptr;
   ...
};

Tip: Timestamp the start and end of all builds

A good practice to follow is to timestamp the start and end of all complete builds and keep a log. Then, write a simple script to catch large increases in build time. If possible, produce a simple chart that can then be put on a web page and quickly viewed. It’s always easier to glance at a chart than to dig through a log file!

Practice 6: Prototyping

Making a conscious effort to prototype solutions to risky problems helps to increase the chance of having a working product. Prototypes are an inexpensive way to try out ideas so that as many issues as possible are understood before the real implementation is made. There are two main classes of prototypes that you can use to positively impact your product. The first is the true prototype, where a test implementation is used to understand a problem before it is implemented for real. The second is the notion of “tracer bullets” [Hunt and Thomas 2000].

Reducing risk is a key part of working software. You need to continually assess risk in collaborative planning sessions, but, most importantly, when key risks are identified, you need to do something as early as possible in the project to ensure the risk is well understood. Prototypes are an extremely effective and relatively inexpensive way to evaluate risks. Not only do you gain an understanding of the nature of the risk, you have a prototype of a solution to the problem and hence have much greater confidence that you understand the length of time required to address the problem. You also have an implementation you can borrow from, even if it is only ideas, when implementing the complete solution.

Throwaway Prototypes

A throwaway prototype is an excellent tool to evaluate risky features and develop better time estimates for a feature. As explained in the “iterative development” section, the elimination of risk as early as possible in a project is often a critical project success factor. Doing a quick prototype in a small number of iterations is often an excellent way to acquire a much better understanding of the problem and the actual risk. Also, by developing the prototype as a throwaway, usually in a copy or branch of the product or a simple prototype application, it is easy to ensure that the product is still in a working state when the real solution to the problem is implemented.

Quite simply, a prototype is a quick and dirty implementation that explores key target areas of a problem. Prototypes should not be production ready, nor should they provide a complete implementation of the desired solution. They might even be implemented in a scripting language like Python or Ruby to save time. The bottom line with prototypes is that they should be used as a point of reference only when implementing the complete solution to a problem.

One of the downsides of prototypes is that you often have to show them to customers and management in order to demonstrate progress. If you are not careful, your audience could conclude that the problem is solved and done when, in fact, all you have is a rough proof of concept. If you do show a prototype, make sure your audience knows in advance what they are seeing. You could also make it obvious through rough visual output that clearly does not look done.

A good analogy to use is car design. The car manufacturers all regularly produce concept cars to try out ideas or demonstrate a proof of concept. These cars are one-offs that could be turned into a production version fairly quickly, but they are definitely not ready for shipping to customers. Likewise, many product designers ensure that if they are showing a digital prototype such as a rendered image to their clients that the image does not make the product look done. If you suspect there is going to be pressure to use the prototype for the shipping product, it might be better to consider using tracer bullets instead.

“Tracer Bullets”

“Tracer Bullets” are a deliberate approach that uses a prototype that is intended from the outset to gradually turn into the final solution [Hunt and Thomas 2000]. The analogy is combat: Imagine you have to shoot at and hit a target in complete darkness. The brute force approach would be to point your gun in the right direction and hope for the best. However, a more effective approach is to use glowing rounds called tracer bullets. You start by firing a few tracer bullets and then correcting your aim until you can fire real bullets to destroy the target. In the real world, tracer rounds are loaded every few bullets in a magazine. Hence, the software analogy is that if you’re uncertain you’re pointing in the right direction, start by building a viable prototype and keep adding to it until your customer says, “That’s it!” This is really a microcosm of agile development, where multiple short iterations are used to continually add to a prototype. The difference is that agile development is applied to a collection of features (a whole product), while tracer bullets can be used on one feature, problem, or area of risk.

Practice 7: Don’t Neglect Performance

Performance is one topic that generates passionate discussions in software development. Some people feel that code clarity is more important and that you should get the code clarity right first and then optimize the 1 to 3 percent of code that needs it. Others feel that you should code for performance first, because if you don’t, your code will always be slow.

Personally, I get scared when a development team or developer says “we’ll worry about performance later.” I have seen too many projects where the decision to get the features in before worrying about performance led to months or years of rework. Code clarity does come first, but not at the expense of performance. This is another case where you need to embrace the genius of the AND: You must be concerned with performance AND clarity, with a strong bias toward the latter. You don’t have to be obsessive about performance, but you need to understand what the customer’s expectations are, design for the right level of performance from the beginning, and, where performance really matters, ensure that you have tests to measure performance and let you know when it degrades in a noticeable way (see Chapter 5 for more on performance tests). You should never have to hear from your customers that your product is too slow…!

I also get scared when a development team says its software will get faster on the next generation of hardware. Computers getting faster is a constant, and everyone benefits, including your competitors. Applications are also getting larger and more complex. You need to understand at least the basics of performance for the programming language and framework your product is built on. Just because you develop in a managed language like Java, for example, doesn’t mean that you can neglect performance and leave everything to the virtual machine.

Practice 8: Zero Tolerance for Memory and Resource Leaks

If you develop in a programming language or environment where memory or resource leaks are possible, you must have zero tolerance for leaks. Resource leaks lead to unpredictable behavior such as program crashes, and they can also be exploited by hackers to compromise your customers’ computers.

I wish it were possible to say that this is an easy practice to follow. Unfortunately, even if you use the available tools (see Chapter 5 on Defect Prevention), leaks can still happen. Sometimes, lower-level APIs that your application uses leak memory; there isn’t much you can do about those except report the problem to the vendor. And while it might seem ideal to adopt a policy of always freeing every allocated resource, sometimes you can’t. The most frustrating example is a large application that allocates large amounts of memory. If such an application explicitly frees every allocation when it shuts down, exit can take many seconds, which is frustrating to users. Therefore, to get the appearance of a fast exit, it is common practice not to bother freeing memory at all and to let the operating system reclaim it. This wouldn’t be a problem except that a memory leak detection tool then reports all of the memory and resources not explicitly freed on exit as leaks, which makes legitimate leaks virtually impossible to spot.

All I can advise is to at least exercise due diligence: Don’t give up on finding leaks. The tools available today have APIs that you can hook into your application to start and stop leak detection, and these can be very useful.
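The start/stop pattern looks roughly like the sketch below, which uses Python’s standard `tracemalloc` module as a stand-in for whatever leak detection tool your environment provides. The `leaky_operation` function is a deliberately contrived example of code that retains memory; by snapshotting only around the operation under suspicion, the noise from allocations that are legitimately never freed on exit is kept out of the report.

```python
import tracemalloc

_cache = []

def leaky_operation():
    # Contrived example: each call retains 256 KB it never releases.
    _cache.append(bytearray(256 * 1024))

def find_growth(operation, iterations=5):
    """Run an operation between two snapshots; return net allocation growth."""
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(iterations):
        operation()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    # Positive growth across repeated runs is the signature of a leak.
    stats = after.compare_to(before, "lineno")
    return sum(s.size_diff for s in stats)

if __name__ == "__main__":
    print("net growth: %d bytes" % find_growth(leaky_operation))
```

Scoping the detection window around a single operation is what makes the report actionable: Anything that grows without bound across iterations is a real leak, not exit-time housekeeping.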

Practice 9: Coding Standards and Guidelines

Make sure your team talks about what good code is and what coding practices team members like and dislike. Put them down on paper or on a web page and get everyone on the team to agree to them. Don’t get hung up on silly formatting issues like whether curly braces should appear on the same line or the next; just get everyone to agree that when they modify a file, they should follow the same formatting conventions. Having consistent coding conventions simply makes code easier to read. Plus, there is a huge benefit to having the team discuss good code and bad code: Team members can learn from each other and understand each other’s viewpoints.

Tip: Tune Coding Guidelines to the Team

One of the teams I worked in had one coding guideline: Use whatever coding conventions you’re comfortable with, but if you’re modifying someone else’s code, you have to follow his or her conventions.

This rule worked for that particular team because we all had many years of experience, and we each had our own coding conventions (and we were opinionated about them). We trusted each other to write good code and to be adaptable.

Our goal was simply to avoid problems like files being reformatted (to suit the taste of someone who uses different conventions), variables being renamed (because of different naming conventions), and multiple conventions being used in the same file. We didn’t want to waste time discussing or debating our coding conventions or making changes that had no functional purpose.

Practice 10: Adopt Standards (Concentrate on Your Value-Add)

Another way of stating this practice is don’t reinvent the wheel. For every project you work on, you need to understand where your value-add is, and then ensure that you put as little effort as possible into portions of your project that are not value-add (i.e., plumbing). Reuse as much as you can for the plumbing, even if it isn’t perfect; you can always modify or extend it to suit your needs.

One of the most significant changes to the software industry is taking place in open source software. I believe that everyone in the software industry (and not just developers!) needs to at least understand what open source software is available and what the different open source licenses mean. While only time will tell whether open source turns software into a pure commodity (as some claim), it is clearly changing our definition of what is plumbing and what is value-add. Too many teams spend too much effort developing code that is already readily available to them via open source. The challenge is for everyone to recognize the value of developing the plumbing in a community. The definition of what is a commodity will keep shifting, and this is a healthy progression toward being able to offer customers richer and more sophisticated solutions over time.

Open source software addresses the concern that when problems are found you are powerless to fix them. This is a common problem when commercial third-party libraries or components are used: If you find a problem, you are completely dependent on the provider of the library to fix it. If the provider doesn’t or can’t fix the problem in a timely manner, you often have to implement an ugly workaround. With open source, you can make the change and submit it to the community, where all the other users benefit.

IBM is an excellent example of understanding the difference between commodity and value-add. IBM took the bold move of developing its Eclipse IDE as an open source project, but IBM also uses Eclipse as a foundation for its own commercial products and allows third parties to develop their own commercial products on top of Eclipse. Being a free IDE has created a large community of developers who are comfortable with Eclipse and are also potential buyers of IBM’s other tools.

XML is another example of the power of adopting standards. The use of XML has proliferated quickly because of the ease of use of the language and its supporting libraries. The ready availability of XML libraries and parsers means that today it is virtually impossible to justify designing a proprietary text-based file format and wasting time writing a parser for it. This is most definitely a good thing because it means that one more piece of the standard building blocks required by virtually all programs has been taken care of. XML frees developers to work on their value-add.
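As a concrete illustration of the point above, a configuration file format can be handled entirely by a standard XML library, leaving no parser to write or maintain. The sketch below uses Python’s standard `xml.etree.ElementTree`; the config structure shown is a hypothetical example, not a format from the text.

```python
import xml.etree.ElementTree as ET

CONFIG = """\
<config>
  <server host="example.com" port="8080"/>
  <feature name="logging" enabled="true"/>
</config>
"""

def load_config(text):
    """Parse an application config with a standard XML library
    instead of hand-rolling a proprietary format and parser."""
    root = ET.fromstring(text)
    server = root.find("server")
    return {
        "host": server.get("host"),
        "port": int(server.get("port")),
        "features": {
            f.get("name"): f.get("enabled") == "true"
            for f in root.findall("feature")
        },
    }

if __name__ == "__main__":
    print(load_config(CONFIG))
```

Every line here is value-add (deciding what the configuration means); none of it is plumbing (tokenizing and parsing text).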

Practice 11: Internationalize from Day One

Get in the habit of ensuring your products are internationalized[3] from day one. There is no extra overhead once you understand what is required. The bottom line is that the largest share of global economic growth in the next decade is projected to take place outside the English-speaking world. The implication should be obvious: Your company needs to think globally, and so should you.
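The day-one habit amounts to never embedding user-visible strings directly in the code. A minimal sketch, using Python’s standard `gettext` module: In a localized build, compiled translation catalogs produced by the translation service would be installed, so here the code falls back to the source language to stay self-contained.

```python
import gettext

# With no catalogs installed, NullTranslations passes strings through
# unchanged; a localized build would load real .mo catalogs instead.
translation = gettext.NullTranslations()
_ = translation.gettext

def greeting(name):
    # Every user-visible string goes through _() from day one, so
    # localizing later is just a matter of supplying catalogs.
    return _("Welcome, %s!") % name

if __name__ == "__main__":
    print(greeting("Ada"))
```

Because the strings are externalized from the start, localization becomes a translation task rather than a code-change task.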

This practice also supports the need for a working product. Continually localizing the product as it changes is sometimes burdensome, but it instills a discipline in the project team, and the skills learned through that discipline transfer to other aspects of keeping the product in a working state.

Practice 12: Isolate Platform Dependencies

If your code must support multiple platforms, a clean separation between platform-specific code and all the other code is essential to keeping the software in a working state. Examples of platforms include operating systems, graphics subsystems, runtime environments, libraries, and user interface toolkits. The best way to isolate platform dependencies is usually through an abstract interface (such as a Façade design pattern) so that the majority of the software is insulated from the platform-dependent code. Otherwise, the software becomes littered with conditional code that is hard to maintain, debug, and port to other platforms in the future. If platform dependencies are important to your project, take the time to isolate them properly. The effort will pay off.
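The Façade approach described above can be sketched as follows. The `FileSystem` interface and its implementations are hypothetical illustrations; the point is that exactly one function inspects the platform, and everything else depends only on the abstract interface.

```python
import abc
import sys

class FileSystem(abc.ABC):
    """Facade hiding platform-specific path handling from the rest of the code."""

    @abc.abstractmethod
    def join(self, *parts):
        ...

class PosixFileSystem(FileSystem):
    def join(self, *parts):
        return "/".join(parts)

class WindowsFileSystem(FileSystem):
    def join(self, *parts):
        return "\\".join(parts)

def make_filesystem():
    # The single place in the program that inspects the platform;
    # no conditional platform code appears anywhere else.
    if sys.platform.startswith("win"):
        return WindowsFileSystem()
    return PosixFileSystem()

if __name__ == "__main__":
    fs = make_filesystem()
    print(fs.join("usr", "local", "bin"))
```

Porting to a new platform then means writing one new implementation of the interface, not hunting down conditionals scattered through the codebase.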

Summary

Software products should always be in a shippable state. This does not mean a feature complete state. A working product gives teams the maximum possible flexibility and agility while minimizing wasted effort. The longer a product is in a nonworking state, the greater the effort required to return it to a working state. If problems are neglected too long, it may be impossible to return the product to a working state.

This chapter has outlined a number of practices related to keeping a product in a working state. By necessity, there are references to practices in other chapters because all the principles and practices described in this book reinforce each other. Subsequent chapters cover the principles of defect prevention, design, and continuous refinement. Defect prevention is required for working software because teams must have automated, not effort-wasting manual, methods to keep their products in a working state. Likewise, in a complex environment of continual change, the design practices employed by the team must reinforce agility and changeability so that all hell does not break loose every time a change is made. The principle of continuous refinement is also vital for working software, because the project management practices used by the team must provide a lightweight and flexible way to continually monitor the project at frequent intervals to gauge progress and introduce the necessary changes to keep the project on track, and the product working.



[1] Internal documentation is distinct from the external documentation that is included with the software. External documentation such as help and feature documentation is required by customers and should be considered as features. The intent of this practice is to minimize the internal documentation only.

[2] There are many good standards online, and these could serve as an excellent starting point.

[3] Internationalization is the act of making software capable of displaying its user interface in multiple languages. Localization is the translation of the text strings into other languages. Software developers internationalize their software, while localization is usually done by a third-party translation service managed by the project team.
