The middle band within Ron Jeffries’ Circle of Life consists of the Team Practices of Agile. These practices govern the relationship of the team members with one another and with the product they are creating. The practices we will discuss are Metaphor, Sustainable Pace, Collective Ownership, and Continuous Integration.
Then we’ll talk briefly about so-called Standup Meetings.
In the years just before and after the signing of the Agile Manifesto, the Metaphor practice was something of an embarrassment for us because we couldn’t describe it. We knew it was important, and we could point to some successful examples. But we were unable to effectively articulate what we meant. In several of our talks, lectures, or classes, we simply bailed out and said things like, “You’ll know it when you see it.”
The idea is that in order for the team to communicate effectively, they require a constrained and disciplined vocabulary of terms and concepts. Kent Beck called this a Metaphor because it related his projects to something else about which the teams had common knowledge.
Beck’s primary example was the Metaphor used by the Chrysler payroll project.1 He related the production of a paycheck to an assembly line. The paychecks would move from station to station getting “parts” added to them. A blank check might move to the ID station to get the employee’s identification added. Then it might move to the pay station to get the gross pay added. Next it might move to the federal tax station, and then the FICA station, and then the Medicare station… You get the idea.
The programmers and customers could pretty easily apply this metaphor to the process of building a paycheck. It gave them a vocabulary to use when talking about the system.
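The assembly-line metaphor maps naturally onto code. Here is a minimal sketch of how a paycheck "moving from station to station" might look; all of the names, amounts, and rates are invented for illustration and do not come from the actual Chrysler project.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the assembly-line metaphor: a blank paycheck
# moves through a sequence of "stations," each adding one piece of
# information, just as parts are added on an assembly line.

@dataclass
class Paycheck:
    fields: dict = field(default_factory=dict)

def id_station(check: Paycheck) -> Paycheck:
    check.fields["employee_id"] = "E-1001"   # illustrative value
    return check

def gross_pay_station(check: Paycheck) -> Paycheck:
    check.fields["gross_pay"] = 2000.00      # illustrative value
    return check

def federal_tax_station(check: Paycheck) -> Paycheck:
    check.fields["federal_tax"] = check.fields["gross_pay"] * 0.15
    return check

ASSEMBLY_LINE = [id_station, gross_pay_station, federal_tax_station]

def run_line(check: Paycheck) -> Paycheck:
    for station in ASSEMBLY_LINE:
        check = station(check)
    return check
```

The point of the metaphor is that programmers and customers can both read this structure: the stations in the code are the stations in the conversation.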
But metaphors often go wrong.
For example, in the late ’80s, I worked on a project that measured the quality of T1 communication networks. We downloaded error counts from the endpoints of each T1 line. Those error counts were collected into 30-minute time slices. We considered those slices to be raw data that needed to be cooked. What cooks slices? A toaster. And thus began the bread metaphor. We had slices, loaves, crumbs, etc.
This vocabulary worked well for the programmers. We were able to talk with each other about raw and toasted slices, loaves, etc. On the other hand, the managers and customers who overheard us would walk out of the room shaking their heads. To them we appeared to be talking nonsense.
As a much worse example, in the early ’70s I worked on a time-sharing system that swapped applications in and out of limited memory space. During the time that an application occupied memory, it would load up a buffer of text to be sent out to a slow teletype. When that buffer was full, the application would be put to sleep and swapped out to disk while the buffers slowly emptied. We called those buffers garbage trucks, running back and forth between the producer of the garbage and the dump.
We thought this was clever. Our use of garbage, as a metaphor, made us giggle. In effect, we were saying that our customers were garbage merchants. As effective as the metaphor was for our communication, it was disrespectful to those who were paying us. We never shared it with them.
These examples show both the advantages and disadvantages of the Metaphor idea. A metaphor can provide a vocabulary that allows the team to communicate efficiently. On the other hand, some metaphors are silly to the point of being offensive to the customer.
In his groundbreaking book Domain-Driven Design,2 Eric Evans solved the metaphor problem and finally erased our embarrassment. In that book, he coined the term Ubiquitous Language, which is the name that should have been given to the Metaphor practice. What the team needs is a model of the problem domain, which is described by a vocabulary that everyone agrees on. And I mean everyone—the programmers, QA, managers, customers, users…everyone.
2. Evans, E. 2003. Domain-Driven Design: Tackling Complexity in the Heart of Software. Boston, MA: Addison-Wesley.
In the 1970s, Tom DeMarco called such models Data Dictionaries.3 They were simple representations of the data manipulated by the application and the processes that manipulated that data. Evans greatly amplified that simple idea into a discipline of modeling the domain. Both DeMarco and Evans use those models as vehicles to communicate with all of the stakeholders.
3. DeMarco, T. 1979. Structured Analysis and System Specification. Upper Saddle River, NJ: Yourdon Press.
As a simple example, I recently wrote a video game called SpaceWar. The data elements were things like Ship, Klingon, Romulan, Shot, Hit, Explosion, Base, Transport, etc. I was careful to isolate each of these concepts into their own modules and to use those names exclusively throughout the application. Those names were my Ubiquitous Language.
The Ubiquitous Language is used in all parts of the project. The business uses it. The developers use it. QA uses it. Ops/Devops use it. Even the customers use those parts of it that are appropriate. It supports the business case, the requirements, the design, the architecture, and the acceptance tests. It is a thread of consistency that interconnects the entire project during every phase of its lifecycle.4
4. “It’s an energy field created by all living things. It surrounds us and penetrates us. It binds the galaxy together.” Lucas, G. 1979. Star Wars: Episode IV—A New Hope. Lucasfilm.
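A Ubiquitous Language shows up directly in code. The following sketch uses the names from the SpaceWar example; only those names come from the text, while the attributes and the collision rule are assumptions made for the sake of illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: each Ubiquitous Language term gets its own type, and those
# names are used exclusively throughout the application. The attributes
# here are assumptions; only the names come from the SpaceWar example.

@dataclass
class Ship:
    x: float
    y: float

@dataclass
class Shot:
    x: float
    y: float
    fired_by: Ship

@dataclass
class Hit:
    target: Ship
    shot: Shot

def check_hit(shot: Shot, target: Ship, radius: float = 1.0) -> Optional[Hit]:
    # The same words the team speaks appear directly in the code.
    if abs(shot.x - target.x) <= radius and abs(shot.y - target.y) <= radius:
        return Hit(target, shot)
    return None
```

When the business asks about Hits and Shots, the programmers can point at types with exactly those names; nothing needs translating.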
“The race is not to the swift…”
— Ecclesiastes 9:11
“…But he who endures to the end will be saved.”
— Matthew 24:13
On the seventh day, God rested. God later made it a commandment to rest on the seventh day. Apparently even God needs to run at a Sustainable Pace.
In the early ’70s, at the tender age of 18, my high school buddies and I were hired as new programmers working on a critically important project. Our managers had set deadlines. Those deadlines were absolute. Our efforts were important! We were critical cogs in the machinery of the organization. We were important!
It’s good to be 18, isn’t it?
We, young men right out of high school, pulled out all the stops. We worked hours and hours and hours, for months and months and months. Our weekly average was over 60 hours. There were weeks that peaked above 80 hours. We pulled dozens of all-nighters!
And we prided ourselves on all that overtime. We were real programmers. We were dedicated. We were valuable. Because we were single-handedly saving a project that was important. We. Were. Programmers.
And then we burned out—hard. So hard that we quit en masse. We stormed out of there, leaving the company holding a barely functioning time-share system without any competent programmers to support it. That’ll show ’em!
It’s good to be 18 and angry, isn’t it?
Don’t worry, the company muddled through. It turned out that we were not the only competent programmers there. There were folks there who had carefully worked 40 hours per week. Folks whom we had disparaged as undedicated and lazy in our private late-night programming orgies. Those folks quietly picked up the reins and maintained the system just fine. And, I daresay, they were happy to be rid of us angry, noisy kids.
You’d think that I would have learned my lesson from that experience. But, of course, I did not. Over the next 20 years, I continued to work long hours for my employers. I continued to fall for the important project meme. Oh, I didn’t work the crazy hours I worked when I was 18. My average weekly hours dropped to more like 50. All-nighters became pretty rare—but, as we shall see, not entirely absent.
As I matured, I realized that my worst technical mistakes were made during those periods of frenetic late-night energy. I realized that those mistakes were huge impediments that I had to constantly work around during my truly wakeful hours.
Then came the event that made me reconsider my ways. My future business partner, Jim Newkirk, and I were pulling an all-nighter. Sometime around 2 a.m., we were trying to figure out how to get a piece of data from a low-level part of our system to another part that was much higher up the execution chain. Returning this data up the stack was not an option.
We had built a “mail” transport system within our product. We used it to send information between processes. We suddenly realized, at 2 a.m., with caffeine roaring through our veins and all our faculties operating at peak efficiency, that we could have the low-level part of the process mail that piece of data to itself where the high-level part could fetch it.
Even today, more than three decades later, whenever Jim and I want to describe someone else’s unfortunate decision, we say: “Uh-oh. They just mailed it to themselves.”
I won’t bore you with all the horrible, nitty-gritty details about why that decision was so awful. Suffice it to say, it cost us many times the effort that we thought we were saving. And of course, the solution became too entrenched to reverse, so we were stuck with it.5
5. This was a decade before I would learn about TDD. Had Jim and I been practicing TDD back then, we could have easily backed out that change.
That was the moment that I learned that a software project is a marathon, not a sprint, nor a sequence of sprints. In order to win, you must pace yourself. If you leap out of the blocks and run at full speed, you’ll run out of energy long before you cross the finish line.
Thus, you must run at a pace that you can sustain over the long haul. You must run at a Sustainable Pace. If you try to run faster than the pace you can sustain, you will have to slow down and rest before you reach the finish line, and your average speed will be slower than the Sustainable Pace. When you are close to the finish line, if you have a bit of spare energy, you can sprint. But you must not sprint before that.
Managers may ask you to run faster than you should. You must not comply. It is your job to husband your resources to ensure that you endure to the end.
Working overtime is not a way to show your dedication to your employer. What it shows is that you are a bad planner, that you agree to deadlines to which you shouldn’t agree, that you make promises you shouldn’t make, that you are a manipulable laborer and not a professional.
This is not to say that all overtime is bad, nor that you should never work overtime. There are extenuating circumstances for which the only option is to work overtime. But they should be extremely rare. And you must be very aware that the cost of that overtime will likely be greater than the time you save on the schedule.
That all-nighter that I pulled with Jim all those decades ago was not the last all-nighter I was to pull—it was the second to last. The last all-nighter I pulled was one of those extenuating circumstances over which I had no control.
The year was 1995. My first book was scheduled to go to press the next day, and I was on the hook to deliver page proofs. It was 6 p.m. and I had them all ready to go. All I needed to do was to FTP them to my publisher.
But then, by sheer accident, I stumbled across a way to double the resolution of the hundreds of diagrams in that book. Jim and Jennifer were helping me get the page proofs ready, and we were just about to execute the FTP when I showed an example of the improved resolution to them.
We all looked at each other, heaved a great sigh, and Jim said, “We have to redo them all.” It wasn’t a question. It was a statement of fact. The three of us looked at each other, at the clock, back at each other, and then we buckled down and got to work.
But when we were done with that all-nighter, we were done. The book shipped. And we slept.
The most precious ingredient in the life of a programmer is sufficient sleep. I do well on seven hours. I can tolerate a day or two of six hours. Anything less, and my productivity plummets. Make sure you know how many hours of sleep your body needs, and then prioritize those hours. Those hours will more than pay for themselves. My rule of thumb is that the first hour of insufficient sleep costs me two hours of daytime work. The second hour of insufficient sleep costs me another four hours of productive work. And, of course, there’s no productive work at all if I’m three hours behind on my sleep.
No one owns the code in an Agile project. The code is owned by the team as a whole. Any member of the team can check out and improve any module in the project at any time. The team owns the code collectively.
I learned Collective Ownership early in my career while working at Teradyne. We worked on a large system composed of fifty thousand lines of code partitioned into several hundred modules. Yet no one on the team owned any of those modules. We all strove to learn and improve all of those modules. Oh, some of us were more familiar with certain parts of the code than others, but we sought to spread rather than concentrate that experience.
That system was an early distributed network. There was a central computer that communicated with several dozen satellite computers that were distributed across the country. These computers communicated over 300-baud modem lines. There was no division between the programmers who worked on the central computer and the satellite computers. We all worked on the software for both of them.
These two computers had very different architectures. One was similar to a PDP-8 except that it had an 18-bit word. It had 256K of RAM and was loaded from magnetic tape cartridges. The other was an 8085, an 8-bit microprocessor with 32K of RAM and 32K of ROM.
We programmed these in assembler. The two machines had very different assembly languages and very different development environments. We all worked with both with equal comfort.
Collective Ownership does not mean that you cannot specialize. As systems grow in complexity, specialization becomes an absolute necessity. There are systems that simply cannot be understood in both entirety and detail. However, even as you specialize, you must also generalize. Divide your work between your specialty and other areas of the code. Maintain your ability to work outside of your specialty.
When a team practices Collective Ownership, knowledge becomes distributed across the team. Each team member gains a better understanding of the boundaries between modules and of the overall way that the system works. This drastically improves the ability of the team to communicate and make decisions.
In my rather long career, I have seen a few companies that practiced the opposite of Collective Ownership. Each programmer owned their own modules, and no one else was allowed to touch them. These were grossly dysfunctional teams who were constantly involved in finger pointing and miscommunication. All progress in a module would stop when the author was not at work. No one else dared to work on something owned by someone else.
One particularly maladroit case was company X, which built high-end printers. In the 1990s, the company was transitioning from a predominantly hardware focus to an integrated hardware and software focus. They realized that they could cut their manufacturing costs significantly if they used software to control the internal operation of their machines.
However, the hardware focus was deeply ingrained, so the software groups were divided along the same lines as the hardware. The hardware teams were organized by device: there were respective hardware teams for the feeder, printer, stacker, stapler, etc. The software was organized according to the same devices. One team wrote the control software for the feeder, another team wrote the software for the stapler, and so forth.
In X, your political clout depended on the device you worked on. Since X was a printer company, the printer device was the most prestigious. The hardware engineers who worked on the printer had to come up through the ranks to get there. The stapler guys were nobodies.
Oddly, this same political ranking system applied to the software teams. The developers writing the stacker code were politically impotent; but if a printer developer spoke in a meeting, everyone else listened closely. Because of this political division, no one shared their code. The key to the printer team’s political clout was locked up in the printer code. So the printer code stayed locked up, too. Nobody outside that team could see it.
The problems this caused were legion. There are the obvious communication difficulties when you can’t inspect the code you are using. There is also the inevitable finger pointing and backstabbing.
But worse than that was the sheer, ridiculous duplication. It turns out that the control software for a feeder, printer, stacker, and stapler aren’t that different. They all had to control motors, relays, solenoids, and clutches based upon external inputs and internal sensors. The basic internal structure of these modules was the same. And yet, because of all the political safeguarding, each team had to invent their own wheels independently.
Even more important, the very idea that the software should be divided along hardware lines was absurd. The software system did not need a feeder controller that was independent of the printer controller.
The waste of human resources, not to mention the emotional angst and adversarial posturing, led to a very uncomfortable environment. I believe that this environment was instrumental, at least in part, in their eventual downfall.
In the early days of Agile, the practice of Continuous Integration meant that developers checked in their source code changes and merged them with the main line every “couple of hours.”6 All unit tests and acceptance tests kept passing. No feature branches remained unintegrated. Any changes that should not yet be active when deployed were handled with feature toggles.
6. Beck, K. 2000. Extreme Programming Explained: Embrace Change. Boston, MA: Addison-Wesley, p. 97.
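A feature toggle is what lets unfinished work live on the mainline without being active. Here is a minimal sketch; the toggle names, rates, and dictionary mechanism are illustrative assumptions, not any particular toggle library.

```python
# Minimal feature-toggle sketch. Unfinished code is merged to the
# mainline continuously, but kept inert behind a flag until it is
# ready to ship. All names and values here are hypothetical.

TOGGLES = {
    "new_tax_calculation": False,  # integrated daily, but off in production
}

def is_enabled(name: str) -> bool:
    return TOGGLES.get(name, False)

def compute_tax(gross: float) -> float:
    if is_enabled("new_tax_calculation"):
        return gross * 0.17   # new behavior, merged but dormant
    return gross * 0.15       # current behavior
```

Because the new path is merged early and often, there is never a long-lived branch to reconcile; flipping the flag is a deployment decision, not a merge.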
In the year 2000, at one of our XP Immersion classes, a student got caught in a classic trap. These immersion classes were intense. We shortened the cycles down to single-day iterations. The Continuous Integration cycle was down to 15 to 30 minutes.
The student in question was working in a team of six developers, five of whom were checking in more frequently than he was. (He was not pairing for some reason—go figure.) Unfortunately, this student had kept his code unintegrated for over an hour.
When he finally tried to check in and integrate his changes, he found that so many other changes had accumulated that the merge took him a long time to get working. While he was struggling with that merge, the other programmers continued to make 15-minute checkins. When he finally got his merge working and tried to check in his code, he found he had another merge to do.
He was so frustrated by this that he stood up in the middle of class and loudly proclaimed, “XP doesn’t work.” Then he stormed out of the classroom and went to the hotel bar.
And then a miracle happened. The pair partner that he had rejected went after him to talk him down. The other two pairs reprioritized their work, finished the merge, and got the project back on track. Thirty minutes later the student, now much calmer, came back in the room, apologized, and resumed his work—including pairing. He subsequently became an enthusiastic advocate for Agile development.
The point is that Continuous Integration only works if you integrate continuously.
In 2001, ThoughtWorks changed the game significantly. They created CruiseControl,7 the first continuous build tool. I remember Mike Two8 giving a late-night lecture about this at a 2001 XP Immersion. There’s no recording of that speech, but the story went something like this:
CruiseControl allows the checkin time to shrink down to a few minutes. Even the most minor change is quickly integrated into the mainline. CruiseControl watches the source code control system and kicks off a build every time any change is checked in. As part of the build, CruiseControl runs the majority of the automated tests for the system and then sends email to everyone on the team with the results.
“Bob broke the build.”
We implemented a simple rule about breaking the build. On the day that you break the build, you have to wear a shirt that says, “I broke the build”—and no one ever washes that shirt.
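The watch-build-notify cycle that a tool like CruiseControl performs can be sketched as a simple polling function. This is a toy model under stated assumptions: the callbacks, revision strings, and notification shape are all hypothetical, not CruiseControl’s actual interfaces.

```python
# Toy sketch of a continuous build server: poll version control, run
# the build and tests on every new checkin, and tell the team who
# broke the build. All names and signatures here are hypothetical.

def continuous_build(get_revision, run_tests, notify):
    """Return a poll function suitable for calling on a timer."""
    last_seen = None

    def poll_once():
        nonlocal last_seen
        revision = get_revision()
        if revision == last_seen:
            return                 # nothing new checked in
        last_seen = revision
        passed = run_tests()       # full automated test suite
        notify(revision, passed)   # e.g., email the whole team

    return poll_once
```

In a real setup the poll runs on a schedule and `notify` sends those infamous “broke the build” emails; here it is just a callback so the loop stays self-contained.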
Since those days, many other continuous build tools have been created. They include tools like Jenkins (or is it Hudson?), Bamboo, and TeamCity. These tools allow the time between integrations to shrink to a minimum. Kent’s original “couple of hours” has been replaced by “a few minutes.” Continuous Integration has become Continuous Checkin.
The continuous build should never break. That’s because, in order to avoid wearing Mike Two’s dirty shirt, each programmer runs all acceptance tests and all unit tests before they check in their code. (Duh!) So if the build breaks, something very strange has happened.
Mike Two addressed this issue in his lecture, too. He described the calendar that they put in a prominent position on the wall of their team room. It was one of those large posters that had a square for every day of the year.
On any day when the build failed, even once, they placed a red dot. On any day when the build never failed, they placed a green dot. Just that simple visual was enough to transform a calendar of mostly red dots into a calendar of mostly green dots within a month or two.
Again: The continuous build should never break. A broken build is a Stop the Presses event. I want sirens going off. I want a big red light spinning in the CEO’s office. A broken build is a Big Effing Deal. I want all the programmers to stop what they are doing and rally around the build to get it passing again. The mantra of the team must be The Build Never Breaks.
There have been teams who, under the pressure of a deadline, have allowed the continuous build to remain in a failed state. This is a suicidal move. What happens is that everyone gets tired of the continuous barrage of failure emails from the continuous build server, so they remove the failing tests with the promise that they’ll go back and fix them “later.”
Of course, this causes the continuous build server to start sending success emails again. Everyone relaxes. The build is passing. And everyone forgets about the pile of failing tests that they set aside to be fixed “later.” And so a broken system gets deployed.
Over the years, there has been a great deal of confusion about “The Daily Scrum” or the “Standup Meeting.” Let me cut through all that confusion now.
The following are true of the Standup Meeting:
This meeting is optional. Many teams get by just fine without one.
It can be less often than daily. Pick the schedule that makes sense to you.
It should take about 10 minutes, even for large teams.
This meeting follows a simple formula.
The basic idea is that the team members stand9 in a circle and answer three questions:
9. This is why it’s called a “standup” meeting.
What did I do since the last meeting?
What will I do until the next meeting?
What is in my way?
That’s all. No discussion. No posturing. No deep explanations. No cold houses nor dark thoughts. No complaints about Jean and Joan and who knows who. Everybody gets 30 seconds or so to answer those three questions. Then the meeting is over and everyone returns to work. Done. Finito. Capisce?
Perhaps the best description of the standup meeting is on Ward’s wiki: http://wiki.c2.com/?StandUpMeeting.
I won’t repeat the ham-and-eggs story here. You can look it up in the footnote10 if you are interested. The gist is that only developers should speak at the standup. Managers and other folks may listen in but should not interject.
From my point of view, I don’t care who speaks so long as everyone follows the same three-question format and the meeting is kept to about 10 minutes.
One modification that I have enjoyed is to add an optional fourth question:
Whom do you want to thank?
This is just a quick acknowledgment of someone who helped you or who did something you believe deserves recognition.
Agile is a set of principles, practices, and disciplines that help small teams build small software projects. The practices described in this chapter are those that help those small teams behave like true teams. They help the teams set the language they use to communicate, as well as the expectations for how the team members will behave toward each other and toward the project they are building.