Learning from the Situation

Estimates can be used for more than simple prediction and tracking of progress. It’s so discouraging when people treat estimates as a one-time chance to guess the future. The odds of winning that game are very low, indeed. Using estimates as your current best idea of that future is more helpful. And using them as a stake in the ground to detect changes in what you think you know is even more so.

There’s little to be learned from the fact that estimates are wrong, but they can help you notice when things are not as they should be or, at least, not as you thought they should be. “Incorrect estimates” are not very useful when considered as failures, but they are a goldmine of potential information and insight. It’s instructive to examine in what ways they are wrong and how they came to be wrong. When actuals differ significantly from the estimate, it indicates that the assumptions of the estimate have not been borne out in fact. Why not? What was different from what was expected? What other estimates deserve another look based on this new knowledge?

As you adjust your view of the future, it should come more and more in line with reality. You can start with really rough, low-precision estimates. Over time you’ll gain the information to make them more precise, if that seems worthwhile. Your estimates should converge with the final reality over time as you learn and as you address the risks and get closer to your goals.

What Didn’t We Know Earlier?

When we estimate, we are always working with incomplete knowledge of the situation. If we had complete understanding, we would calculate instead of estimate. We fill in the gaps of our knowledge with assumptions, both explicit and tacit. As is to be expected, those assumptions sometimes turn out to be unwarranted.

When events do not match the estimate, it’s tempting to jump to single, proximate causes. “If it hadn’t taken so long to figure out that legacy code we were modifying, we wouldn’t be so late.” This assumes that this or other legacy code won’t be a problem in the future. That’s similar to the assumption we made in our original estimate, that the existing code wouldn’t slow us down.

What other assumptions are built into the estimate?

Let’s reconsider all that might be suspect, given what we’ve learned from this failed prediction. It’s easy to find reasons why things didn’t go as planned, but these are not generally special cases. There are similar things you will not foresee in the future. Allow for them.

If it seems like you’re slipping the schedule on a recurring basis, then there’s something you’re not learning. You must be injecting some unwarranted optimism. Perhaps you think too many things are one-time, special variations that won’t be repeated, but the stream of one-time variations never ends. Whatever it is, you need to dig until you root it out. Adjusting expectations is a good thing, but people would like an increasing sense of trust in those expectations over time.

Revisiting Assumptions

When we first built our estimate, we did so in comparison to past history. This is, of course, an imperfect process. We tried to think of all the aspects where the proposed system might differ from the reference systems. (See Aspects to Compare.) We can revisit those aspects and decide if we misjudged any of them.
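As a rough sketch, such a comparison-based estimate might be expressed as the reference project’s actuals multiplied by an adjustment factor for each differing aspect; the aspect names and numbers below are purely illustrative. Revising the one factor that turned out to be misjudged shows how far the overall estimate moves.

```python
# A hypothetical comparison-based estimate: start from a reference project's
# actual effort and adjust for aspects where the new work seems to differ.
# The factor names and numbers are illustrative, not from any real project.
reference_weeks = 12.0        # actuals from a similar past project
adjustments = {
    "team familiarity with the domain": 1.1,   # a bit less familiar this time
    "legacy code quality": 1.0,                # assumed comparable (an assumption!)
    "scope relative to the reference": 0.8,    # somewhat smaller scope
}

def estimate(reference, factors):
    """Multiply the reference actuals by each adjustment factor."""
    total = reference
    for factor in factors.values():
        total *= factor
    return total

print(f"original estimate: {estimate(reference_weeks, adjustments):.1f} weeks")

# New information arrives: the legacy code is far more tangled than assumed.
adjustments["legacy code quality"] = 1.5
print(f"revised estimate:  {estimate(reference_weeks, adjustments):.1f} weeks")
```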

The Code Isn’t What We Imagined

We saw an example of things not being what they seemed in Misdiagnosis. We can misdiagnose the situation in software development, too. It’s certainly happened to me.

Someone asked me to change the way the system functions, and I thought it was going to be easy. “Sure, I’ll have that done by close of business tomorrow.” Then I started looking at the code and thought, “Oops, this code isn’t at all what I expected.”

The code for this functionality was spread over a number of source files and architectural levels. Worse, it was commingled with other functionality, and the pieces interacted with each other. When I changed this, I broke that. Somebody must have been in a big hurry when they hacked in the last change. Perhaps they were trying to get it done within the time they had estimated.

I could try to do the same, that is, hack up the code until it looks like it might be working. As tangled as I found the code to be, though, it was unlikely that I’d get everything right for all cases. There were cases that would be affected that had nothing to do with the change I was making.

Or, rather, they should have had nothing to do with it. With that much coupling in the code, it would take me more than two days to figure out all the things that might be affected. Testing those things would take even longer. Ignoring this risk would likely cause some problems in the future. If I broke something I didn’t know about, it might take a while before it’s noticed.

When errors that accidentally affect other functionality aren’t noticed right away, that delay hides the fact that the work I’m doing can’t really be done in the expected amount of time. Ultimately, it adds to the time it will actually take. I might make it look like I met the estimate, but I won’t have done all the work that was expected to be included. It’s expected that I add the new functionality and that I don’t break anything else. I may meet the deadline, but invisibly miss the implicit expectation from my estimate.

Or I could do the job “properly.” I clearly won’t meet the expectation of “close of business tomorrow” that I gave when I was given the task. In other words, I will visibly miss the explicit expectation from my estimate.

Either way, my estimate was wrong. I wouldn’t have the change done by close of business tomorrow.

Looking at it another way, my estimate wasn’t really wrong. It was made in good faith based on the information available at the time. Now I have more information that makes the estimate obsolete. My updated estimate is that it would take me several days to two weeks to accomplish this, depending on the problems I had disentangling the code and how far I went in cleaning it up.

My best course of action was to talk with the person who asked me to make the change. I didn’t want to leave them with the false impression created by my original estimate.

The Team Isn’t What We Expected

Does your estimate depend on who is working on the project? A friend told me about carefully planning a project with someone who had deep experience in the domain and competence at the work, and who also brought out the best in teammates. Shortly after the plan was submitted, this key person was reassigned to another project. It was a crushing blow to the project and completely undermined the estimated plan.

The loss of crucial personnel can also happen in the middle of a project. This not only removes the knowledge and skill on which the estimate was predicated, but disrupts the rhythm of teamwork that had been developed.

I once consulted with a manager tasked to estimate a project where the personnel weren’t yet known. He thought that the company would probably choose one of a few overseas contracting companies they’d used before, but he didn’t know which one. And once they chose, there was no telling who they would assign to the project. They might be people recently hired just for the contract.

Even when working with known players, a team is more than the sum of its individuals. When a team jells, they cover for each other’s momentary or permanent deficiencies in a way that makes them unnoticeable. They collaborate seemingly effortlessly in a just-in-time way that keeps the work flowing. There is a big difference between such a team and a work group that doesn’t jell. Does your estimate assume that they will jell or not?

Blown Sprint

It happens sometimes that a development team “blows a Sprint” and doesn’t deliver much in the way of functionality for an entire iteration. How does your organization respond to that?

If you’re “working to plan,” this throws a monkey wrench into the plan, and many organizations jump to blaming the development team for not meeting commitments. This, of course, damages the trust and communication between the development team and the part of the organization doing the blaming. In turn, it becomes less likely that the development team will notify the blamers, much less reach out for help, when similar situations come up in the future. Underlying problems get swept under the rug and ignored, making future disappointing situations even more likely. And things spiral downhill.

Rather than push hard to “catch up” with the schedule, recognize that this was one of the unforeseen situations for which you should have contingency planning in place. Stop and spend the time to examine the situation more deeply. My father, when he was an organic chemistry professor, would tell flunking students to stick it out and “earn their F.” They had nothing to lose and an opportunity to learn. It would make their next attempt easier and more likely to succeed.

Perhaps it’s worth taking a whole day for a retrospective. You might even want to have a multipart retrospective, with the development team examining their part of the situation, and the managers and other stakeholders examining theirs. Then, once each affinity group has taken a deep dive into its own experiences and failures, get all the parties together for a combined retrospective. What can you learn from this?

You might not catch up with your preferred schedule, but that’s water under the bridge. The goal now is to make the best choices for the future, and a temporary reset is a great way of doing that. And who knows, this might be just the impetus for a breakthrough that leads to a better conclusion than the original plan.

Why Didn’t We Know Earlier?

There are two common situations where we wouldn’t know earlier that a milestone would be missed. One is that someone knew, or had a pretty good idea, but didn’t tell anyone else. The other is that no one was paying attention to the things that would have let us know. In between, there are some variations, such as telling someone who wasn’t paying attention, but rarely is there no warning at all.

Late Surprises

Picture this: The scene is a conference room at the Empire Enterprises IT department. Various IT and project managers sit around the conference table. Other project managers and tech leads sit in a second row of chairs around the walls, and yet more are attending by video conference. The current topic is reviewing the status of near-term milestones. Reviewing risks is scheduled next on the agenda.

The CIO is unhappy. “It’s not unreasonable to say ‘Hey, these dates don’t make sense anymore.’ It is unreasonable to say at the 11th hour, ‘We’re not going to meet that date.’ It should be a dialog. Is this milestone on the critical path for something, or is it a stake in the sand? If you know you can’t meet a date, don’t kill yourself trying to do so. Think about the risk. Having a conversation is a really good thing to do.” Everyone in the room takes a deep breath, and then the meeting continues.

In my experience, executives don’t usually get upset when work is going to take longer than planned. Very few milestones are deadlines on inflexible schedules that can’t be missed. What does upset executives is not knowing that things will take longer than planned. They depend on people to report reality so they can make the right decisions. You’ll hear them say this at every status meeting where they get late notification of a problem.

Unfinished Kitty

...continued from TinyToyCo and the Robotic Cat

"It looks like we won’t be able to sell Fluphy Kitty for Christmas this year. If the retailers can’t order it next month, it’s missed the train."

"Yeah," Chris replied, "I got too carried away trying to get the prototype to jump, I guess. I wasn’t watching the lead time for the big retailers. What are we going to do? We can’t afford to keep developing at this burn rate for next year’s season, and without Christmas, all our revenue projections are trash."

"Let’s deal with the near-term problems, first," Pat replied. "What do we need to do to get a desirable, salable toy that can be sold?"

"Obviously we need to drop jumping, for now. I suspect that we could make the meowing more interesting to make up for that. Perhaps throw some recognizable words into the meows, even."

"Second, how can we sell this at Christmas? We won’t be able to get it into physical stores, but perhaps we can do well with large online retailers. We’ll need to investigate the timeline for that—and also think of how to do some viral marketing to create demand. I was counting on people buying on impulse when they saw Fluphy Kitty in action at the stores. I think we can replace that with online videos that are easy to share."

"That sounds good. Parallel to that, though, we need to take action so that this doesn’t happen to us again. Let’s estimate backward from hard deadlines. When do we need to deliver to online retailers to make this happen? How long will it take to manufacture and package prior to that delivery? How long to design the packaging after we know the final feature set, or close enough to know what to print on the box?"

"Good point. We’re in this pickle because we didn’t estimate when our last responsible moment for a working design really was. I’ll spend the rest of today making phone calls for information, and tomorrow let’s work out the known and unknown aspects of that timeline."

When things don’t go as expected, you not only need to correct the problem but also correct your process so you don’t repeat it. Setting a milestone as a guard condition can let you know when to reassess your current plan while you still have time to do something different. See Danger Bearings for more on this concept.
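As a rough sketch, estimating backward can be as simple as subtracting lead times from the hard deadline; the dates and durations below are purely illustrative. The last date computed is the last responsible moment for a working design, and each intermediate date can serve as one of those guard-condition milestones.

```python
from datetime import date, timedelta

# Illustrative numbers only: a hard deadline and the lead times that must
# fit in front of it, listed from the latest activity back to the earliest.
hard_deadline = date(2025, 12, 1)   # stock must reach online retailers by here
lead_times = [
    ("ship to online retailers", 14),
    ("manufacture and package", 45),
    ("design the packaging from the final feature set", 21),
]

# Walk backward: each activity must start early enough to finish
# before the next one begins.
start_by = hard_deadline
for activity, days in lead_times:
    start_by -= timedelta(days=days)
    print(f"{activity}: start by {start_by}")

# The last date printed is the last responsible moment for freezing the
# design; treat it (and the dates before it) as guard-condition milestones
# that trigger a reassessment of the plan if they slip.
```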

Giving the Bad News

Imagine that you’re in that meeting. And imagine that, while your project’s next milestone is further off than the ones currently being discussed, there are things that worry you. Have you articulated these worries on the risk register, to be discussed next? Are you comfortable with bringing up your worries in this meeting? Unless the risk can be laid to dependencies outside the company, that’s a very hard thing to do. No one wants to look bad in front of a lot of important managers and peers.

Yet those same people are depending on you to give them accurate enough information to make good decisions, and to give it early enough that there is still time for the best alternative options.

Hiding the schedule problems means that something less visible than time is sacrificed. Even with the best of intentions, time pressure causes people to rush and make mistakes. Time pressure affects their judgment of how much they need to think about the design, about the abnormal conditions the system might face, or about the understandability of the code. “We don’t have time to sharpen the saw; we’re in a hurry to cut down this tree.”

You want to do the right thing, but how much courage do you have?

Asking for the Bad News

Now imagine that you’re the CIO. You really do want people to give you the bad news as well as the good. Do you ask for bad news when none is evident? Perhaps you’d prefer that there be no bad news to give. How do you react when people give you bad news? Do you thank them for it? Or do you take charge of the situation by asking them questions and giving them advice? Perhaps you have some other default reaction. Do you know what that reaction is? Are you aware of how your reaction in such difficult times is perceived by those witnessing it? Will your reaction make it easier or harder for people to report unhappy news in the future?

If you want people to tell you bad news, you need to do more than accept it when they do. You need to invite it, repeatedly, and you need to make people feel good when they provide it. They don’t have to feel good about the news itself, but they do need to feel good that they shared it. How can you reward people for highlighting when things aren’t going to plan? How can you make it likely that people will investigate the potential consequences of new knowledge?
