Chapter 5


Step 3: Analyze Performance Gaps


How does a project get to be a year behind schedule? One day at a time.

—Fred Brooks, author of The Mythical Man-Month


What should you do when it's clear that you aren't getting results? In most organizations, managers do a terrible job of analyzing and diagnosing employee performance issues. This partly explains why managers in so many organizations jump from fad to fad: they fail to really understand what is broken, why it's failing, and what it will take to fix it. Without a good understanding of these three issues, efforts to make things better usually fail. These failed fixes then make matters worse, as employees start to regard all changes as just a series of hoops they have to jump through.


We don't see things as they are, we see things as we are.

—Anaïs Nin, author


Many managers insist that they know their organization so well, or are so experienced, that they can quickly tell what is wrong and don't need to engage in any detailed analysis. Most executives who impose corrections on personnel believe this as well. But the data shows that most of these corrections fail. David Strang and Michael Macy of Cornell University did a comprehensive review of business trends and new innovations. They found that “an examination of many innovations suggest empty rhetoric and shameless self-promotion…. It is evident that organizational change in American business is faddish” (Strang and Macy 1999, 1). Managers who shoot from the hip, or who act without a serious commitment to questioning assumptions, doing systematic analysis, and examining evidence, are wasting effort.

In the United States, any doctor who provides treatment without first doing a diagnosis is guilty of malpractice. Yet managers and executives make just this kind of mistake all the time. Pfeffer and Sutton (2006, 5) note that “business decisions, as many of our colleagues in business and your own experience can attest, are frequently based on hope or fear, what others seem to be doing, what senior leaders have done and believe has worked in the past, and their dearly held ideologies—in short, on lots of things other than the facts. Although evidence-based practice may be coming to the field of medicine and, with more difficulty and delay, the world of education, it has had little impact on management or on how most companies operate.”

The result of this slipshod approach to managerial decision making is a vast array of initiatives that fail. Jim Collins—the coauthor of the book Built to Last (Collins and Porras 1994) and author of the article “Good to Great” and the book by the same name (Collins 2001)—concluded that “very rarely do significant changes ever lead to results in a sustainable way…. We started [our analysis] with 1,435 companies. And 11 companies did it [succeeded at getting better]…. We don't know what the heck we're doing! And because we don't know what we're doing, we launch into all sorts of things that don't produce results. We end up like a bunch of primitives dancing around the campfire chanting at the moon” (Collins 2001). Clearly, managers in many organizations do a lousy job of figuring out what is broken, why it's broken, and how to fix it.

Picking the Right Target

The biggest mistake that most managers make in analyzing performance breakdowns is that they tend to act on just any problem, regardless of how unimportant the performance issue may be to the larger priorities of the business. Managers can argue that outstanding organizations pay attention to all details and seek to get everything right. Although this approach is fine in theory, in practice it often means that organizations spend time, energy, and resources on performance issues that will have a minimal impact on essential business goals.


Most people struggle with life balance simply because they haven't paid the price to decide what is really important.

—Stephen Covey, author of The Seven Habits of Highly Effective People


Consequently, an important part of analyzing performance is making sure that you're focusing on performance gaps that, if closed, will show a significant return for the organization. It's easy to look at a particular performance issue and rationalize how correcting it will deliver a major benefit to the business. But such rationalizations are usually wrong. The only way to do this analysis objectively is to start with the business's goals or organizational priorities. From these goals, the next step is to identify the performance results or accomplishments that are critical to achieving them. This means figuring out which accomplishments will have the most impact on the goals; in other words, which results are the performance drivers that will determine whether the business meets the goals.

It's tempting to argue that everything has an impact on the business's goals. Although true in a trivial sense, this ignores the reality that not all effects are equal; some make no real difference. For every goal, there are always a couple of accomplishments that matter more than all the others. Thus, the smart manager will identify which accomplishments matter most and whether they have any performance gaps. If so, those are the areas to focus on first. Any progress on these gaps will have a disproportionately large impact on the organizational goal and will also reflect a more efficient use of resources.

To argue that all practices are critical is simply wrong. One example of this is in comparative benchmarking, where organizations adapt practices from other companies. As Pfeffer and Sutton (2006, 7) note, “A pair of fundamental problems render casual benchmarking ineffective. The first is that people copy the most visible, obvious, and frequently least important practices…. The second problem is that companies often have different strategies, different competitive environments, and different business models—all of which make what they need to do to be successful different from what others are doing. Something that helps one organization can damage another.” Treating all practices as equally important demonstrates ignorance about the nature of the business and a failure to recognize the unique character of the specific firm.

The Usefulness of Performance Drivers


I'm not a fan of facts. You see, the facts can change, but my opinion will never change, no matter what the facts are.

—Stephen Colbert, comedian


As mentioned earlier, baseball is a brilliant example of an industry where false assumptions have left decision makers ignorant of the details that drive performance. Major League Baseball has always had theories and opinions about how to create a successful team, and team managers have usually had a clear philosophy about how to win games and have successful seasons. Whether the organizational goal is reaching the championship level (or sometimes a lower target more commensurate with the team's talent, such as a playoff spot or a winning season), baseball's decision makers have had no shortage of opinions about the performance drivers that produce a successful team. In some cases, the strategy has been simply to acquire as much talent as possible—an approach that has little appeal for poorer or small-market teams. Other approaches involve tactics like getting two or three star players to carry a team, playing “small ball” (bunting, stealing bases, or sacrificing to advance base runners), or building a team on pitching and defense. But until some teams began to engage in detailed, often statistically driven analysis of which factors had the most impact on a team's success, these approaches were mostly based on hunches, biased perspectives, and anecdotal examples.

Baseball has always been a business run largely on hunches and conventional wisdom. The baseball writer Bill James rejected this approach and began to crunch numbers, looking for statistics that confirmed (or disproved) the conventional wisdom within the sport. Others from outside baseball, such as the pharmaceutical researcher Dick Cramer and the radar programmer Pete Palmer, also turned to statistical analysis to test baseball's supposed laws of what it takes for a team to be successful. Working independently of James and of each other, each driven by love of the game and an intellectual curiosity that refused to accept what others viewed as fact, they provided serious analyses of performance drivers within the sport.

What Cramer and Palmer found defied the conventional wisdom. Their analyses showed that on-base percentage (which counts ways of getting on base other than hits, such as walks) had significantly more effect on success than defense, pitching, or even batting average. And a critical element of a hitter's on-base percentage was selectivity at the plate, which produced a lot of walks.
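To give a flavor of this kind of analysis, here is a minimal sketch in Python of testing whether on-base percentage or batting average tracks team success more closely. The numbers are synthetic and the underlying relationship is assumed purely for the demonstration; with real season data, you would load actual team statistics instead.

```python
# A toy illustration of testing conventional wisdom with data, in the
# spirit of Cramer's and Palmer's work. All figures here are invented.
import numpy as np

rng = np.random.default_rng(42)
n_seasons = 200

# Synthetic team-season statistics (plausible MLB ranges, invented here).
obp = rng.normal(0.330, 0.015, n_seasons)          # on-base percentage
batting_avg = rng.normal(0.260, 0.012, n_seasons)  # batting average

# Assume wins are driven mostly by OBP and only weakly by batting average,
# plus noise -- this encodes the hypothesis being illustrated, not a finding.
wins = 500 * obp + 100 * batting_avg + rng.normal(0, 4, n_seasons)

print(f"correlation(OBP, wins)         = {np.corrcoef(obp, wins)[0, 1]:.2f}")
print(f"correlation(batting avg, wins) = {np.corrcoef(batting_avg, wins)[0, 1]:.2f}")
```

Run against real data, a large gap between the two correlations is the kind of evidence that led analysts away from batting average as the driver of team success.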

The Oakland Athletics became determined both to measure this element of performance and to reinforce it. The A's general manager, Sandy Alderson, began to emphasize to all players and coaches in their minor league teams how important walks were. According to Michael Lewis (2003, 60), “More or less overnight, all of the A's minor league teams began to lead their respective leagues in walks…. No player [in the A's minor league system] was eligible for minor league awards or was allowed to move up in the system unless he had at least one walk in every ten at bats.” This is a clear case of an organization identifying what performance was critical to achieving a larger goal and finding ways to create incentives that drove more of that performance.

A similar example involves the gaming industry. When Gary Loveman was appointed COO of Harrah's, he decided to base future organizational initiatives on data. He quickly found out that most of the conventional wisdom about casinos was wrong. For instance, he discovered that rather than drawing in customers through extensive media advertising, targeted direct mail promotions were a much more effective performance driver of customer participation levels (Pfeffer and Sutton 2006). Harrah's has since become a good example of a business that uses data to make decisions (rather than operating on management hunches and mindsets).


It's not that I am so smart, it's just that I stay with problems longer.

—Albert Einstein


Identifying Performance Drivers

So how can a conscientious executive identify performance drivers? There are four main approaches. The first and most obvious one is also the toughest: you need to really know how the business works. This is not the same as having done the work for a long time. Knowing how the business works requires a depth of experience and subject matter expertise that is typically beyond most executives, and it cannot be acquired quickly, because it involves not only gaining experience with an organization or product but also developing a deep understanding of, and expertise with, the process. And because most executives confuse experience in a business with knowledge of how the business works, people who believe they know how results are produced rarely do.

The second approach is to conduct a performance analysis. This is a systematic approach to documenting the work process and determining what performance contributes to the goal. It has the advantage of usually being quicker than acquiring enough experience with the organization to learn how the business works.

The third approach involves mapping the work process (so it's clear which steps are actually followed, which steps add value, and which are redundant) and then doing a rigorous and objective analysis of the process's logic. Because managers typically don't have the benefit of real depth in process expertise, objectivity is critical for this approach to be effective. See the sidebar for tips on generating process maps.

The fourth approach involves doing a sensitivity analysis, in which various aspects of performance are tested through a range of approaches (baseline analysis, customer feedback, comparative benchmarking with other firms, comparing results between shifts so that one shift serves as a control group) that allow management to determine which performance has the greatest impact on the organization's goals. Think of the analogy of a wind tunnel, where prototype aircraft are tested. Sensitivity analysis is conceptually the same: it can be done with prototypes (such as field offices or a market in another country), with computer modeling, or with limited trials within the organization.
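As an illustration of the shift-comparison tactic, here is a minimal sketch in Python, assuming one shift pilots a change while another keeps the existing process as a control. The throughput figures are invented for the example.

```python
# Comparing a pilot shift against a control shift -- one simple form of
# the sensitivity analysis described above. All numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Daily units processed over 30 days: shift A pilots a process change,
# shift B runs the existing process and serves as the control group.
shift_a = rng.normal(105, 8, 30)  # pilot shift (hypothetical uplift)
shift_b = rng.normal(100, 8, 30)  # control shift

t_stat, p_value = stats.ttest_ind(shift_a, shift_b)
print(f"mean A = {shift_a.mean():.1f}, mean B = {shift_b.mean():.1f}, p = {p_value:.3f}")
# A small p-value suggests the change, not chance, drove the difference --
# evidence that this aspect of performance is a genuine driver.
```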

Any of these four approaches will enable managers to identify performance drivers so improvement efforts aren't wasted on issues that will have only a minimal impact.


Humans do not make rational, logical decisions based on information input, instead they pattern match with either their own experience, or collective experience expressed as stories. It isn't even a best fit pattern match, but a first fit pattern match…. The human brain is also subject to habituation, things that we do frequently create habitual patterns which both enable rapid decision making, but also entrain behavior in such a manner that we literally do not see things that fail to match the patterns of our expectations.

—David Snowden, knowledge management consultant


Finding Evidence

Organizations are continuously attempting to improve through a range of initiatives and changes. The vast majority of these initiatives are not crapshoots or blind guesses but are the result of a well-meaning manager having become convinced that a particular action would make things better. And yet most of these efforts fail.

There are three main reasons why reasonably smart people insist on implementing initiatives that turn out to be a bad fit. First, many decision makers have strongly held beliefs about the value of particular policies but have no evidence for these beliefs. Forced rankings and stock options are widely used programs in American business, and yet the data on their impact turns out to be negative (Pfeffer and Sutton 2006). But this doesn't stop other firms from adopting them, despite the lack of proof that they work and despite evidence that they are actually counterproductive.

Second, managers tend not to be objective about many problems. A manager's background will tend to produce a particular bias or perspective on a given problem that in turn suggests solutions that fit this bias.

Third, most decision makers do a poor job analyzing performance problems and understanding the organization, so they end up adopting overly simplistic solutions.


When solving problems, dig at the roots instead of just hacking at the leaves.

—Anthony J. D'Angelo, The College Blue Book


Uncovering the Root Causes

Once management has identified the performance drivers, the next question is which of these critical accomplishments have performance gaps. For instance, if an important organizational goal is to increase sales by 4 percent by the end of the year, a potential performance driver might be the close rate of the sales staff. If hitting the 4 percent goal requires the sales staff to have a close rate of 24 percent and the current rate is 14 percent, there is a performance gap of 10 percentage points.
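The gap arithmetic is simple enough to express directly. Here is a minimal sketch in Python; the function name and structure are mine, purely for illustration:

```python
# A back-of-the-envelope gap calculation for the close-rate example above.
def performance_gap(target: float, actual: float) -> float:
    """Absolute shortfall between the target and the actual result."""
    return target - actual

gap = performance_gap(target=0.24, actual=0.14)
print(f"close-rate gap: {gap:.0%}")  # -> 10%
```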

Performance gaps need to be expressed in terms of accomplishments or outcomes—that is, quantifiable, objective targets. A performance gap expressed in terms of behavior or attitudes ends up being subjective. Look at these examples: the staff in the call center need to be friendlier in their initial greeting; the team members need to be more alert to potential cost overruns; the sales associates on the floor have to be more approachable to customers. All are very difficult things to measure, both for the size of the gap and for progress against it.

Once the performance gap has been identified, it's time to determine why it exists. The tendency is for managers to just shoot from the hip and operate on biases. As Pfeffer and Sutton (2006, 5) note, “People are overly influenced by deeply held ideologies or beliefs—causing their organizations to adopt some management practice not because it is based on sound logic or hard facts but because managers believe it works or it matches their sometimes flawed assumptions about what propels people and organizations to be successful.” And this is part of what leads to fads and Band-Aid solutions that don't fix the real problem. When managers operate on the basis of their beliefs (rather than objective analysis, systematic process, and data), they don't improve performance but only perpetuate their own biases. It's not a very effective way to solve performance issues and improve execution. Instead, the organization has to do an effective cause analysis to figure out what's causing the performance gap.

People tend to associate the phrase “cause analysis” with scientific levels of causality, such as what chemicals are carcinogens or what birth defects are caused by particular drugs. A cause analysis for performance problems within the organization is usually nowhere near this detailed, because the analysis isn't intended to expand scientific understanding but to generate actionable data. More specifically, the cause analysis should give management a clear idea of what needs to be fixed so that the organization can reach its goals.

Here's an example to illustrate this point. A small insurance firm wasn't meeting one of its top three goals for the year: to increase sales to business clients by 8 percent from the preceding year's numbers. Two performance drivers had more of an impact on the firm's ability to hit this target than all the other contributing accomplishments combined: the organization had to retain 84 percent of its existing clients or better; and the agents trying to generate business had to close two new business clients each week. There was no gap for the first performance driver (the firm's retention was actually at 86 percent). But agents were averaging fewer than one new business client each week (which in this case was a performance gap of approximately 57 percent). Interestingly enough, the director of sales averaged five new business clients per week (but this wasn't enough to offset the lower business generation numbers from the agents).
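For the curious, the “approximately 57 percent” figure can be reproduced as a shortfall relative to the target. The weekly average of 0.86 used below is an assumption consistent with “fewer than one,” not a number stated in the example:

```python
# Reproducing the insurance firm's agent gap as a relative shortfall.
# The 0.86 clients/week figure is assumed ("fewer than one"), not sourced.
target_per_week = 2.0
actual_per_week = 0.86
relative_gap = (target_per_week - actual_per_week) / target_per_week
print(f"relative gap: {relative_gap:.0%}")  # -> 57%
```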

A task analysis made it clear that the insurance firm's director of sales wasn't doing anything different or better than the agents in the client contact, presentation, and closure process. Digging deeper, it became clear that the quality of the leads differed: the director of sales was dealing with potential clients who were more likely to say “yes” and sign on with the firm, because she was cherry-picking the prospects as they came in and saving the best ones for herself. Now it was possible to pursue this further. Why was she cherry-picking? She wanted to appear superior to the agents she managed. Why did she want to appear superior? To satisfy her self-esteem needs. You could ask further why humans need to build their self-esteem, but pushing the questioning to that level would be irrelevant.

From a performance standpoint, it was clear that the director of sales was choosing not to distribute prospects equally. This hurt the organization, because she couldn't service all the top-shelf prospects she kept for herself (so the firm lost business), and the agents felt there was favoritism or that the deck was stacked against them (so the better agents left or became discouraged). Once the organization knew where the problem originated (the director of sales and her decisions on how prospects were distributed) and why it was happening (she was deliberately keeping the best prospects so her record would look more impressive than those of the agents), it knew enough to act and close this performance gap. Trying to determine still more about her motivation (for instance, her parents might have been too critical of her as a child, leaving her with self-esteem issues) serves no purpose with respect to execution; it goes into issues that the company cannot do anything about.

Once the performance gap has been identified, it's time to look at what employees are supposed to do to produce the accomplishment (versus what they actually do). To phrase this a different way, it's important to identify what tasks have to be done, and in what sequence, to reach the accomplishment.

Typically, this process of identifying the tasks and their sequence will involve a combination of observing employees and interviewing the people who do the work. If a manager observes the employees, wouldn't interviews be redundant? Almost always, interviews will provide new information that either isn't obvious from the observation or clarifies some of what the manager saw. So much of today's work involves tacit knowledge, individual judgment, and mental adjustments that it's rarely enough to watch someone do the work. Instead, it becomes critical to ask “Why did you choose this way?” or “What did you notice that implied this was the right direction?”

In doing this kind of task analysis, it is important to realize that just because a successful employee does something does not mean that it is critical to the success of the task. Pfeffer and Sutton (2006, 7) point out the absurdity of assuming that if people just copy what successful individuals do, then they too will succeed: “Herb Kelleher served as CEO during most of Southwest's history and remains the chairman to this day. Kelleher drinks a lot of Wild Turkey bourbon. So does that mean that if your CEO starts drinking as much Wild Turkey as Kelleher, your company will dominate its industry?” The task analysis needs to start by identifying everything the employees do. But the analysis will then have to weed out which tasks are irrelevant (such as drinking Wild Turkey) and which ones actually contribute to reaching the accomplishment.

The performance analysis has by this point produced a list of tasks that appear to be important for employees to be successful. But the task analysis isn't finished. It's important to determine whether the sequence of the tasks matters, and to assess just how precise each particular task needs to be. Is it sufficient to simply greet the customer, or is it critical to greet the customer in a particular manner? Then there is the issue of variability: tasks that one employee must do to succeed at something might not be essential for another employee seeking the same result but using a different process.

For instance, think back to the example of technical writing earlier in the book. A writer who brainstormed and used lateral thinking with Post-it notes on the wall would have tasks that include creating order out of the Post-it notes and connecting the ideas (perhaps using a mind map). A writer who generated content collaboratively by working in a group would need to schedule meetings, confirm who is participating, and assign topics to each participant. Work that has variability or that must allow for employee discretion will therefore complicate the task analysis, which will have to identify whether some tasks are common to all approaches to the work or are required only for one variation.


Fix the problem, not the blame.

—Catherine Pulsifer, artist


At this point in the performance analysis, possible causes of the performance gap may begin to suggest themselves. There are several important points to keep in mind about identifying these causes. For starters, the place where the problem shows up is not necessarily the place where the problem is caused. For instance, the loading dock staff may fill out shipping labels with the incorrect address (so deliveries often fail to reach the customer). However, the cause of the performance gap may actually originate back in the call center, where the phone staff does a sloppy job entering the order information (including customer addresses) into the database. Thus, the employee who appears to be making the mistakes may not be the cause of the mistakes—that may just be where the performance problem manifests itself. And it's rare to find only one cause for a performance gap. Typically, there are multiple causes, and one approach will rarely address all of them. This of course runs counter to management approaches that look for simplistic answers to solve problems.

Common Mistakes in Analyzing Performance Problems

Many executives and managers insist that they do indeed look at problems objectively, they do gather data, and they do consider alternative explanations. But the reality is that what managers perceive as a rigorous, open-minded examination of performance issues typically has a number of flaws that prevent any real insights from being generated. There is a strong tendency for most managers to lock themselves into rigid perspectives that limit their ability to analyze what is really going on within the organization.

Confirmation Bias


The most secure prisons are those we construct for ourselves.

—Gordon Livingston, psychiatrist and author


One of the more common errors is confirmation bias, which occurs when people either deliberately or unconsciously look for data that reinforces the position they believe to be true. There is almost always enough information out there to justify almost any position if the data is reviewed selectively. Executives tend to surround themselves with people who agree with them; generally speaking, people don't get promoted by disagreeing with senior management, so there is a strong likelihood that the organization tells senior managers what they want to hear. When you combine this with the tendency for executives to have deeply held beliefs that significantly shape their perception of reality (Argyris 1986), it is nearly impossible for most executives and managers to evaluate performance data objectively without allowing their own perceptions, biases, assumptions, and ideological beliefs to color what they see and how they interpret it.

Some managers and executives who just read the last paragraph are probably shaking their heads in disagreement. Here's an example to consider: doctors are incredibly well-trained, vastly experienced professionals with access to tremendous resources to help them make effective diagnoses. Yet the development of evidence-based medicine is an admission that, left to their own biases and perceptions, even doctors will consistently fall prey to misdiagnoses unless they effectively use evidence to help them evaluate what is going on (Sackett and Straus 1998). As a result, it's critical for managers to compensate for these biases with a systematic process designed to correct for these perceptual limitations.

Finding Fault, Not the Cause


Don't find fault, find a remedy.

—Henry Ford, inventor and industrialist


Another common mistake that management makes in doing cause analysis is to focus on fixing blame rather than fixing the problem. In the vast majority of organizations, people who bring bad news or admit mistakes are punished rather than celebrated. When things go bad, most organizations focus on whom to blame (and managers focus on how to avoid being blamed for the problem). The mental outlook that people bring to performance issues is critical. If the exercise is about figuring out who the culprit is or who to make an example of, then the resulting analysis is not about figuring out what's broken but about who can do the best job of deflecting responsibility. This approach hinders successful execution. Execution depends on real accountability, but if everyone is ducking responsibility, you end up with a system that is more focused on avoiding blame than on getting results.

This point—about focusing on the cause so you can fix the problem rather than fix the blame—is important from an attitude perspective. But it's also important technically. When organizations define problems in terms of blame (even if the blame is appropriate), they haven't really defined the issue in a way that it can be fixed. And that's the whole purpose of doing a cause analysis—figuring out why there is a performance gap and correcting it, so the next time around the organization will get the results it wants. Let's look at some examples to illustrate this mistake.

When there is an aviation accident, initial findings may conclude “pilot error.” What does that tell anyone trying to make sure that the same mistake isn't repeated—to tell other pilots “Don't make any errors”? Does anyone really think that pilots deliberately or intentionally make errors but would stop if only they were told not to? A finding of “pilot error” may allow the aircraft manufacturer to breathe easier (because it has less liability in the crash), but it does not explain how to prevent that error in the future.

Figure 5–1. An Example of a Sales Process—Sample Why Tree


Another example is that of a bank teller who incorrectly gives a customer $100 for cashing a $10 check. Now, technically, it's correct to say that the cause of the till shortfall was a teller error. But until the bank knows why the error was made, it won't know the best way to prevent the error from happening again. So any effort to find the cause that stops with conclusions like “pilot error,” “employee confusion,” or “stupid mistake” serves little purpose. Those labels serve only to fix blame; they don't provide decision makers with the insight to fix the problem so it won't happen again.

The U.S. Army's National Training Center at Fort Irwin, California, has become a wonderful laboratory for teaching combat units this principle of finding the cause. Army mechanized units rotate into Fort Irwin for war games and practice combat against an “OpFor” (Opposition Force, or mock enemy) in preparation for deployment to combat zones. All the units rotating in are assigned observer/facilitators. After an action occurs (typically involving a defeat or a nasty series of surprises for the unit confronting the OpFor), the units stand down and debrief the results. Unit commanders are taught to facilitate discussions that focus not on whom to blame but on how to fix what went wrong. These are not discussions dominated by the officers; everyone participates, and the purpose is to learn. The lesson that officers take from this activity isn't about kicking butt and threatening subordinates but about analyzing breakdowns to understand what happened and how to prevent it the next time. Over the past two decades, this has been a fundamental shift in how the Army approaches mistakes, and it has changed the way that many combat units operate in the field.

Frame Failure


It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive.

—C. W. Leadbeater, Theosophical author


A common mistake that many people make in analyzing performance issues is frame failure. Specifically, too many executives define the problem poorly. There are many reasons for this, but one of the more common ones is a tendency to define the problem based upon the more obvious solutions that management has available. Albert Einstein is purported to have said that “if your only tool is a hammer, you see every problem as a nail.” Defining problems in terms of likely interventions prevents an intelligent and objective analysis of what is really going on. It's a mistake to define the issue in terms of an implied solution, because that begs the question. Classic examples of this include:

• “The problem is that the frontline staff lacks training.”

• “Because our system is so out of date, we're running behind schedule.”

• “Since we're understaffed, we can't keep the phones covered and still respond to walk-in requests.”

• “The issue is that Marketing doesn't have the leadership to execute these stretch goals, so we're behind schedule for the first two quarters.”

• “What's wrong here is that because our proposal staff doesn't know how to write a winning proposal, our capture rate is 25 percent lower this year than before.”

If you look at how each performance issue is stated, a response of some sort is implied (that is, if we only provided training to the frontline staff, then everything would be fine). The problem with this approach is that the organization is throwing out solutions before management can honestly say that the execution problems have been analyzed and a root cause determined. How can anyone know what will improve performance if it isn't clear what the problem is, let alone what's causing it? An easy rule of thumb: anytime someone defines the problem as the absence of a particular action (that is, “the problem is that these people haven't been trained to do the job right”), that person probably hasn't objectively analyzed the performance failure in depth.


It is a capital mistake to theorize before one has data.

—Sir Arthur Conan Doyle, creator of Sherlock Holmes


Not Seeing the Organization as a System


To manage a system effectively, you might focus on the interactions of the parts rather than their behavior taken separately.

—Russell L. Ackoff, organizational theorist

One of the more frequent reasons that management fails to intelligently analyze execution failures is the inability to view the organization as a system. A system consists of apparently independent yet interrelated elements that interact with and reinforce each other. Management has a tendency to view problems in isolation and therefore to make changes that address each issue on its own. But a true understanding of the problem requires a systems perspective. For every individual who does something wrong, there is an organizational culture that rewards wrong behavior (or consistently overlooks it), processes that penalize success, peers or managers who punish change, and business goals and rules that work against the intended objective.

When there are execution failures in an organization, it is critical to analyze the performance at an organizational level, a process level, and a performer or employee level (Rummler and Brache 1995). Only by analyzing performance from a systems perspective (which includes these multiple levels of the organization, the process, and the employees) is it possible for managers to gain a deep understanding of the system and how it contributes to performance problems.


Everything affects everything.

—Earl Weaver, former baseball manager


What happens when management uses a systems view to analyze performance issues? Anyone taking a systems perspective will quickly understand how many diverse factors contribute to either outstanding performance or a failure to get results. Managers who rush to implement a particular solution are typically wearing blinkers that keep them from acknowledging how many other issues influence the success or failure of employees. For instance, the business process reengineering (BPR) movement was driven by an assumption that processes were critical to successful organizations and that it made sense to make process changes quickly (to minimize disruption).

The problem with this BPR approach was that though processes are indeed a common area of organizational failings, those using BPR failed to understand how other factors—such as organizational culture, peer relations and social networks, or the ability of individual workers to assimilate massive change—would determine the success or failure of any BPR effort. Because BPR initiatives encouraged quick change (rather than long, drawn-out transitions), they often produced tremendous levels of shock within workforces. A systems perspective on BPR would have produced much more caution among the organizations that went down that path (or perhaps even decisions not to go there!).

The same point can be made about a number of merger decisions. Most senior management teams proceed with mergers because their analysis is done not at a systems level but only at a financial or strategic one. Yet most mergers fail to work out. As Pfeffer and Sutton point out (2006, 4), “Study after study shows that most mergers—some estimates are 70 percent or more—fail to deliver their intended benefits and destroy economic value in the process. A recent analysis of 93 studies covering more than 200,000 mergers published in peer-reviewed journals showed that, on average, the negative effects of a merger on shareholder value become evident less than a month after a merger is announced and persist thereafter.” A systems perspective on a potential merger would probably point out major differences in organizational cultures, workforce backgrounds, incompatible work rules, tremendous process changes that would need to be made, and many organizational fit issues that are hidden when the focus is only on market share, finances, or strategy.


Get in the habit of analysis—analysis will in time enable synthesis to become your habit of mind.

—Frank Lloyd Wright, architect


Not Having a Diagnostic Process

The last common error made by management in attempting to understand performance breakdowns is the failure to have any process in place to effectively diagnose what went wrong and why. A performance analysis process is important because it is really the only way of ensuring a rigorous and objective approach to understanding what happened and why. Thus, a basic approach to performance analysis should provide, at a minimum, several things:

• A model that explains how good performance happens (or what contributes to producing good performance). Effectively, this is an explanation of what needs to happen for the organization to get good results: what business model or organizational logic explains what leads to productive performers?

• Either explicit or implicit questions that the analyst needs to answer to understand the performance issues. Without directions, most managers will fall back on what they're comfortable with. Any performance approach needs to provide enough direction to help managers walk a new walk.

• An explanation of how this approach will provide a systems perspective on performance within the organization. Without a systems perspective, the analysis will be simplistic and ignore critical factors, and any actions as the result of this analysis will likely fail.

• A methodology that forces rigor in analyzing performance. “Rigorous” in this context does not mean scientific or time consuming, but the methodology does need enough built-in structure to encourage objectivity. The reality is that people tend to see what reflects their biases; to be effective, any performance analysis has to find ways to work around those biases and encourage a more objective perspective. (A rough sketch of these criteria as a checklist follows this list.)
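One way to make these four criteria concrete is to encode them as a vetting checklist for any candidate analysis process. The sketch below is illustrative only; the wording and structure are assumptions, not part of any of the methodologies mentioned next.

```python
# The four criteria for a performance analysis process, as a checklist.
# Wording is paraphrased for illustration, not drawn from any methodology.
CRITERIA = {
    "performance model": "Does it explain what produces good results?",
    "guiding questions": "Does it tell the analyst what to ask and answer?",
    "systems perspective": "Does it cover organization, process, and performer levels?",
    "built-in rigor": "Does its methodology counteract the analyst's biases?",
}

def vet_process(answers: dict[str, bool]) -> bool:
    """A candidate analysis process passes only if it meets every criterion."""
    return all(answers.get(name, False) for name in CRITERIA)

# Example: a process meeting all four criteria passes the vetting.
print(vet_process({name: True for name in CRITERIA}))  # -> True
```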

Fortunately, there are plenty of models and performance analysis systems to choose from. Joe Harless, Thomas Gilbert, Geary Rummler, and others have all developed detailed performance analysis methodologies with supporting tools, and a wide range of models and approaches to understanding organizational performance satisfy the criteria listed above. A smart and thoughtful manager could also develop a performance analysis process that meets these criteria. But what is critical is not the specific approach (there are many reasonable ones with good track records); it's that any performance analysis be rigorous, take a systems perspective, provide questions or tools to help in the analysis, and offer a model or explanation for effective performance.

Performance Solution Notebook


Part of the reason that health-care-acquired infections (HAIs) have become so prevalent is the poor job that health care organizations have done in analyzing this issue, especially in looking at causality. As mentioned earlier, the medical practice of overmedicating with antibiotics has produced MRSA (methicillin-resistant Staphylococcus aureus) and other strains of antibiotic-resistant infections. Even though health care staff are aware of how people can spread HAIs, there is still insufficient focus on how some of these infections are disseminated. A high percentage of HAI initiatives in the United States have focused primarily on hand washing (Goldman 2006). But hand washing isn't sufficient to deal with most HAIs. For instance, British hospitals have banned doctors from wearing ties, jewelry, and long sleeves exactly because these items can help spread HAIs (A 2007), while U.S. facilities have often focused primarily on washing hands rather than on other ways that staff can transmit HAIs. One study found that TV remotes were the worst carriers of bacteria in hospital rooms (Lentini and Mouzon 2007), yet most hospital HAI initiatives fail to address this.

A critical part of any causal analysis is approaching a performance gap with a focus on understanding why, rather than on fixing blame. Emphasizing blame and seeking scapegoats makes it harder to understand performance issues. “This orientation toward improving systems rather than blaming people who make mistakes is critical, since it encourages caregivers to report adverse events and near misses that might be preventable in the future” (Goldman 2006, 121). Yet a focus on blame is exactly the approach that many health care organizations have taken with HAIs. As a result, it's often difficult for facilities to identify where infection rates are higher and what the true causes of infection are at a specific facility.

Additionally, many HAI initiatives fail to recognize the realities that health care practitioners face. Though consistent hand washing seems like a simple and easy request to make of medical professionals, it ignores the time pressures that most health care workers operate under and how easy it is to inadvertently slip up. Remembering to wash between patients while focusing on a range of other issues, and making the time for hand washing dozens or even hundreds of times each day, can easily result in slips where a staffer forgets or doesn't notice inadvertent contamination (Gawande 2007). Consequently, simplistic hand-washing programs that don't acknowledge why conscientious and caring medical professionals fail to consistently disinfect aren't really addressing the root causes of time pressure and distraction.

Hitting the Mark


Most executives just shoot from the hip when it comes time to address problems. Who has the time to study the problem? Unfortunately, this approach fails to take into account the important lessons identified in this chapter:

• Start by focusing on accomplishments. Accomplishments are what the organization values, and behavior is a deceptive metric for performance. Accomplishments should be measurable and viewed in terms of results, and ultimately there should be a quantifiable gap.

• Once you've identified the performance gap (expressed as accomplishments or outcomes), then look at the behavior, tasks, and knowledge necessary to produce that result.

• Don't forget that especially with white-collar and service jobs, there are probably multiple ways to achieve the same outcome. So be careful about imposing a specific set of behaviors on workers as the only correct method or the “best” way to do the work.

• It's not enough to identify a performance gap; you need to determine the cause of the gap. And the root cause determines the appropriate solutions to improve results.

Once the performance analysis is done, what then? That should be the point where managers correct problems and build higher levels of performance. In short, this is about coming up with the solutions to the problem. You've seen in previous chapters why so many managers do a poor job figuring out what the problem is or what's causing it. The next chapter examines why so many managers mishandle the solution step.
