2

The New Business/IT Conversation


God is in the details.

—Gustave Flaubert

The Devil is in the details.

—Friedrich Wilhelm Nietzsche

What are God and the Devil doing in there—arguing theology over a beer?

—Bob Lewis

Interlude: This Isn’t New

Back before methodology was a word, I found myself in front of an IT project of some size and scope. My job: design the new system, as in document the requirements and create the specifications.

I was, I can now admit, completely unqualified for the job. Compounding the felony, in those days of my youth I was utterly unwilling to admit my ignorance about this or, for that matter, any other topic.

Which was why I didn’t do what any sensible programmer would have done, which would have been to head straight to the library. Instead I asked myself what I was qualified to do that would seem plausible enough that I could fake my way through the assignment.

The answer, as I explained to my possibly too-trusting manager, lay in the anthropology classes I’d taken in college. These provided a framework for interviewing respondents, documenting and organizing their knowledge.

And so, instead of asking what we now call business SMEs the questions other business analysts asked when interviewing business users before designing their systems, I asked them to explain what they did every day to get their jobs done. In follow-up conversations we discussed what software could automate and what it couldn’t, and what information the software would have to manage in order to automate it.

Eventually I wrote it all down—a description of how to handle purchasing, issuing, and receiving for the company’s sixteen “nonstrategic” inventories and how the new system fit in.

Through a combination of dumb luck and a highly supportive business sponsor (we didn’t know that’s what the role was called, but that’s what he was) we built the system and implemented the business processes it supported. Remarkably, it all worked as planned—not just the software but the whole new way of handling purchasing, receiving, issuing from stock, and overall inventory management.

The transformation from chartering IT projects to achieving intentional business change starts with a simple-sounding adjustment: Instead of asking a so-called business customer, “What do you want the system to do?” everyone in IT will get in the habit of asking, “How do you want your part of the business to run differently and better?”

Depending on what happens next, the IT representative might add, “I have some tools that might help you figure that out.”

That’s where the simplicity ends and a deep conversation about business function optimization, experience engineering,* or decision support begins.

These are the three major types of business change that can benefit from better information technology. Business function optimization is about how the work of the business gets done and can get done better. Experience engineering is self-defining—it’s about improving the experience everyone has when doing the work of the business.* Decision support helps decision-makers make better decisions by making data and data analysis more available, reliable, and useful.

Collaborating with business executives and managers to design each of these in achievable terms will become the basis of the new business/IT conversation and the standard of competence for the new business analyst, who will often be retitled “internal business consultant.”

Making them happen will become the standard of competence for the IT organization.

One at a time . . .

Business Function Optimization

Current State of the Art

You’re probably more accustomed to the term “business process reengineering.” The reason you won’t see this term here is that “process” is one of those words that have more than one meaning, depending on who’s using it and how precise they want to be.

In this book we use it to mean a series of planned steps that lead to consistent, repeatable, predictable results. The gold standard for process-ness is the manufacturing assembly line, which is where most process optimization theory comes from.

Modern process optimization practice has coalesced into four major disciplines. They are, in our utterly unbiased perspective:

•   Lean: reduces waste, and therefore costs; also, incidentally, reduces defects

•   Six Sigma: reduces variability, and therefore defects; also, incidentally, reduces costs

•   Theory of Constraints: removes process bottlenecks, improving capacity (also known as throughput)

•   Reengineering: removes large sums from the corporate coffers while increasing risk

Whatever else you do, don’t choose one of these as the foundation for your business function optimization program. That seems to be the standard pattern, and it turns these disciplines into competing religions instead of complementary problem-solving techniques. The frequent result: an octagonal peg that fits the octagonal holes quite nicely but dents and mars the rhomboidal ones.

In case the point isn’t clear: imagine your business processes are, as is the case in most businesses, good enough to get the job done but not so good that significant improvement isn’t possible. Do you really think the problem with all of them is excessive waste? Or a too-high defect rate? Or insufficient capacity?

In our experience, and, we think, in yours, different business functions, which have different purposes, will have different flaws that require different techniques to address them.

Once you understand this, you’ll understand the next challenge you’d face if you were to adopt one of these disciplines as your corporate business function improvement religion: the expertise needed to establish these programs doesn’t come cheap, even if you choose only one. Develop the expertise you need so as to apply the right one to the right problem and you’ll find you’ve invested quite a bit.

Which will then bring you to the next hurdle: for each business function you’ll find yourself with each process improvement discipline’s leaders insisting theirs is the right one for the job.

Another reason not to start with these: they’re all process improvement disciplines, whose practitioners rarely acknowledge that not all business functions are best implemented as processes in the first place.

And one more: none of these disciplines provides tools designed to leverage the capabilities that new information technology brings to the business.

Business Function Optimization: Starting the Conversation

We’ve found that one of the best ways to approach the subject of business function optimization is to start by explaining the difference between business processes and business practices. Business processes are, as noted above, assembly lines. Perform each step correctly and in the right sequence and you can’t avoid achieving success.

Business practices, in contrast, depend more on knowledge, experience, and judgment than on following a fixed set of steps. Project management is an excellent example. All good project managers know the essential steps they need to follow in order to bring a project to successful conclusion. They also know that following the right steps in the right order is just the ante that gets them into the game.

Those who execute the steps in a business process require training in the steps they execute. Practitioners of a business practice require quite a lot more: street smarts to accompany their book smarts, judgment, and the bag o’ tricks that comes only from experience.

Business practices rarely follow a sequence of simple steps. The sequential steps are broadly stated categories of action (gather information), not specific actions (insert flange A into slot B). It’s up to the practitioner to figure out exactly what information must be gathered at this time, and what the best ways of gathering it are.

One more point: the question of process versus practice isn’t a matter of one-or-the-other categorization. They’re the poles of a continuum of possibilities.

Extending the Conversation: The Six Dimensions of Business Function Optimization

Message from Bob and Dave: if you ignore the entire rest of the book, don’t ignore what follows.

When it comes to process management, a very old wisecrack has it that you can make things cheaper, faster, or better—pick two.

The concept is right. When you try to optimize a business function you face trade-offs among the different parameters you could optimize for.

But to make the concept useful you need to refine it a bit, because each of these three business function characteristics has two separate and independent aspects to it. You can, in fact, optimize a business function for any of six different dimensions:

•   Fixed cost: the cost of turning on the lights; the onetime investment in infrastructure you need to make for the process to work as you want it to work.

•   Incremental cost: also known as marginal cost, the additional cost needed to output one more of the work products the business function exists to produce.

•   Cycle time: the total time that elapses between starting work on a product and its rolling off the factory floor.

•   Throughput:* the number of products that roll off the factory floor in a unit of time.

•   Quality: in plain English, a highly ambiguous word. Here we use Philip Crosby’s definition: quality means adherence to specifications and, from the perspective of business function optimization, addressing any elements of the business function that cause defects.1 This differentiates quality from . . .

•   Excellence: in plain English, another highly ambiguous word, often used as a synonym for quality. Here, excellence means the presence of features and characteristics customers value in the business function’s work products and the business function’s adaptability—the extent to which it can adjust, tailor, and customize as circumstances demand.

What’s most important in conversations about the six dimensions of process optimization is ranking them. That’s because the choices available for improving any one of them inevitably result in trade-offs among the others.

Imagine, for example, that quality is, for a given process, your top priority. One of the most popular strategies for maximizing quality is to reduce excellence—to prohibit tailoring and customization and turn down most requests for exceptions. Quality would outrank excellence.

And if you consider excellence to be your second-highest priority? That probably means increasing physical inspections—a step that results in higher incremental costs.

Or, imagine that for a different process, minimizing incremental cost is most important. This almost always requires investments in systems and other infrastructure—reductions in incremental cost call for increases in fixed costs.

And so on.
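The ranking exercise itself can be captured in a few lines. A minimal sketch: the six dimension names come from the list above; the example ranking at the end is purely illustrative, not a recommendation.

```python
# The six dimensions of business function optimization, per this chapter.
DIMENSIONS = {
    "fixed_cost", "incremental_cost", "cycle_time",
    "throughput", "quality", "excellence",
}

def rank_dimensions(ordered):
    """Return {dimension: rank}, where rank 1 = most important.

    Forces the conversation the chapter calls for: all six dimensions,
    each ranked exactly once, with no ties and no dodging.
    """
    if len(ordered) != 6 or set(ordered) != DIMENSIONS:
        raise ValueError("rank all six dimensions exactly once")
    return {dim: rank for rank, dim in enumerate(ordered, start=1)}

# Example: quality outranks excellence, and fixed cost matters least.
ranking = rank_dimensions([
    "quality", "excellence", "throughput",
    "cycle_time", "incremental_cost", "fixed_cost",
])
```

The point of making the ranking explicit is that every later trade-off discussion can be settled by pointing at it.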

Incrementalism versus Starting Over

When collaborating with front-line business managers—the folks directly responsible for optimizing business functions—an important dimension of the conversation should be about incremental optimization versus starting over.

As a general rule, incrementalism should be the general rule. It entails less risk, delivers business results faster, and is far less disruptive besides.

Starting over is the right choice in just a few situations: (1) for strategic reasons the company has decided to replace a major business system, which means the system on which the business function runs won’t be there anymore for it to run on; (2) the business process in question follows pretzel logic that’s a response to major system deficiencies; and (3) after multiple mergers and acquisitions, the company has ended up with multiple versions of a business function, has now decided to centralize or standardize, and for political reasons or legitimate dissatisfaction with all versions has decided to redesign the business function from scratch.

One more, and it’s a tough one to spot: (4) the way you wash your clothes right now is to beat them against rocks in the river. Incrementalism would most likely lead you to program robots to beat them against the rocks more efficiently. It’s hard to see how incrementalism could lead you to invent a washing machine.2

Incrementalism: Theory of Constraints Revisited

Eliyahu Goldratt’s Theory of Constraints3 is usually thought of as a way to improve throughput. It works by identifying the worst process bottleneck,* fixing it, and then identifying the worst remaining process bottleneck.

That’s about right, except that we need to generalize “bottleneck” so it isn’t limited to ways to improve throughput.

Here’s our modified take on how to go about it:

Step 1: Rank the six dimensions of optimization in descending order of importance.

Step 2: Decide whether any or all of the top three dimensions are unsatisfactory. If none of them is unsatisfactory, be happy and find something else to occupy your attention. Otherwise, any highly ranked and unsatisfactory dimension is called a pain point.*

Step 3: Map out the business process. We’ve found that a combination of black-box analysis, which describes processes in terms of their inputs and outputs only, and so-called swim-lane diagrams for describing the actual process flow is a good way to go about this. In any event, process mapping is a vital skill for any business analyst who wants to become an internal (or, for that matter, external) business consultant.

Beyond this quick sketch, techniques for mapping business processes exceed the scope of this book, as they’re described in exquisite detail in many other existing works.

Step 4: Identify the worst bottleneck steps in the process map, with bottleneck defined as a process step that causes a pain point.

Step 5: Fix one of the worst bottlenecks. If you can’t fix a bottleneck without changing or replacing one or more business applications, as is the case more often than not, work with IT to change or replace them.

Step 6: Loop to step 4 until you reach the point of diminishing returns.

With incremental business function optimization, information technology changes are a consequence of desired business changes.
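Steps 4 through 6 amount to a greedy loop. A minimal sketch, assuming you can score how badly each mapped step aggravates your pain points; the scoring and the `fix` callable are hypothetical stand-ins for what is, in real life, measurement, budget, and judgment.

```python
def optimize(pain_scores, fix, threshold=1.0):
    """Generalized Theory of Constraints loop (steps 4 through 6).

    pain_scores: dict of process step -> how badly it causes a pain
        point (a highly ranked, unsatisfactory dimension); 0 = no pain.
    fix: callable taking a step name and returning its pain score after
        remediation (which may mean changing a business application).
    Stops at the point of diminishing returns.
    """
    while pain_scores:
        worst = max(pain_scores, key=pain_scores.get)   # step 4
        improvement = pain_scores[worst] - fix(worst)   # step 5
        if improvement < threshold:                     # step 6: done?
            break
        pain_scores[worst] -= improvement
    return pain_scores
```

With, say, scores of `{"receive": 5.0, "inspect": 2.0}` and a `fix` that fully remediates each step, the loop clears the worst bottleneck, then the next, then stops.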

Starting Over: Business Function Replacement, Version 1—Designing and Building Everything from Scratch

This isn’t a treatise on business function design. If you’ve decided to start over and redesign from scratch—to reengineer—here’s a quick sketch:

Step 1: Black box. Create an input/output view of the function, or, more accurately, an output/input view. Start with outputs, not inputs, as it’s the outputs that are the point of the function.

Step 2: Optimization. Rank the six dimensions of process optimization.

With these two steps you’ll have defined what you’re trying to accomplish: the what.

Step 3: White box. Use swim-lane diagrams to design the how. Four tips here: (1) each swim lane should have between five and nine boxes in it; (2) resist the temptation to add more, instead creating new swim-lane diagrams that drill down into a box in the primary swim-lane diagram; (3) to account for your information technology requirements, add one or more lanes that treat each application as a robot that’s just another actor in the process you’re designing; and (4) try starting with the process flow as you want customers or business users to experience it. That’s often an excellent way to create the first, top-level white-box description.

Step 3a: Process bypass process. Your goal is to improve things, not to turn your company into a stifling, choking bureaucracy that’s perfectly designed to drive customers away in frustration.

Give employees a way to escape from the function’s step-by-step design when the design doesn’t fit the situation. These are called exceptions, and they happen all the time.

Step 4: The exalted state of good enough. You aren’t going to achieve perfection, so don’t bother trying. Implement the new function when it, including its process bypass process, is good enough to get the job done. After that, the same incremental optimization method we just described takes over.

Starting Over, Version 2: Systems Replacements

Companies the world over are finding themselves trapped by their legacy systems. “Legacy” is, by the way, a strange term to have made its way into our IT vocabularies. In any other context a legacy is something you’re delighted to be the beneficiary of. A legacy system, on the other hand, is a leaky boat, becalmed in the Sargasso Sea of your enterprise. It’s something you neither value nor can easily escape.

And oh, by the way, one or more critically important business processes rely on it.

Eventually, everyone involved agrees it’s time to retire it.

Mistake #1: Looking in the wrong direction. This juncture is one of two places where most companies get it wrong. They not only define an IT project (“Replace the mainframe” or something along those lines) but define it in terms of where they’re coming from, not where they’re going.

They aren’t even implementing an ERP (enterprise resource planning), or CRM (customer relationship management), or warehouse management system. They’re bent on retiring the mainframe. And so they do, eventually. When they do, they “modernize” the system, which usually entails replacing tens or hundreds of thousands of lines of batch COBOL code with tens or hundreds of thousands of lines of batch Java or C# code, proudly deployed in “the cloud,” as if that adds any business benefit.

It doesn’t. The sole business benefit is a modest reduction in software license fees, with little or no additional business function optimization; not even much in the way of additional future flexibility.

Mistake #2: Asking the wrong question. Imagine your company avoids mistake #1 and decides what it’s moving to. For convenience, imagine the plan is to replace its legacy systems with a modern COTS (commercial off-the-shelf software)* or SaaS (software as a service) ERP solution.

That’s when companies typically face the question of whether to implement the new system “plain vanilla” or with “chocolate sprinkles”—in non-gelato terms, whether to configure the application to support the company’s current business functions or to force every business function manager in the company to adapt to the new system’s default way of doing things.

That’s such a wrong question that many businesses never recover from asking it.

It’s the wrong question because it ignores the six dimensions of business function optimization.

As a wise IT master once explained, software is just an opinion. It’s an opinion about how your business should handle whatever process or practice the software is designed to support.

Which leaves you to answer this question: Is the software’s “opinion” better or worse than your company’s current opinion about the subject?

If it’s better, your organization should unhesitatingly “change its mind,” which is to say, it should adopt the process or practice embedded in the software. If it’s worse, you should reconfigure the new software to support how you do things right now.

Which leaves the question of how to go about comparing the two. It’s a question that’s easily answered, at least in principle.

Start with the same six dimensions of optimization we’ve been beating you about the head and shoulders with the last few pages, and for the business function in question, rank them in order of importance.

Next, measure how the current business function performs with respect to the dimensions you’ve ranked most important. That’s your baseline—data your business collaborators presumably collect as a matter of course.

They don’t? Really?

Oddly, we’ve found that relatively few organizations do a good job of this. In any event, do what you have to so you know how the current business function performs.

Then . . . and this is the hard part . . . find a way to simulate your new application’s embedded approach to doing the same work, and figure out how it performs with respect to the dimensions of optimization that are most important for you.

Implement whichever approach will perform better.
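Once the measurements are in hand, the comparison itself is simple arithmetic. A hedged sketch: the rank-based weighting (rank 1 counts six times as much as rank 6) and every number below are illustrative assumptions, not measurements or a prescription.

```python
def weighted_score(measurements, ranking):
    """Score one 'opinion' across the six dimensions.

    measurements: dimension -> performance on a 0-10 scale, higher is
        better (so invert cost and cycle-time metrics before scoring).
    ranking: dimension -> rank, where 1 = most important.
    Weight = 7 - rank, so rank 1 counts six times as much as rank 6.
    """
    return sum(score * (7 - ranking[dim])
               for dim, score in measurements.items())

ranking = {"quality": 1, "excellence": 2, "throughput": 3,
           "cycle_time": 4, "incremental_cost": 5, "fixed_cost": 6}

# Baseline: how the function performs today (illustrative numbers).
current = {"quality": 6, "excellence": 8, "throughput": 5,
           "cycle_time": 5, "incremental_cost": 4, "fixed_cost": 9}
# Simulated: the packaged software's embedded way of doing the work.
package = {"quality": 8, "excellence": 5, "throughput": 7,
           "cycle_time": 6, "incremental_cost": 7, "fixed_cost": 4}

winner = ("package"
          if weighted_score(package, ranking) > weighted_score(current, ranking)
          else "current")
```

The hard part, as noted, isn’t this arithmetic; it’s getting honest measurements on both sides of the comparison.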

The decision isn’t plain vanilla versus chocolate sprinkles. Call it mint chocolate chip versus mango sorbet.

We do, by the way, recognize that persuading everyone won’t be as simple as just showing everyone how the numbers compare. It should be, and if your organization has a culture of honest inquiry (see below), it will be easier; but people, including you, bring their backgrounds, experience, and biases to every one of these decisions.

But as they’ll bring them whether you’re arguing over the chocolate sprinkle count or our deux sorbets alternative, you might as well go with the sorbets.

Configuration versus Customization

You might have noticed we didn’t talk about customizing the new application as the alternative to plain vanilla implementations. We used the term “configure.” The distinction is huge.

Configuration means using tools built into the application you’ve licensed specifically to modify its functioning to your organization’s needs, without violating or changing its design or code in any way.

Customization means fiddling with the code, or perhaps writing a whole new satellite application that bypasses the integration points the package includes for exactly this purpose.

The difference: when the time comes for IT to update the package to a new version, configurations rarely cause challenges. Customizations, on the other hand, greatly increase the cost and risk when updating to new versions.

So while vanilla versus chocolate sprinkles is a conversation to be avoided, customizing a package so as to satisfy a “requirement” is something to avoid except for the most dire of circumstances.

Experience Engineering

Experience engineering is the second type of business change the new IT will support. To a certain extent it’s a matter of art and aesthetics, but mostly, done right, it’s data-driven engineering. There’s a short version and a long version.

The short version: Find a bunch of naturally irritable people. Give them standard tasks to accomplish. Ask them what they find irritating in accomplishing them.

Fix what they tell you, if you can.

That’s experience engineering in a nutshell.

It isn’t, of course, quite that simple, which leads to the long version.

Experience engineering starts by understanding what users are trying to accomplish and the tasks they have to undertake to accomplish it, and finishes by optimizing each touchpoint.

The rule about tasks is straightforward: users, whether external customers or internal staff, should have as few tasks as possible standing between them and the outcome they want.

Touchpoints are a bit more interesting. They’re the intersection of tasks and channels. As an example, a customer might want to schedule an appointment with someone in your company. That’s a task—one step in accomplishing something or other. Some customers might prefer to schedule appointments using their telephone. That—executing the task of scheduling an appointment on the channel that is the telephone—is a touchpoint. Scheduling an appointment via online chat is a different touchpoint, as is scheduling one on an online calendar or scheduling one using a mobile app.

As a general rule, no matter what the task, users will have more than one channel available, and they won’t stick to the same channel for all the tasks they need to undertake: a first task might be executed via a smartphone app, the second might be via online chat, and the third might be through email or a phone call.

Experience engineering is a matter of making it as easy as possible to get something done, by minimizing both the number of tasks needed to get it done and the level of annoyance resulting from each touchpoint.
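Since touchpoints are the cross product of tasks and channels, the catalog nearly builds itself, and it grows fast. A sketch with illustrative task and channel names:

```python
from itertools import product

# Illustrative names; substitute your own tasks and channels.
tasks = ["schedule appointment", "reschedule", "cancel", "pay"]
channels = ["phone", "online chat", "web calendar", "mobile app"]

# Every task/channel pair is a touchpoint to design and to measure
# for irritation; 4 tasks x 4 channels is already 16 of them.
touchpoints = list(product(tasks, channels))

# Users hop channels between tasks, so any touchpoint may be followed
# by any touchpoint for the next task, e.g.:
journey = [("schedule appointment", "mobile app"), ("pay", "phone")]
```

Each new channel multiplies the catalog rather than adding to it, which previews the architectural point coming up in the step-by-step walkthrough.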

Step by Step

For each outcome users might want to accomplish:

Step 1: Develop personas. A persona, in case you’re unfamiliar with the term, is a way to categorize users into typical groups that experience your company, and then to characterize them by way of a convenient shorthand. “Demanding Dan” might be one persona, consisting of all difficult but highly profitable customers; “Agreeable Anne” might be the persona for the type of customer who accepts whatever experience they happen to get without complaint, until they get on social media.

These examples are oversimplified. The personas you’ll probably want are cross-classifications that include demographics like age, sex, marital status, and income level; psychographics like extroversion/introversion; social media anger management; skill categories of various kinds; or any number of other characteristics.

Step 2: For each persona, consider how they experience your company. Do this in terms of broad outcomes, and consider the tasks they might have to undertake to accomplish each outcome, such as researching products, purchasing an item in person or online, returning an item, or asking for service.

Step 3: Catalog the touchpoints resulting from step 2, each combination of task and channel. When your loyal authors were wee laddies this was simple. Most personas either visited your business or called. Now, depending on the task, any given persona might choose interactive voice response via telephone, calls to your customer service call center, calls to a personal representative, email, online chat, your website, a mobile app, or your company’s Facebook page . . . to name some of the more prominent channels.

Step 4: That’s a lot of touchpoints. Even more interesting, for a given task, personas will expect to be able to accomplish it regardless of channel, and will expect to perform the next task using whatever channel is most convenient for them, regardless of which channel they used for the previous touchpoint.

This has a significant implication for your company’s technical architecture: application capabilities must be made portable across all channels. Otherwise, the cost of supporting them all will increase exponentially as you add channels; meanwhile, the level of user dissatisfaction will increase exponentially if you fail to add them.
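One common way to get that portability (an architectural sketch under our own assumptions, not a prescription) is to implement each capability once, behind a channel-neutral interface, with each channel reduced to a thin adapter. All names here are illustrative.

```python
def schedule_appointment(customer_id: str, slot: str) -> dict:
    """The single, channel-neutral implementation every channel calls."""
    return {"customer": customer_id, "slot": slot, "status": "confirmed"}

def from_web_form(form: dict) -> dict:
    """Website adapter: translate the form, delegate, nothing more."""
    return schedule_appointment(form["customer_id"], form["slot"])

def from_chat(parsed_utterance: dict) -> dict:
    """Chat-bot adapter: same capability, different front end."""
    return schedule_appointment(parsed_utterance["customer_id"],
                                parsed_utterance["slot"])

# Adding a channel means writing one more thin adapter, not
# reimplementing the capability in every channel's own silo.
```

The design choice is the point: the cost of a new channel stays roughly constant instead of multiplying across every capability.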

Step 5: Don’t rely on your own judgment. Don’t assume. Verify the validity of your personas by talking to or surveying groups. Validate your touchpoint designs the same way.

Also, don’t assume a single set of design principles will be valid for all personas.

For example, a persona who needs to achieve some particular outcome only occasionally might prefer a simple, uncluttered, intuitive user interface when visiting the web page that represents one touchpoint on the path to accomplishing it. A different persona—one who has to accomplish the same outcome a dozen times a day—will almost certainly value the efficiency that comes from having as much information and functionality available on a single page as possible.

Step 6: They don’t like you that much and don’t want to get to know you better. In the end, the best touchpoint design is to either eliminate the task it supports altogether, consolidate the task with some other task, or at least use what you know to navigate each user to where they’re most likely to want to be.

Take, for example, Uber. While you can schedule a pickup for some future date and time, its default assumption is that you want a ride right now—that’s where you start.

Also, its experience designers recognized that paying for a ride at the end of the journey is an irritating experience, even with credit card readers now common in the cab’s back seat. So Uber consolidated this task with the scheduling task.

It’s irritation removal at its finest.

Remember, with few exceptions, your goal, overall and touchpoint by touchpoint, is to irritate as little as possible the customers and users who correspond to each persona. And there are exceptions to the touch-them-least-is-touching-them-best guideline. Destination retail outlets like the Apple Store and Cabela’s sporting goods stores are examples; so are theme parks.

But for the most part, irritating users as little as possible is a lofty enough goal.

More Thoughts and Suggestions

Start with customers. Real, paying customers. Why would they want to take any of their valuable time to let you watch them as they try to accomplish standard customer tasks?

Some retailers have set up shopping laboratories—mock retail environments equipped with cameras and sophisticated tracking technologies. They pay customers to shop for stuff in them, tracking their movements, what their eyes fixate on, what kind of signage leads them to put something in their shopping cart, whether they’re more likely to buy the same merchandise if it’s on an end-cap, and so on.

But no matter how sophisticated the lab, there’s no certainty that customers who are willing to shop in exchange for money behave the same way as shoppers who want to buy stuff in real stores.

There’s also no certainty that shoppers in, say, a Cleveland suburb (if that’s where the lab is located) behave the same way as shoppers in downtown New York, Tallahassee, or Stevens Point.

Hint: Read Paco Underhill’s Why We Buy.4 Underhill suggests visiting real stores, finding a quiet spot, and watching real shoppers try to cope with what they encounter.

This, or the equivalent, is good advice for engineering all experiences your customers have with your company, to the extent it’s practical, at least.

For example, when a customer visits your website (and likewise your mobile app), you can track where they click, how much time they spend on different pages, and what they were looking at when they decided not to buy your merchandise.

Also, when the time comes to redesign your website or mobile app, recruit some friendly customers who are willing to look at your new designs and let you know what they like and don’t like about them, in exchange for a modest sum. Or, especially for direct-to-consumer websites and mobile apps, invest more to learn more. Labs where real users are monitored to see how they interact with your website or app have become the gold standard.

When they phone your call center, you can do more than just “record their calls for quality purposes.” For starters, you can figure out how often callers make the wrong choice when navigating your menus as well as how often they hit 0 or say “representative” when an automated alternative would have helped them accomplish what they wanted to accomplish. That’s along with the standard fare: queue time, abandon rates, and so on.

Beyond all this, artificial intelligence (AI) is encroaching, in the form of chat bots and email autoresponders. These are attractive from the perspective of cost minimization. But remember your goal of minimizing touchpoint-induced irritation. So especially with these AI-based solutions, test, test, and test some more to make sure they aren’t noticeably more irritating than their human equivalents.

Speaking of AI, here’s a look ahead, and a hint.

The look ahead: the technology is just about ripe for customers telling your automated systems, in plain language, why they’re calling.*

You don’t want your competitors to get ahead of you with this technology. If you decide you can’t afford it, figure out how to train humans to accomplish the same thing. If you can’t afford humans either, figure out how many customers you’ll lose to competitors that offer this level of service and the cost of that lost revenue.

The hint: never mind the quality assurance recordings. Ask the human beings who staff your call centers what the people they talk to find most aggravating about your company. They know. They deal with your crabbiest customers all the time and they’d be happy to tell you what’s making your customers crabby, all without a dime of investment in big-data social media analytics.

And a tip: ask them to track issues as they answer calls throughout the day. The loudest callers make the biggest impression. A small bit of tracking will help everyone keep loud callers in perspective.

And yes, in case you’re wondering, we do expect your average business analyst of the future to help guide their business counterparts through this thought process.

Decision Support

And so we get to decision support, the third type of IT-supported business change.

“Decision support” is an old but still useful term. IT has been trying for decades to build systems that help executives and managers make better decisions, with limited success at best.

The technology has improved over this span of time: from custom reports written by IT programmers, reading data maintained by the company’s business applications; to “user friendly”* report-writers that go after the same data so IT no longer had to be involved; to carefully designed data warehouses and the user-friendly business intelligence tools built to analyze their contents so as to make the business more intelligent; to modern big-data repositories (“data lakes”) that require data scientists both to look for useful patterns and to make sure the statistical analyses run against them, and the conclusions drawn from those analyses, are valid.

All of which constitutes technological progress. Whole books, of significant size and heft, have been written about business intelligence and the associated IT engineering. Unlike business function optimization and experience engineering, business intelligence implementations for the most part have been about business change from the day they were first envisioned. We have nothing new to offer on that front.

Except for this: none of it is worth a thing without a culture of honest inquiry.

A Culture of Honest Inquiry and How to Get One

Honest inquiry is a matter of embracing the conclusions that result from what the best evidence and soundest logic tell you. It’s a matter of understanding that your gut is for digesting food—your brain is where thinking takes place.

Reliable evidence, and relying on evidence, is vital to making smart decisions in business. In Good to Great, Jim Collins quotes Lyle Everingham, CEO of Kroger during its transition from muddling through to twenty-five years of outstanding performance: “Once we looked at the facts, there was really no question about what we had to do.”5 A&P, its lackluster competitor, only pretended to inquire: it created a new store concept, the Golden Key, supposedly to test ideas. Its executives didn’t like what the evidence told them, so they closed the Golden Key business.

Kroger had, and apparently still has, a culture of honest inquiry, where executives, managers, and employees do their best to use trustworthy evidence to drive decision-making. Creating a culture like this takes work, persistence, and sometimes political dexterity. Here are some specific measures you can take to foster a culture of honest inquiry in your workplace.

It starts with wanting to know what’s really going on out there. Enron and WorldCom happened, in part, because their executives were so busy trying to make their companies look good that they obscured what was really going on, even from themselves. Your dashboards, financial reports, and other forms of organizational listening are there to make you smarter. If that isn’t what you plan to use them for, don’t bother.

Confidence comes from doubt. Certainty, in contrast, comes from arrogance. If an employee is confident and can explain why, wonderful. If that employee’s certainty preempts everyone else’s ability to make their case, the employee is on the wrong side of things.

And yes, we are including the company’s top executives as employees as we say this.

Start every decision by creating a decision process. You don’t have to be in charge to encourage this habit. Just ask the question, How will we make this decision? That changes the discussion from who wins to how to create confidence in the outcome. The results: a better decision, a stronger consensus, and a few more employees who see the benefit of honest inquiry.

Don’t create disincentives for honesty. If you ask for honest data and use it to “hold people accountable,” you won’t get honest data. Why would you? The superior alternative is to employ people who take responsibility without external enforcement, so you don’t have to hold them accountable, and to make sure employees who give you honest evidence aren’t shot as unwelcome messengers. This works much better and takes less effort.

The “view from 50,000 feet” is for illustration, not persuasion. A high-level strategic view is essential for focusing the efforts of the organization. High-level logic, in contrast, is oxymoronic: detailed evidence and analysis are what determine whether the high-level view makes sense or just looks good in the PowerPoint.

Evidence too far removed from the original source is suspect. Don’t trust summaries of summaries of summaries, especially if they tell you what you want to hear. Even with the best of intentions the game of telephone is in play. And many of those trying to persuade decision-makers don’t have the best of intentions.

Be skeptical of those with a financial stake in the decision.

But don’t ignore them. A conflict of interest suggests bias but doesn’t automatically make someone wrong. Be wary and dig into their evidence, especially if their evidence is a summary of a summary of a summary—even more so if it tells you what you want to hear.* But if you demonstrate to your satisfaction that they’ve cooked the evidence . . . go ahead and ignore them from now on. They’ve earned it.

Beware of anecdotes and metaphors. They’re useful . . . for illustrating a point or for demonstrating that something is possible. For anything else you need statistically valid evidence. Yes, someone said there are three types of lies.6 He miscounted; argument by anecdote is far more pernicious than argument by statistics, and argument by metaphor is even worse. Yes, you do have to understand statistics well enough to evaluate the evidence. Sorry. That’s part of your toolkit.

Be alert for “solving for the number.” This is a popular management pastime that predates the technology. It has achieved increasingly high levels of false precision with the advent of the electronic spreadsheet, and even more with business intelligence software. It refers to the practice of starting with the answer you want and then fiddling with filters, adjusting assumptions, and, for the ultra-sophisticated, applying various statistical procedures to your data until you get the results you want.

If you work in a business without a culture of honest inquiry you’ll need time and patience to build the habit of rationalism. You won’t do so by preaching and lecturing about the general principle.

The way to build a culture of honest inquiry is one decision at a time. Especially, you can help build it by finding opportunities to be persuaded by evidence and logic and by making it okay for employees to change their minds.

And in case you’re wondering, yes, when it comes to decision support this is part of the new business/IT conversation. But this is a part that doesn’t rest with IT’s business analysts.

It’s a tough conversation the CIO has to have in the executive suite.

In Conclusion

In most organizations, CIOs, IT managers, and especially business analysts sincerely want to satisfy their internal customers. This means getting the product right, which in turn means establishing elaborate mechanisms for figuring out what the internal customers want the software to do.

With a bit less sincerity and a bit more cynicism, all parties would know the answer: these so-called internal customers don’t want the software to do anything. They want their part of the business to run differently and better; they want real, paying customers to have a great experience interacting with the company whenever, however, and whyever* they’re interacting with it.

And, they want to make more informed decisions whenever it’s possible for more and better information to help them do so. That’s what the new business/IT conversation will be about.

Information technology will often be part of the discussions.

If You Remember Nothing Else …

•   The new business/IT conversation begins with IT asking, How do you want your part of the business to run differently and better?

•   Business processes (your assembly lines) and practices (your knowledge, experience, and secret sauce) are the two poles of the continuum of how to organize the work that has to get done in an organization. Figuring out where on the continuum a specific business function should be placed is the starting point for making sure it’s properly designed.

•   You can’t effectively design a business function until you’ve defined its outputs and determined what inputs are needed to create those outputs; you also must know how the six dimensions of business function optimization rank.

•   Designing the customer/user experience is complicated. Success starts by setting this goal: make their experience as un-irritating as possible.

•   There’s no point implementing any decision-support technology until the enterprise has begun to institute a culture of honest inquiry. Decision-support systems and practices are valuable only to the extent they reinforce this culture.

What You Can Do Right Now

•   Educate the company’s business analysts to stop asking their collaborators in the business what they want the software to do, and instead ask them how they want their part of the company to run differently and better.

•   Educate business analysts in the fine arts of process design and optimization, and in experience engineering.

•   Educate every manager in the company in the six dimensions of optimization. Ban “vanilla versus chocolate sprinkles” debates when implementing commercial software packages, replacing them with six-dimensions-based process selections.

•   In the C-suite, introduce the idea of a “culture of honest inquiry” as a prerequisite for implementing better analytics capabilities.

* This is often called “customer experience engineering,” but as with IT leadership’s oft-given message that everyone in the department should have a “customer service attitude,” the word “customer” adds nothing of any consequence to the message while reinforcing a relationship model that should, as already emphasized, be discarded.

Companies generally benefit when IT staff have a service attitude. Likewise, experiences should be engineered.

When real, paying customers interact with your website or mobile app, they are, in effect, doing some of your company’s work.

We’ll cover a fourth type of business change (strategic, transformational change) in chapter 3.

* Here, we consider throughput and capacity to be synonymous. They aren’t actually quite the same thing—throughput is the actual number of work products the business function outputs in a unit of time, while capacity is the maximum potential throughput. But as throughput can be measured while capacity can only be extrapolated, for the purposes of this conversation, optimizing for one or the other is pretty much the same.

* We prefer bottleneck to Goldratt’s constraint because in other contexts we differentiate between problems and constraints, the difference being that we can solve problems. In this usage, constraints are conditions beyond our control that we have to work with or around.

* In this book. Most consultants call anything anyone gripes about a “pain point,” which might be why so many business improvement efforts base their priorities on whatever sticks to the wall instead of sound engineering.

The correct name for these is “Rummler-Brache diagrams,” to give credit where it’s due. If you don’t know what these are under either name, they’re a handy tool for describing work as it flows from one step to the next and from one actor to the next.

That thought you were holding? Goldratt’s original Theory of Constraints assumes the goal of all process optimization is improving throughput. We’re applying the same pattern to a wider range of optimization possibilities.

* Yes, it should be COTSS. But it isn’t, and there’s nothing any of us can do about it.

* Yes, yes, yes, if you want to be precise, the technology for letting your customers tell you why they’re calling has been around for a century. What’s new is that the automated systems will soon be able to accurately interpret what your customers say.

* With apologies to Rocky and Bullwinkle, to be more accurate, less user “fiendly.”

* For more on this subject, google “confirmation bias.”

* Not a word, but it should be.
