Chapter 2. Analysis

To use language is to enter into the territory of categories, which are as necessary as they are dangerous.

Rebecca Solnit, The Mother of All Questions

The only cost that matters is opportunity cost.

Larry Page

This chapter covers foundational patterns for analysis that you can broadly apply. They are MECE, Logic Tree, and Hypothesis.

MECE stands for “Mutually Exclusive, Collectively Exhaustive.” It represents a kind of metapattern. It offers a quick way to check that the building blocks of your strategy work are valid and complete. I call it a metapattern because it doesn’t produce any direct output that you can drop right into your strategy like many of the others. It’s a light form of analysis that’s broadly applicable across all the other patterns we’ll explore. 

Logic Tree is used by strategy consultants as a simple tool for determining a set of relevant problems and possible causes. It helps organize your ideas, making quick work of examining any problem.

The Hypothesis pattern is a way of making a guess, based on some supporting suppositions and data, about what the root problem might be.

The patterns we’re starting with are the most abstract. These are tools for analysis that will act as the underpinnings of any strategy work.

In the world of strategy consulting, analysis of this kind is performed on what they call cases. A case is a particular industry problem to be solved, like a detective “on the case.” Job candidates for consultant positions at McKinsey or Bain or BCG must go through the case interview, in which they use a framework of tools and a certain approach to analyzing a problem to properly define and understand it, so they can make good recommendations or solutions. This is not dissimilar from when we are asked to envision a project or an architecture or define a technology solution within a business context. It’s about how to make great choices from competing viable alternatives.

In business, opportunity cost refers to what happens when you pick one alternative from many: you may realize a gain from the one you pick, but forfeit any potential gains that could have been realized by the opportunities you didn’t pick. If the returns on the choice you picked are more than the returns you could have had otherwise, you made the best decision from the available options.

There are obvious questions we get asked a lot. Do you choose to upgrade the current data center, or move to the cloud? Should you build or buy? Should you train your teams on artificial intelligence in-house, or execute an “acqui-hire” (buy a company not for its technology or customers but to get its knowledgeable employees)? What database vendor should you go with? Do you rush to be first to market and get customer feedback even if your product is a bit buggy, or make it solid and delay the launch?

Answering these questions well is hard, because these are complex problems with many moving parts, and because there is considerable risk involved when decisions are hard to reverse, when they’re costly, or when you get only one shot at them. As architects, we’re asked to make recommendations with imperfect knowledge, and need to do research, try some stuff out, and make a call. The more times we show good judgment, make the right call through a fog of business uncertainties, and minimize opportunity cost and maximize returns, the more our own stock price goes up in the organization.

As a technology strategist, you have many jobs:

  • Survey the landscape across your industry, organization, customers, stakeholders, competitors, and employees.

  • Examine trends in technology.

  • Determine what current priorities, problems, and possible opportunities are presented to your company.

  • Analyze and synthesize these problems and opportunities into a course of action: decide what to do, and what not to do.

  • Make strong recommendations for how to allocate your company’s resources, in what way, in what places, to what extent, and to what end.

That is the work of the technology strategist, whether you’re an architect, director, VP, CTO, or CIO.

Because there is not unlimited money and time to invest in everything, strategy is about making the right recommendations to minimize organizational damage and positional disadvantage, and maximize advantage, profit, and benefit. The better you are at raw analysis, the more often you’ll make choices with higher probabilities of winning.

MECE

MECE, pronounced “mee-see,” is a tool created by the leading business strategy firm McKinsey. As stated previously, it stands for “Mutually Exclusive, Collectively Exhaustive,” and dictates the relation of the content, but not the format, of your lists. Because of the vital importance of lists, this is one of the most useful tools you can have in your tool box.

The single most important thing you can do to improve your chances of making a winning technology strategy is to become quite good at making lists.

Lists are the raw material of strategy and technology architecture. They are the building blocks, the lifeblood. They are the foundation of your strategy work. And they are everywhere. Therefore, if they are weak, your strategy will crumble. You can be a strong technologist, have a good idea, and care about it passionately. But if you aren’t practically perfect at list-making, your strategy will flounder and your efforts will fail.

That’s because everything you do as you create your technology strategy starts its life as a list, and then blossoms into something else. Your strategy is, at heart, a list of lists. Thinking of your work from this perspective is maybe the best trick to creating a sane, organized, productive context for your work. Let’s talk about lists for a moment.

There are two parts to a practically perfect list: it must be conceived properly, and it must be MECE, which we will define in a moment.

In a properly conceived list, two things are crystal clear:

  • Who the audience is

  • Why they care

You can test whether your list serves that audience by asking the following key questions:

  • Upon reading this list, can the audience make a decision they could not make before having the information in the list?

  • Upon reading the list, can the audience now go do something they could not have known to do before?

These are the two reasons to bother creating any kind of information in a strategy. In this context, there is little point, time, or patience for a document that merely helps a general audience “understand” something. Your lists must be lean. That means making them directive toward work that someone will go and do, or providing the data that allows a decision maker to decide the best course of action.

The RACI is a list. It answers the question for the project team of who is assigned to what role so that everyone knows who is in charge of what, who is the decision maker for what, and who is doing the work, and if someone sees his name on the list with an “R” by an item, he can go do that work. The Stakeholder List is primarily for the project manager. It lets him decide whom to include in what meetings and whom to contact for certain questions.

But if these, and all the many other lists you create as part of your technology strategy, are not MECE, your building blocks will be weak and your strategic efforts will crumble. Let’s look at some examples to make this clear.

This formula is MECE:

Opportunity Cost = Return of Most Lucrative Option – Return of Chosen Option

This formula is MECE:

Profit = Revenue – Cost

Revenue – Cost = Profit is MECE. That’s because together those three items make a complete thought, divided into terms that don’t overlap, and nothing is left out. All of the parts of the money are accounted for within the same level of discourse. It is nonsense to leave out Revenue and simply state “– Cost = Profit.” There are only two ways to increase profit: increase revenue or decrease costs. Recognizing the formula as MECE can help remind you to address both the cost and the revenue aspects in your strategy.
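To make the arithmetic concrete, here is a minimal Python sketch covering both formulas; the options and figures are made up, purely for illustration:

    # Hypothetical returns for three options an architect might weigh (made-up numbers).
    returns = {"upgrade_data_center": 1_200_000, "migrate_to_cloud": 1_500_000, "do_nothing": 0}

    chosen = "upgrade_data_center"
    most_lucrative = max(returns.values())

    # Opportunity Cost = Return of Most Lucrative Option - Return of Chosen Option
    opportunity_cost = most_lucrative - returns[chosen]
    print(opportunity_cost)  # 300000: the gain forfeited by not picking the cloud migration

    # Profit = Revenue - Cost, for the chosen option (again, hypothetical figures)
    revenue, cost = 5_000_000, 3_800_000
    profit = revenue - cost
    print(profit)  # 1200000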

This list is MECE:

Spades, Diamonds, Hearts, Clubs

This list is MECE:

Winter, Spring, Summer, Fall

Each entry in the list is mutually exclusive of every other one. There is no overlap in their content. Winter ends on a specific day of the year, and then the next day is the start of Spring. Every date on the calendar is, with certainty, part of one and only one season. There is no card in the deck that is part Spades and part Diamonds.

The elements in the list, when taken together as a collection, entirely define the category. No item is left out, leaving an incomplete definition. Thus, the list is collectively exhaustive.

This is not MECE:

North, South, West

It’s not collectively exhaustive. It fails to include East, and is therefore an improperly structured list.

Consider the following list:

Revenue – Cost = Profit. Free Cash Flow.

This is not MECE because “free cash flow” is not at the same level of discourse as the other items. It is true that free cash flow is an important part of any public company’s earnings statements. But that is unrelated to this equation, even though they appear to all be in the category of “stuff about money in a company.” That’s a weak category for a list because it’s not sufficiently directed to an audience for a goal.

What about this one:

Internal Stakeholders, External Stakeholders, Development Teams

This isn’t MECE because “internal” and “external” divide the world between them. Development Teams are a subcategory of Internal Stakeholders for a technology strategy.

Elements that are subcategories of other elements must not be included. Consider this list:

North, South, Southwest, East

This is not MECE because it leaves out one of the elements, West, and so is not collectively exhaustive. It also includes Southwest, which is not topologically on the same plane as the other elements. It dips into a lower level of distinction, as in the “free cash flow” example. Southwest is contained within the higher level of abstraction of South. So the elements on this list are not mutually exclusive.

These examples are deliberately obvious in order to illustrate the point. But they share an attribute that precious few lists in the world have: they are enums by definition. It is clear what goes on the list and what doesn’t. Most things in life are not this simple.

Consider the following list of departments or job roles in a dev shop:

  • Software Developers

  • Architects

  • Analysts

It’s not exhaustive: we left out Testers, and other roles depending on your organization, such as Release Engineers, Database Administrators, Project Managers, and so on. To test if our list is MECE, we must ensure we have pushed ourselves to think of all the relevant components that make up that category.

Remember the first rule: know your audience. Your longer, more detailed lists should be kept for your private analysis to help you reach your conclusion, or reserved for lists of things to be done in the project, such as a work breakdown structure. But you don’t want long lists when working with executives because they have Executive ADD. Even though you’ll worry that you’re leaving crucial things out, just give them the summary, but make it MECE. Then you can reveal only the headline: the impactful conclusion that makes a difference to your audience.

Consider this list of age groups:

  • 0–5
  • 6–10
  • 11–15
  • 16–25
  • 26–35
  • 36–45
  • 46–54
  • 55–65
  • 66–75
  • 76 and above

This list is technically MECE. None of the categories overlap, and the sum of the subcategories equals the whole category. It might be OK for a data scientist doing customer segmentation. But probably not even then. It’s too fine-grained and low-level, so it’s not very good for strategy work. You need to keep your visor higher; look more broadly toward the horizon to distill the few things that really make an impact and drive change. It’s more analysis and art than science. So even though the list is technically correct, you will lose your audience with details like this, and you can find ways to cluster and consolidate the groups better, along the contours of a real difference or divergence, depending on your own organization’s products, services, and markets.
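Because these age bands form a simple enumeration, you can even check both MECE properties mechanically. Here is a minimal Python sketch, assuming integer ages and treating 120 as an upper bound for the exhaustiveness check:

    # Age bands from the list above; None means "and above".
    bands = [(0, 5), (6, 10), (11, 15), (16, 25), (26, 35),
             (36, 45), (46, 54), (55, 65), (66, 75), (76, None)]

    MAX_AGE = 120  # assumption: cap "76 and above" so the exhaustiveness check is finite

    def covered_ages(band):
        lo, hi = band
        return set(range(lo, (hi if hi is not None else MAX_AGE) + 1))

    all_sets = [covered_ages(b) for b in bands]

    # Mutually exclusive: no age falls into two bands.
    mutually_exclusive = all(a.isdisjoint(b) for i, a in enumerate(all_sets)
                             for b in all_sets[i + 1:])

    # Collectively exhaustive: every age from 0 to MAX_AGE falls into some band.
    collectively_exhaustive = set.union(*all_sets) == set(range(MAX_AGE + 1))

    print(mutually_exclusive, collectively_exhaustive)  # True True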

Let’s look at a quick example of how to apply this idea of MECE lists.

Applying MECE Lists

Imagine you’ve been enlisted to create a recommendation to the CTO for a new database system to replace your legacy system. If you merely state the single database system you want to buy, any responsible executive will reject your recommendation as heavily biased, poorly considered, and potentially reckless.

So we want to first consider our audience, with empathy, and always ask: Who is this for, and what do they need to know either to make a decision or to do the thing in question?

Your deciding audience wants to know that they have been given a clear, thorough, thoughtful, unbiased proposal and that they are not being manipulated. In our empathy, we realize that everyone has a boss, and that no one in a company of any size just makes a decision in a vacuum. It’s not the CTO’s money. So your CTO must in turn answer to his bosses for the system he selects, and is accountable for its success. Your recommendation will be successful if you give your deciding audience a list of MECE lists.

But the list of database system choices is potentially in the thousands. It is impossible to include all of them, and impractical and unhelpful to include even 20 of them. Being ridiculous is not what is meant by “collectively exhaustive.” So first we’ll create a list of criteria to help us make our final list MECE. Include three to five factors on which you will base your selection and write those down, as they become part of your recommendation too. You’re showing your audience how you came to your conclusion, just like showing your work in math class: you’re not just giving the answer, but providing the steps by which you arrived at it. This helps the audience follow your story and agree with your conclusion.

Then we’ll perform a survey of the landscape, including systems that meet the criteria. Include open source alternatives as well as commercial vendors. We might have a few of each. If we recommended only the one we already wanted, we would miss the chance to perform the analysis, squander an opportunity for learning that might change or augment our view, and lose confidence in our choice and ability to execute. Including only our one recommendation would certainly and immediately invite considerable skepticism and questioning about the alternatives and how we considered them.

So make a MECE list of options. The list is exhaustive according to your chosen criteria. Say you have 8 or 10 options in your list of “all the database systems considered.” Say so in your recommendation. It shows you’ve done your homework and suggests less bias and a more data-driven, analytical approach. Then say you narrowed it down to five options to present. That list includes two you reject and state why. You have a list of three options remaining.

For each element on your list of remaining recommended vendors, create another list of lists: “advantages, disadvantages” (that’s a MECE list itself). The elements in each list should be something about the technology, particularly 1) the functional requirements such as key features that distinguish it from the competition and 2) nonfunctional requirements such as performance, availability, security, and maintainability (that’s a MECE list, too). Consider these systems also from the business perspective: ability to train the staff, popularity/access to future staff, ease of use, and so forth.
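If it helps to picture the “list of lists” structure, here is a minimal sketch of the recommendation as nested data; the vendor names, criteria, and entries are placeholders, not real evaluations:

    # A sketch of the "list of lists" structure for the database recommendation.
    # Vendor names, criteria, and entries are hypothetical placeholders.
    recommendation = {
        "criteria": ["scales past 10 TB", "supports SQL", "runs on our cloud"],  # 3-5 selection factors
        "options": {
            "Vendor A": {
                "advantages": {
                    "functional":    ["built-in full-text search"],
                    "nonfunctional": ["strong availability record"],
                },
                "disadvantages": {
                    "functional":    ["weak geospatial support"],
                    "nonfunctional": ["higher licensing cost"],
                },
                "business": ["easy to hire for", "staff already trained"],
                "rank": "Best",
            },
            # ... "Vendor B" (Better) and "Open Source C" (Good) follow the same shape
        },
    }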

Then, from the list of acceptable candidates, present them all, ranked as Good, Better, and Best. (The Good, Better, Best list is MECE too, because you wouldn’t improve its MECE-ness by adding a “Horrible” option: the category this list names is the acceptable options, which presumably does not include horrible, and therefore unusable, ones.)

The Good option might be the one that is acceptable to you, and is low cost but not optimal. The Best one might be the most desirable but highest cost, and so on.

Organizing your list this way shows your reasoning and makes an executive feel more confident that you understand the entire landscape and aren’t too biased. That makes your recommendation stronger.

Getting good at quickly checking if you are thinking in lists and then making sure they’re MECE has the pleasant side effect of helping build your powers of analysis. Think of MECE as a lens. Every time you make a list, immediately test if it is MECE. Use it as a heuristic device with your team: inspect your list with the team as you’re meeting, be sure to ask if the current list you’re working on is MECE, and then refine it. Your team may groan at first, but they will gradually start to see the value, and then they will not be able to imagine how they ever lived without it.

Make your work lists of lists, and make those lists MECE. Your recommendations have a better chance of getting accepted, supported, and executed on. And you will create more power for your organization and your team.

Logic Tree

If I had only one hour to solve a problem, I would spend up to two-thirds of that hour in attempting to define what the problem is.

Unnamed engineering professor at Yale, via William Markle

A Logic Tree is sometimes called an Issue Tree in the world of strategy consulting. The tree branches out as a decomposition of the problem you’re starting with. Collect possible root causes into groups, using the MECE technique, and then break them down into subgroups. As a technologist, analysis of this kind should be very straightforward for you.

The output of a Logic Tree exercise is a diagram. You can draw it in a mind mapping tool or presentation software. If you sketch on a whiteboard or paper for your initial draft with your team, transfer it into digital form so you can keep it in your growing Strategy Deck.

You will use Logic Trees in two ways. The first is for determining the problem. These are called Diagnostic Logic Trees. The second is for determining the solution set, called Solution Logic Trees. Either way, you’re following the same method with the same type of diagram as output.

Every strategy starts with a set of problems to be solved. The strategy itself is the set of solutions to those problems. A Logic Tree is the critical starting point for any strategy. It ensures you have defined the problem correctly and helps you enumerate the best strategic solutions.

If you are not very clear on the reason for making a strategy, it will be more general work, less relevant to any audience, and less executable. So if you’re asked to make a multiyear strategy, or a smaller local strategy, be certain you have alignment on what problem your strategy is meant to address before doing any other work. If you just got asked to make a strategy (as sometimes happens), be sure to ask your manager a few questions first about what problem she wants solved.

People at large organizations spend a lot of time doing hard work on poorly defined or unimportant problems. The result is useless at best, and a disaster at worst. To avoid this trap, you first must know what problem you are solving. There is no generic, cookie-cutter strategy in the world: there are frameworks to help you consider which set of actions is right for you. This one will help focus your work, make it go quicker, and make your resulting strategy more relevant and executable.

Diagnostic Logic Tree

Diagnostic Logic Trees attempt to determine the applicable subcategories of problems and a root cause. They answer the question of why the issue has occurred.

As you ask “why,” you are using your powers of deductive reasoning, working backward from a known current state.

To reiterate, you start any strategy by first clarifying what problem you need to solve. You are then ready to create the Diagnostic Logic Tree to determine why this problem or situation is occurring. 

Solution Logic Tree

Solution Logic Trees are a way of representing possible solutions or courses of action to address a problem. They answer the question of how to proceed. You create this kind of tree after making the Diagnostic Logic Tree.

Creating the Tree

To create the tree, you’ll first conduct a diagnostic analysis and then a solution-oriented analysis. Keep these as separate exercises: it is tempting to jump to solutions without taking the time to gain a clear understanding of the true problem.

Represent your thinking in two separate trees. You may be familiar with the Five Whys technique and the fishbone diagram (also called an Ishikawa diagram), both used for root-cause analysis. We’ll use a similar structure to create the trees.

Once you are presented with a problem, ask why that would be the case. You may quickly see several possible reasons. Write each reason as the second level of the fishbone diagram. Then ask in turn why that would be the case, and write the reasons at the next level. Do this using the MECE technique (see “MECE”). Repeat a total of five times to come up with a set of possible root causes. Now you can make a declarative statement in your Ghost Deck (see “Ghost Deck”) that this is the problem and this is the root cause. For more on Ghost Decks, see Chapter 9.

My colleague at Sabre, Justin Ricketts, likes to use the example in Figure 2-1 to help teams see how to approach a Five Whys analysis. It’s a memorable demonstration of how this process can lead to surprisingly simple solutions.

Figure 2-1. Example of Five Whys

So it turns out that the Jefferson Memorial was eroding because the lights that illuminate the statue at dusk attracted tiny midge flies, which attracted spiders, which attracted pigeons, whose droppings required cleaning crews to use harsh chemical processes to continuously keep the monument clean.

The solution to the problem was to simply turn on the lights that illuminate the statue one hour later—saving work crews, preserving the statue, saving on electricity and bulb replacement costs, and disrupting fewer tourists who come to observe the statue—and it cost nothing and took no time to implement. Justin’s illustration is a great way of showing how you can get to the root cause, but also how the solution may be easier than you’d think.
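If it helps to see the chain laid out, here is the same Five Whys sequence expressed as a simple data structure; the question wording is my paraphrase of the example, not a quotation from Figure 2-1:

    # The Jefferson Memorial chain, written out as a Five Whys sequence.
    five_whys = [
        ("Why is the stone eroding?",        "It is scrubbed with harsh chemicals very frequently."),
        ("Why is it scrubbed so frequently?", "Bird droppings accumulate on it every day."),
        ("Why are there so many birds?",      "The birds feed on the spiders around the memorial."),
        ("Why are there so many spiders?",    "The spiders feed on midges swarming at dusk."),
        ("Why are there so many midges?",     "The lights come on at dusk, exactly when midges swarm."),
    ]

    root_cause = five_whys[-1][1]
    print(root_cause)  # ...which suggests the cheap fix: turn the lights on an hour later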

After you have determined some problems your strategy can address, and then figured out their root causes, you can start to formulate plans for addressing them through a variety of lenses and with tools we’ll explore in subsequent chapters.

The other part of the strategy scoping is to consider solutions. To do this, start by imagining the ideal end state—that is, what the world looks like after the problem has been solved or no longer exists. Make that declarative statement in your Ghost Deck. Ask “how” that state could be realized by determining what the prior necessary condition would have to be to achieve it. Do this in five layers of depth, regressing closer to the current state that has the problem. This will help you plan the path forward.

Problems Versus Opportunities

Here we’ve focused, for the sake of brevity and convenience, on problem solving. But if you focus only on problems, the best you can do is maintain the status quo. Therefore, don’t forget to focus on opportunities for your strategy, things that represent gains to your customers and organization that they might not be aware they need.

This requires you to imagine a better world, absent any direct feedback that people are hurting without it. For example, no one in 2007 was walking around the streets feeling the pain of not having an app-filled smartphone, because such a thing didn’t exist yet. No one had apps, and no one was sad about it. The iPhone didn’t directly address a clear and present pain that consumers felt at not having apps in their lives. But the invention represented a gain for consumers, augmenting and improving their lives and giving them conveniences they hadn’t thought of or known they wanted.

Apple commonly employs this strategy of looking for customer gains, not just pains to solve. Take one of many other examples from the company: in 1998, no one was in despair or unable to be productive because their computers were only one color: boring beige. But once Apple made the iMac in five colors named after fruit, the product sold like hotcakes, and is actually responsible for saving the company, bringing it out of the financial crisis created in Steve Jobs’s absence. The strategy seems to have worked out OK for Apple.

Hypothesis

Let us employ the symbol 1, or unity, to represent the universe, and let us understand it as comprehending every conceivable class of objects whether actually existing or not, it being premised that the same individual may be found in more than one class, inasmuch as it may possess more than one quality in common with other individuals.

George Boole, The Mathematical Analysis of Logic

A hypothesis is a starting point for an investigation. When you hypothesize, you make a claim about why something might be the case, based on limited data, to offer an explanation or a path forward. You wouldn’t hypothesize about something you are already certain of. You may not yet have enough evidence even to convince yourself that it’s true. But making such a claim puts a stake in the ground that suggests a path for focused analysis. In the philosophy of logic, such a claim is often written as a conditional proposition of the basic form P → Q, meaning “if P, then Q.”

In your strategy work, there is no one single moment in which you declare a hypothesis. A hypothesis is a tool that gets worked into conversation, that gets used together with other tools as a helper. Unless you regularly keep company with strategy consultants, you won’t often hear people say, “My hypothesis is…” (but strategy folks love the phrase). You have to recognize that when your team asks, “Why do you think this happened?” or “What’s the reason for this?” you’re being asked to state a hypothesis.

Consultants at Bain and McKinsey are hired at exorbitant rates to answer hard questions for CEOs. They might have an engagement to recommend whether the company should sell a certain division and exit the market or whether it should acquire a company, or they might be asked why profits are down in Europe, or what strategy the company should use to market in China. These are big, difficult, strategic questions. If they were easy to answer, there would be no need for consultants.

These consultants will spend the next six weeks to six months answering these questions. They conduct research using every available channel, run workshops with key employees, and create recommendations. Their work product is a deck. These decks are usually very long and dense, containing loads of graphs and charts. This deck represents the answer to the key questions that started the engagement.

McKinsey consultants famously start engagements by quickly making a hypothesis, maybe after only a few days or hours on the job at a new company. Given that there is so much on the line, that they don’t work at the company, that they may not have prior experience in the client’s industry, and that they may hold an MBA but be otherwise straight out of school, this sounds preposterous. But it isn’t, and here’s why: they’re very good at forming hypotheses, using mental models similar to the ones we’ll discuss here.

The Five Questions

Hypothesis formulating is making a claim about the world: “this is that.” Or, “the reason for X is Y.” Or, “the way to make A better is to stop doing B and start doing C.” I suppose you can just start making statements along these lines and call it a hypothesis. But that’s not going to get anyone a strategy consulting job at McKinsey, and it’s not going to serve you as a building block for your strategy.

This pattern is implied by the hypothesizing that strategy consultants do, but is not their process. So you might see very different material on this pattern in other sources. What I describe here I’ve adapted and customized, based on my graduate studies in philosophy and on what I have put to work making successful strategies in my roles as CTO, CIO, and Chief Architect at a variety of companies.

Clinton Anderson was a Bain strategy consultant for 20 years. He once told me that his job in that time was about asking the right questions. The hard part is determining what the right questions are. Professor of Philosophy Alison Brown helped me see that in this context, hypothesizing (asking the right questions) tends to mean we start by asking these five key questions:

  1. What is the conjunct of propositions that describe the problem?

  2. What semantics characterize these propositions?

  3. What are the possible outcomes?

  4. What are the probabilities of each of these outcomes coming true?

  5. What “ease and impact” scoring values suggest the right strategy?

This is our framework for asking those questions well. Let’s take them in order.

1. The Conjunct of Propositions Describing the Problem

When it’s time to perform an analysis, which is most of the time, we start with the first of our five questions: What is the conjunct of propositions that describe the problem?

Twentieth-century philosopher Ludwig Wittgenstein was one of the leading thinkers in propositional logic. Propositions and propositional logic are well, but not definitively, explored in his book Tractatus Logico-Philosophicus, which I highly recommend. Ten years earlier, in 1911, Wittgenstein’s teacher Bertrand Russell wrote a paper titled “Le Réalisme Analytique” in which he describes propositions.  Here we’ll unpack a few simple tools from this field to aid in our analysis.

In the Tractatus, Wittgenstein writes that “a proposition asserts the existence of a state of affairs” (section 4.21). So when you make a proposition, you are making a claim about the world. You are characterizing something that should be able to be expressed as a truth value.

When you are presented with a problem, define it as a set of propositions. Each proposition is connected to the next by conjunction (the logical operator AND). Within each proposition, the variables, or constituent names, are also linked by logical connectives, so that you can deduce the truth value of the overall formula by determining the truth or falsity of each variable.

In modal logic, a proposition is true in accordance with its being borne out by the facts. So you must collect a few data points before making a proposition. Ultimately, your hypothesis will be a list of subhypotheses, each based on an insight, which in turn are each based on a series of data points (see Figure 2-2). As we frequently hear from machine learning teams: if you think your data is clean, you haven’t looked at it hard enough.

Figure 2-2. Hierarchy of data, insights, and hypotheses

Proposition P is a truth function of a set of constituent propositions if and only if fixing the truth value of every proposition in the set fixes the truth value of P. This is a cheerfully academic way of saying you have to be clear on what you are talking about: define your terms. People use “resource” to mean “compute power” or “human programmer.” When you say “customer,” do you mean the franchisor you are selling to or their customer? Your definition of “system” is likely too slippery to be talked about. So again, the simple solution is to define your terms.
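As a minimal sketch of what a conjunct of propositions looks like once terms are defined, consider the following; the propositions, definitions, and truth values are hypothetical:

    # Hypothetical propositions describing a problem, each with a truth value
    # determined from data (not assumed). Terms like "customer" and "resource"
    # are defined in the comments, per the advice above.
    propositions = {
        "European revenue declined two quarters in a row":  True,
        "Churn is concentrated in franchise customers":     True,   # "customer" = the franchisor we sell to
        "Compute resources were reduced in the EU region":  False,  # "resource" = compute capacity, not staff
    }

    # The problem statement is the conjunction (AND) of its constituent propositions:
    # it holds only if every constituent proposition is true.
    problem_statement_holds = all(propositions.values())
    print(problem_statement_holds)  # False: one constituent fails, so revise the hypothesis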

Insights

An insight is when you mix your creative and intellectual labor with a set of data points to create a point of view resulting in a useful assertion. You “see into” an object of inquiry to reveal important characteristics about its nature. In regular conversation, this is required all the time—for instance, to understand the punchlines of jokes. An old Groucho Marx joke goes like this: “This morning I shot an elephant in my pajamas. How he got into my pajamas, I’ll never know.” Getting the joke requires us to see into the ambiguity of language, that the word “in” has multiple meanings. It can mean that Marx is wearing the pajamas, or that the elephant has found its way into Marx’s pajamas, which we don’t expect.

McKinsey publishes a set of its insights every year. This is a collection of conclusions and recommendations it’s reached based on its surveys and independent research (the data points). You can read one at https://mck.co/2MIY1Bs. That document represents a rich set of examples of what we’re talking about here.

Let’s take one example of forming an insight. The document states, “Culture is the most significant self-reported barrier to digital effectiveness.” Then it presents a chart containing the top 10 factors technology executives cite as preventing them from effectively executing their digital strategies. This is not an insight, because it mixes no thought with the survey McKinsey conducted. It’s just a representation of the raw findings—the data. Additional research from McKinsey indicates that several companies that have addressed their cultural problems head-on, by cutting down silos, have performed better and more quickly than competitors that have not. This is another data point.

These data points are combined to reveal the insights that companies that are more willing to take risks, more responsive to customers, and more connected across diverse functions do better in the market. These insights lead to the hypothesis that executives must not ignore this fact and must not wait for their cultures to change organically, but instead must foreground and emphasize this specific kind of culture change—cutting down silos—in order to succeed at their digital strategy. That’s making a claim, based on insights, based on data, and it recommends a course of action that is not obvious or intuitive. As you build your strategy, this is what you want to do.

Note that it’s a good idea to read and cite material like McKinsey Insights reports in your Strategy Deck appendix, as part of your data point collection work, to help you reach your own insights that lead to your strategy.

Sometimes, to the untrained eye, a mere tautology can appear as an insight. A tautology is something that is necessarily true, so as to be redundant. That’s not an insight. A tautology is an assertion—a proposition—that is true for every possible value of its constituent variables, so it’s not useful. Nineteenth-century German philosopher Hegel refers to them as “trivially true”: he’s basically saying that although “A = A” is true, it reduces to making no claim, so it shouldn’t be discussed like it matters—it’s trivial. Sometimes people (ourselves even) speak in redundant or circular terms when trying to define a problem. The statement “all bachelors are unmarried” is necessarily true as a proposition, but only because we have redundantly reworded the definition of bachelor: we have added nothing to our understanding. Watch out for tautologies as you perform analysis work in creating a hypothesis, or a Logic Tree (see “Logic Tree”), or in many other exercises in strategy creation where you need to form a real insight about the topic at hand.

2. The Semantics Characterizing These Propositions

Now let’s ask the second question in hypothesis formation: What are the semantics that characterize each of the propositions?

Here you are creating an interpretation, determining the discourse around each of these propositions. That’s because, again according to Wittgenstein, the “elementary proposition consists of names. It is a nexus, a concatenation of names” (Tractatus, section 4.22). But your interpretation must be clearly prescribed by a domain of discourse. To put it more simply, the word “play” means something different to a shortstop than it does to a theater goer than it does to a violinist than it does to a femme fatale in a film noir than it does to a toddler than it does to a deconstructionist philosopher. “Gradient” means something different to a data scientist than it does to a UI designer.

As you conduct your analysis, it’s powerful to realize that you are operating within a discourse, a patois, a learned and shared and, to some extent, private language. What are the terms you aren’t sure of? What are the terms someone else might not be sure about? Your work and the spheres of technology and architecture participate in what Wittgenstein called a language game. We use old words in new ways, and new words in old ways, and apply a word from one realm of life to another. Words have a preponderance of meaning.

This causes confusion, missed expectations, improper specifications, and incorrect application. It’s bad for software and organizations.

To sum up the point of this second of the five key questions: “stuff means stuff.” Being aware of the language games in which you and your teams are working is a great step toward being clear with your language.

This allows you to be clear on your definitions of each proposition, such that you can assign quantifiers and qualifiers with more rigor. You examine here what is believed, what is doubted, what is hoped for, and what has been invested in to determine the truth value of your proposition. For example:

  • Ask yourself how biases might be entering your work. Keep a data dictionary to act as a glossary of terms if necessary. Keep your language and terms clear and precise and accurate. A common mistake here is use of the word “platform.” People in tech say it so much that they think they have one just because they said it, but it often is improperly used as a synonym for “system” or “application.” A true platform offers APIs that allow customers to build something new of their own on top of it. Android is a platform. Alibaba is a platform. AWS is a platform. Salesforce is a platform. Your mission-critical system might be important to your customers, and wonderful, but if people can’t make new applications of their own on top of your system without talking to you, it’s not a platform.

  • Ask yourself what language is used that isn’t clear. I have heard product managers ask teams for a “concept model.” This was apparently a term of art brought over from a previous employer, and might be a great idea. But no one knew what it was. Is it the same as an information architecture? A set of use cases? A set of UX wireframes? Are we telling customers that we’re delivering an AI platform, but the data scientists think we’re doing a few machine learning algorithms in the background? That’s laudable, but different, and linguistic alignment turns out to be A Thing. You can’t deliver it if you don’t know what it is. Rooting out ambiguous terms will go a long way later.

3. Possible Outcomes

Our third question is, What are the possible outcomes?

Determining the possible outcomes of a decision or action is an act of imagination, and also of reasoning. You can brainstorm with your team to consider what the possible outcomes might be. Write them down into a MECE list. Keep this list around in a spreadsheet or something, because although you’ll soon get started by focusing on one path, that doesn’t invalidate others. You’ll want to use this list for further exploration.

Brainstorming is a useful activity when organized and timeboxed. It will give you a load of sticky notes that suggest good next steps. But it’s not going to draw the trajectory from here to a possible future in any sophisticated way, or help you hash through your hypothesis as a thought experiment before you go too far. For that, we’ll quickly review inductive and deductive reasoning.

Inductive reasoning finds a fact (a true proposition) and generalizes from there to create a new proposition about broader circumstances. You draw conclusions based on data. The data, as true facts, offer evidence that supports a conclusion. This is what we hope to do as a necessary first step in strategy construction, and it is quite useful in hypothesizing. But people fall into traps when they generalize here, and can draw incorrect conclusions. With inductive reasoning, the facts are certainly true, but the conclusion is only probably true. It cannot be certainly true. Insights are the product of inductive reasoning, which can add nuance and support to the claims you make within your strategy work if you show the probabilities. More importantly, you must be careful not to take as certainly true what is only maybe true. We see this frequently at business meetings, and we need to be able to identify when claims are being overstated so we can determine what other evidence we should collect, or take a different direction in the analysis.

Start by defining your terms and looking at the data you have, and labeling it properly. What relevant facts are there, what research can you do, what database queries can you make, what invoices can you find, what logs can you trace, who can you call up to get a report so you can start with some thread of data?

If you’re considering opportunities instead of problem solving, read McKinsey Insights reports, industry articles, technology trends books, business books, Harvard Business Review, O’Reilly books, MIT Sloan School of Management books, and your favorite websites, and talk with your colleagues to see what you might be able to take advantage of.

Either way, it’s a research problem, like in school. You’re like an investigator at the scene of the crime. You need a starting point that isn’t based on conjecture. You don’t need the whole picture, and probably can’t get it yet, if ever. So you start with something concrete to work with and take a shot at making a hypothesis quickly so you can start testing.

4. Probability of Each Outcome

The fourth question you ask in conducting an analysis regards figuring out how likely different possibilities are to occur: What are the probabilities of each of these outcomes coming true?

You don’t have to be super-specific, like “the probability of hypothesis A coming true is 76%.” If you feel you have enough real data and sophistication in your methods and a small enough problem set to make such a claim, knock yourself out. But I try not to talk that way. That’s because, in general, people whose full-time job it is to predict things are typically pretty bad at it. For instance, Kevin Warsh, former governor of the Federal Reserve System, recently stated at the AH&LA Forum in Virginia that the Fed accurately predicted 0 of 144 financial crises globally that resulted in a recession between 2005 and 2014. And that’s kind of all the Fed does. But you can roughly assign probabilities to each of those outcomes with some kind of traffic light to represent ranges of probability, such as High, Medium, or Low.

We might state our claims as “I hypothesize that our customers can increase their revenue by 40% if they use our machine learning product.” Or we might say, “I hypothesize that within five years mobile phones will represent only 10% of the market and therefore we should use our technology to create a wearables product.” Those are fine hypotheses, assuming we have done our homework and can show the data in a slide that helped us draw that conclusion.

But one trap here is a logical fallacy called false precision. If we were to ask anyone on the street what the temperature of a human body is, we would likely hear “98.6 degrees.” This is not as true as it suggests. The precision of this number, and the decimal place, gives the impression that it is a single number that is constant and never fluctuates within more than a tenth of a degree. Of course, human body temperatures regularly fluctuate, and depend on a great variety of factors, such that it’s more accurately stated as a range (it’s something like 97.5 to 99.5 degrees under normal conditions). Precise numbers make things that aren’t facts look like facts.

Executives don’t like having expectations set for them that aren’t met. We set them up for disappointment when we overstate things this way. We tend to produce numbers instead of ranges for estimates all the time. I suspect that’s because we are afraid as technologists to state that we don’t know something, since our whole careers are predicated not on how sociable and sporting and what snappy dressers we are, but on how smart we are. Train yourself to use ranges. Technologists commit the fallacy of false precision more than any other group of people I’ve seen—as much as 27.3% more.

We must not be misled by the traps of inductive reasoning. Just because we have seen something in the past does not mean it will continue.

Bertrand Russell famously and colorfully indicts inductive reasoning thusly: Imagine a turkey who is an inductive reasoner. He is fed without fail every morning of his life for years, and reasonably concludes this will continue to the point of never thinking of it, until Thanksgiving morning when his throat is cut. This is a good lesson for technology strategists and business executives alike.

And we should make another, more nuanced point. We can reason that a fair coin has a 50% probability of coming up heads when we flip it. However, upon the first flip coming up heads, we then start to assume that the next time it will come up tails. Thus has much money been lost at the roulette tables in Las Vegas.

Every flip is independent of the last. It is possible that we flip a coin 76 times, and that every time it comes up heads. Of course, the probability of flipping a fair coin 76 times and its coming up heads every time is 1.3 × 10⁻²³ (or roughly 1 in 76 sextillion). We do not expect this to happen. But if, that having happened, we were to place a bet that the next time it would come up tails, we would be seduced into thinking that the chances of it coming up heads yet again must be impossibly low. But there is no “yet again” to the fair coin. Even on this 77th flip, the probability of heads remains one in two.

This is debated delightfully in Tom Stoppard’s play Rosencrantz and Guildenstern Are Dead. Upon seeing the coin flipped heads 77 times in a row, one character remarks, “A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability.” (Fun fact: Rosencrantz and Guildenstern are two minor characters from Shakespeare’s play Hamlet.) So flipping with the same outcome 76 times in a row is an entirely different question, and a different probability, than this discrete flip after 76 previous flips that happened to come up heads. So there are two matters, not one.
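The arithmetic behind both points is easy to check with a couple of lines of Python:

    p_single_heads = 0.5          # each flip is independent of every flip before it
    p_76_heads_in_a_row = 0.5 ** 76

    print(p_76_heads_in_a_row)    # ~1.3e-23, i.e., roughly 1 in 76 sextillion
    print(p_single_heads)         # still 0.5 on the 77th flip, no matter what came before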

We must first understand the data without adding our assumptions, conjecture, explanations, filters, and biases to them, and make sure we’re clear on what we mean. Here’s an example, devious in its apparent innocence. Once when making a strategy for a company, my team needed to understand the number of customers we supported on the current hardware so we could help project costs for supporting more customers in the future, and use that as input to determine the cost differences if we migrated to the cloud. I asked the team how many customers were on the system. I was told it was about 30,000. That ballpark was good enough to start working, but as we needed to refine the business case, we thought eventually we should actually query the database. I was told it was 44,000—a difference of 47%! A short time later, I was given another number of 39,000 and then later, 34,000. This was a very straightforward question. We were all over the map. How could this be? It turned out that there were some guesses, and then some assumptions built into the queries people ran—in one case the DBA filtered by “active” customers (a perfectly reasonable assumption), which refined the query to throw out rows that hadn’t been updated “recently” (whatever that means). Starting with good data and only true facts, and refining what you name things so you’re clear about their status, is critical to increasing the probability of your inductions being true, relevant, useful, and important.
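Here is a minimal sketch of how silent filters change the answer; the records, dates, and thresholds are hypothetical, chosen only to reproduce the spread of numbers in the story:

    from datetime import date, timedelta

    # Hypothetical customer records: (id, status, last_updated). The point is only to show
    # how unstated filters silently change the answer to "how many customers do we have?"
    today = date(2019, 6, 1)
    customers = (
        [("c%05d" % i, "active",   today - timedelta(days=30))  for i in range(34_000)] +
        [("c%05d" % i, "active",   today - timedelta(days=400)) for i in range(34_000, 39_000)] +
        [("c%05d" % i, "inactive", today - timedelta(days=10))  for i in range(39_000, 44_000)]
    )

    print(len(customers))                                    # 44,000: every row
    print(sum(1 for _, s, _ in customers
              if s == "active"))                             # 39,000: "active" only
    print(sum(1 for _, s, d in customers
              if s == "active" and (today - d).days <= 90))  # 34,000: "active" and updated "recently"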

Bayesian probability

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nobel Prize winner Thomas Schelling

Schelling’s lesson for us is to not make assumptions too quickly regarding what we find unfamiliar in the data. Unless you start conducting a data science project on your own strategy project, the probabilities you assign to your hypotheses will be more like educated guesses. Those guesses should be as free as possible of assumptions based on what is unfamiliar to you. It may be new, or it may be new to you, but that fact alone is value-neutral in terms of what strategy should be pursued.

Here’s a little framework, as a set of general steps, for assigning better probabilities to your hypotheses.

Imagine that the president asks you if acquiring a certain technology company is the right thing to do, or you’re weighing in at a meeting about whether we should pursue customers in South America or Europe next, or the CIO asks you to recommend whether we should build or buy a key part of the technology offering, or the CTO asks if we should use an open source or commercial database at the heart of our next product, or your manager asks why this component keeps failing on a semiregular basis. In short, something has happened to cause these questions to be posed: there’s a new event requiring you to hypothesize a diagnostic explanation, offer an opportunity, or project an outcome. Let’s take a semi-Bayesian approach to the case:

  1. The first step here is recognizing that these are very difficult, open-ended questions, and that you are in fact being asked to hypothesize.

  2. Next, based on the event, quickly develop your first hypothesis, a judgment of something that you predict might be the case based on data and insights you draw from them.

  3. Then determine the probability that your hypothesis is correct, without succumbing to the fallacy of false precision.

  4. To do so, first ask: What is the prior probability? That’s X. It’s the probability you would have assigned to your hypothesis coming true before this new event occurred, under the current circumstances. This should help separate the distinctive and relevant aspects of the situation (the signal) from the noise.

  5. Now estimate the probability of this event occurring given that your hypothesis is true (Y)…

  6. …and given that it is false (Z).

  7. Assign your posterior probability—your revised estimate based on the fact of the event.

This technique is useful during troubleshooting, or when you’re creating Logic Trees for diagnostics.

This sounds like an unrealistically laborious process to undergo, but once you get used to it, you can do it roughly in your head in a minute or two. Instead of assigning a specific numeric probability, you can use ranges and just state High, Medium, or Low.
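For those who want to see the arithmetic behind steps 4 through 7, here is a minimal sketch of the update using Bayes’ rule; the scenario and the numbers are made up:

    def posterior(prior_x, p_event_if_true_y, p_event_if_false_z):
        """Revised probability that the hypothesis is true, given that the event occurred."""
        evidence = p_event_if_true_y * prior_x + p_event_if_false_z * (1 - prior_x)
        return (p_event_if_true_y * prior_x) / evidence

    # Made-up example: before the component started failing, you'd have put the odds of
    # "the new connection pool is misconfigured" at 20% (X). Failures like this occur
    # 80% of the time when that hypothesis is true (Y) and 10% of the time when it isn't (Z).
    print(round(posterior(0.20, 0.80, 0.10), 2))  # 0.67 -- roughly "Medium-High", not certainty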

In broader strategy discussions, I suppose you could use this as a model if the situation calls for it, but this is a level of detail that I rarely see applied. Once people know enough, and are talking enough about these things to be able to do this, they start to have a different kind of conversation, one in which no one would think to ask for this kind of precision.

This is the simpler and more useful formula for how to make decisions under high levels of uncertainty:

  1. Create and hold a variety of hypotheses in your head at once.

  2. Think about them probabilistically, using an informal application of the Bayesian method.

  3. Update them frequently as you get new information that might be more or less consistent with each.

Deductive reasoning

Deductive reasoning is the opposite of inductive reasoning: from a general principle, you move to a specific conclusion. It asks, “If we assume the premises are true, does the conclusion logically follow?”

Given a stated premise, it should be very easy to test the argument, using the basic rules of logic, to determine the validity, assumptions, and contradictions that are at work in the analysis.

Our job here is to be sure that the stated principle is one that is valid enough to cause us to act on it, and to determine the ways in which we must act. For example, sometimes enterprise architects publish a set of principles for the organization to follow. I’ve done this myself, following my TOGAF (The Open Group Architecture Framework) training many years ago. A popular principle of this kind is “Data is an asset.” The point of such principles is that architects can’t specify where the programmers should put every semicolon in their code, nor should they. The principles allow that when developers are left to their own devices to make a local judgment call, they can refer to the principles to help them decide how to create a particular module in accordance with the stated architectural values.

I’ve also heard this principle ridiculed as “meaningless” or “empty.” But if such a principle is not stated, developers on a team maintaining, say, a shopping service might not siphon off the shopping data to save for later, because they’ve written only the code necessary to fulfill customer shopping requests. In that case, the data scientists who follow them—hoping to exercise some machine learning algorithms for better classification, customer segmentation, or a recommender system—will be out of luck. If architects publish their premises, and teams can perform a bit of deductive reasoning to form a logical conclusion to direct them in solving a local concern, the organization will be more aligned and more agile.

5. Ease and Impact Scoring

The fifth and last question in our analysis framework takes up the set of possible outcomes, along with their probabilities, and assigns them a value in order to prioritize them. We ask: What “ease and impact” scoring suggests the right strategy?

We’ve done our homework, collected data, formed propositions as insights while recognizing the semantics at work, stated hypotheses, and assigned them probabilities, and now we’ve got a pretty sizable collection of possible stuff we could set the organization off to go execute. But we can’t do everything at once. So we must prioritize.

To prioritize the work, we’ll use a practical method.

Create a spreadsheet listing your hypotheses or other work items. Add two columns: one for ease of execution (how easy it would be to get that done) and one for impact (how much of a difference it would make to do it, how much positional advantage it would give you). Figure 2-3 presents an example.

Figure 2-3. Resulting scatterplot chart of scoring your proposals on ease of execution and impact/value

Use a spreadsheet program to automatically plot these items, then color four quadrants of equal area behind the plot, like so:

  • The top-right quadrant contains items that are relatively easy to do and have the greatest effect. Color this quadrant green. They should be prioritized first.

  • The bottom-left quadrant contains things that will be hard to do and make only a small advance. Color this quadrant red. They should be prioritized last.

  • The top-left quadrant contains things that are easiest to do but have a relatively small impact. Color this quadrant yellow. They should probably be prioritized second.

  • The bottom-right quadrant contains things that are hard or very time-consuming to get done, but are important to advancing your strategy. They don’t represent quick wins and aren’t the most important, so they should likely be prioritized in a third group.

This is only a guide, not a hard-and-fast rule. It serves only as input for you to make your final determinations on what to recommend doing in what order. So it may be that there are elements from the middling quadrants that you exchange in priority, doing one or two big-ticket items instead of several easier, small ones. This is a judgment call depending on your other competing priorities, team capacity, and strategic directives from executives. The resulting chart makes a great visual for your deck, to help substantiate that you considered many alternatives, took a data-driven approach, and made your recommendations from the many available options based on what made sense.
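Here is a minimal sketch of the scoring and quadrant assignment behind a chart like Figure 2-3; the items, scores, and midpoint are hypothetical:

    # Ease and impact scores run 1 (low) to 10 (high); the quadrant split is the midpoint of each axis.
    items = {
        "Automate deployment pipeline":  {"ease": 8, "impact": 9},
        "Rewrite legacy billing module": {"ease": 2, "impact": 8},
        "Rename internal wiki spaces":   {"ease": 9, "impact": 2},
        "Migrate reports to new BI tool": {"ease": 3, "impact": 3},
    }

    MID = 5.5  # midpoint of a 1-10 scale

    def quadrant(score):
        easy, big = score["ease"] > MID, score["impact"] > MID
        if easy and big:      return "green: do first"
        if easy and not big:  return "yellow: quick wins, do second"
        if not easy and big:  return "third group: big-ticket items"
        return "red: do last (or not at all)"

    for name, score in items.items():
        print(f"{name}: {quadrant(score)}")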

You can use a variation on this, replacing “Impact” with “Value” and “Ease” with “Effort.” While these are words a businessperson readily understands, I don’t like them as much because they are not both positive axes; they’re inverted from each other: you want more value for less effort. So “easy” is good, and “valuable/impactful” is good too. It’s kind of “six of one, half a dozen of the other,” as they say, but using familiar business terms can make the chart quicker and easier for your audience to understand, which is empathetic and therefore helpful rhetorically.

Signal and Noise

We can draw a line from the Ease and Impact scatterplot to a related idea: the 80/20 rule, sometimes called the Pareto rule. A common example of this rule is that 80% of your profits come from 20% of your customers. Using the 80/20 rule as a starting point gives us a different way to filter and sort our data, hypotheses, and strategic priorities. It’s an informal way of separating the signal from the noise:

Signal

Something that points to the true state of affairs, something that represents the stuff that matters.

Noise

Random patterns that might easily be mistaken for signal, or the sound produced by competing signals.

Business moves quickly, and we are frequently asked to make recommendations and estimates long before we feel comfortable that we have enough data, or enough understanding of the problem to do so properly. I’ve seen architects poring over data for many weeks on end, trying to ensure they’ve looked at every aspect of the problem before coming to any conclusion. They do endless prototyping and analysis for months just to determine that, yes, in fact the most popular deep learning library is the one they want to use. This doesn’t work for modern business. While it seems thorough, and is perhaps well intentioned, it’s bad for business and it’s unnecessary. You cannot read and try out everything, and everything isn’t important. As Larry Page stated, the only cost that matters is opportunity cost, so hone your intuition to make quick “good enough” conclusions, which you can carefully refine later.

There’s a discussion in Nate Silver’s wonderful book The Signal and the Noise that illustrates the point using poker. Silver discusses how “keeping the water level high” means that new players can level the playing field with very experienced, strong players by doing these things:

  • Learn the hands

  • Learn a rough idea of the odds

  • Fold your worst hands

  • Make a modest effort to consider what cards your opponents might hold

Doing only those things will substantially mitigate your losses. Because of the distribution of the cards, 80% of the time you’ll be making the same decision about your hand as the best poker players would, even though you’ve spent only 20% of the time they have spent learning the intricacies of the game.

Therefore, when wading through hypotheses to make your strategy recommendations, you can hope to make the same recommendations as the best strategists by using just a few of the most applicable patterns, identifying the most fundamental data points, and developing the hypotheses that promise the biggest impact. You might come up with 67 recommendations to fix the problem. What are the 10 best, based on high impact and ease of execution? You have a better chance of getting 10 important things done in a quicker time frame, with more clarity of vision for the teams, if you can prioritize well.

Come up with a few, or several, hypotheses quickly, and pick the one that looks most promising to investigate. Your first avenue might not be right, or it may be that multiple forces are at work and there’s not a clear, discrete, simple answer.

Perhaps the question posed to you as part of formulating your strategy is: How do we make higher-quality products with less defect leakage? You might hypothesize that you don’t have high-quality developers because you don’t pay a market rate. Alternatively, you might hypothesize that you have high-quality developers, but the code base was allowed to grow without a concomitant investment in test and deployment automation. Or perhaps you have a capable development staff and automation but don’t have domain expertise, or the product management team has consistently prioritized features over nonfunctional requirements. Or management says it cares about quality, but at the end of the day, everyone is bonused on hitting the date, and ultimately that prevails at a cost to stable, maintainable software. Brainstorming for a few candidate ideas goes quickly at a whiteboard. All of these hypotheses sound reasonable, and will quickly spring to mind. They suggest very different corrective strategies, and more than one of them may be contributing to the problem; there is no guarantee of a single most impactful root cause.

There is an obvious challenge with starting with a hypothesis so early in your engagement: you will, almost perforce, introduce a bias. That’s to be assumed, and it serves to give you a scope of work within which to begin gathering the data and understanding the relevant factors. Then you can stand back objectively and let the data speak for itself. Revise your hypothesis if the data does not support it, and follow a different path.

Your initial hypotheses may very well be wrong. That’s fine. This is about putting a stake in the ground to get a good place to start, and then coming up with more hypotheses. It’s about proving something right or wrong as quickly as possible so that you can move on.

Context

Taking things out of context is another common cause of faulty reasoning, which leads to faulty conclusions, which makes for bad strategies. Unfortunately, it’s all too common. We forget or forgo the context in which an executive or competitor made a declaration, or the context for a managerial decision to use this vendor instead of that, or the context in which an outage occurred or a message was sent.

Recording, and making transparent, how you arrived at a conclusion will help provide context to future readers of your strategy. Indeed, many of these patterns are tools to help you build, piece by piece, a proper set of propositions to arrive at the right strategic conclusions, and happily offer a transparent trail of how you got there.

Resist the temptation to wait until you have all the data before you start. You will never have all the data. There is no such thing as “all the data.” The universe is an infinite conjunct of propositions. Therefore, you must necessarily draw a line around some set of propositions that you collect together in strong relation. Then be bold and make a claim. Ask smart people you trust who aren’t sycophants to argue the hypothesis.

Eventually a hypothesis will need to be tested by action. Let the impact of being wrong determine how much analysis you do before taking that action—to a point. Once you start building on your hypothesis by creating the execution plan, you will be able to tell if you’re in the right ballpark.

Once you’re in the ballpark, you need to perform more data gathering. This means conducting research within your company and on the web, reading industry reports, and finding anything you can to help you determine whether your hypothesis is true (in the case of diagnosing problems) or probable (in the case of imagining opportunities).

This is a simple technique, but starting with it early in your strategy engagement will help align your subsequent strategic technology choices with the business.

Objects and Relations

As Wittgenstein shows us in the Tractatus, the world is all that is the case. It is a collection of propositions, an infinite conjunct of lists-of-lists of objects, their attributes, and their relations to one another.

For our purposes here, let’s call an object anything that is a possible focus of inquiry. It’s something that we can call discrete, such that we could refer to it directly, as a sign, like a child pointing at a ball and uttering “ball!” (Yes, that’s a very problematic statement in semiotics, the philosophical study of signs, but I won’t fascinate you with the reasons here.)

When you conduct an analysis, determine what the objects are, how they are compositions of other objects, and where objects are finally atomic (no longer usefully divisible for your purposes).

Determine what their necessary relation is to other objects: that is, this object exists if and only if another object does.

What is a necessary but not sufficient condition? To get a job, it is a requirement (necessary) that you apply for it, but that alone is not sufficient, as you must also interview and get accepted.

What are the contingent relations? This object exists if and only if a given relation or attribute continues to exist.
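To make the necessary-versus-sufficient distinction concrete, here is a toy sketch of my own in Python, built on the job-application example above. The candidate records and field names are invented.

    # Toy illustration of "necessary but not sufficient"; the records are invented.
    candidates = [
        {"name": "Avery", "applied": True,  "passed_interview": True},
        {"name": "Blake", "applied": True,  "passed_interview": False},
        {"name": "Casey", "applied": False, "passed_interview": False},
    ]

    for c in candidates:
        # Applying is necessary (nobody is hired without it) but not sufficient
        # (you must also pass the interview).
        hired = c["applied"] and c["passed_interview"]
        print(c["name"], "hired:", hired)

    # Everyone who is hired has applied (necessity holds), but not everyone who
    # applied is hired (sufficiency fails).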

For our purposes, there are different kinds of relations in the world, but surprisingly few. Let’s review them, as shown in Figure 2-4, so that when we’re conducting research or creating our strategy, we can sort large volumes of data more quickly and reliably.

Figure 2-4. Kinds of relations, on a spectrum of interestingness

Identity

Identity says, this thing is that thing, in every particle: A = A. This kind of relation is a tautology and represents a thing in itself, in a vacuum. It’s not interesting.

Let me take that back, just slightly. It’s interesting in only one way. But it’s a doozie. As astonishing as it may sound, companies tend not to know who they are. By which I mean, they don’t have a clear sense of their own identity. Not everyone in the organization has a rock-solid shared understanding of why the company exists, who its customers and partners and competitors are, or how it makes money. If I had not seen this many, many times in my career I wouldn’t remark on it, because it sounds absurd. The lesson is to be sure that you do know the answer to these questions, so it can inform your strategy. It may surprise you.

Equality

Equality means the complete description of this differs in no case from the complete description of that: A = B. This kind of relation is important if you can show that two seemingly different objects reduce to the same thing, or two seemingly different courses of action actually reduce to the same effect. This is fine for informal analyses. Saying “this is that” is the stuff of metaphors and poetry. That’s actually the definition of a metaphor, because it’s certain that “this” is, by definition, this, and not that, upon any inspection. It turns out to be impossibly problematic, even for the smartest people in history, to properly find a good referent for “this,” or “here.” Nonetheless, such statements abound in conference room meetings, and can mislead us if we don’t recognize them as the poetry of business that they are.

Association

Associations represent the state in which two distinct things offer some kind of interaction that changes them. Things are now a bit more interesting. Associations can be directional: one-way or two-way. Determining nonobvious associations is necessary, but not sufficient, to doing great strategy work.

Predicate

The term “predicate” is somewhat overloaded. In logic, a predicate is a property or attribute of something, which is different from its use in grammar, where it refers to an affirming verb statement about a subject (a related idea). A predicate is an expression asserting some state of affairs. It represents something you can say about an object, something descriptive about its attributes.

In grammar and logic we can say “is a cat” is an expression. It is not a complete sentence. Likewise, “has a longer tail than” is a more complex predicate, as it presumes two variable values (the subject and the object).

The complex predicate is a statement of a relational property, an assertion that “cat” is a be-able thing and existence (“is”) is a thing (sort of). Alone, it is not true or false. To determine whether a predicate is true or false, you must fill in the missing referent (the who or what that is being referred to). To say, “Mister Boy is a cat,” we now have an assertion that we can test the validity of.

We write ∃x to mean “there exists some x.” This is a claim that x is something capable of bearing the property of existence, which means both less and more than we tend to think. Let C stand for the predicate “is a cat” and let m stand for Mister Boy. We can then write ∃x(Cx ∧ x = m) to mean “there exists some x such that x is a cat and x is Mister Boy.” In other words: Mister Boy is a cat.

This may sound like I’m being overly complicated about obvious stuff, like who’s this guy who doesn’t know if he has a cat or not? Well, things may be in certain sets, or in multiple sets, or in no sets. Sets may be empty (have no members). Sets exist within a discourse that mediates the objects, the relations, and the sets they’re part of.

Consider a proposition involving “everyone.” We say “everyone,” but we don’t mean it. Do we mean all the people alive in the universe right now? Do we include Sophia, the robot citizen of Saudi Arabia? Everyone who has ever lived, is now alive, and will be alive in the future? That’s the MECE set translation of “everyone.” So I think we don’t usually mean that. We mean something more modest like “these six analysts at the customer site in this one room who used this one feature one day, but there might be more or fewer of them even now as we speak, and they may have put someone else into some of those roles by next week.”

The ∀ symbol means All The Things, everything. It means that within a domain of discourse any (all) of the members can be substituted for a variable—something is universally true within the universe of the domain.

Let P stand for the predicate “is in New York City.”

The predicate logic expression ∀x Px then translates to “everything is in New York City.” One challenge in everyday life as we sit around making great technology strategies is that this is a valid statement—not because it’s the case that everything is in New York City (contrary to what New Yorkers may think), but because it’s a properly constructed statement within predicate logic.

To help with this, predicate logic conveniently provides us the idea of the domain of discourse. Think of a domain as a set.

We’ll refer to your set (the universe you’re demarcating, your domain of discourse) as S. S has only one member (in set theory, and thereby in computing, we call this a singleton). And that member is the Empire State Building.

The statement ∀x Px, evaluated over the domain S, could then be translated as “everything in S is in New York City.” Because the entire universe of members here is just the Empire State Building, this reduces to “everything is in New York City,” which sounds wrong but is not only valid (properly constructed) but sound: it’s actually true. That is, it’s true in this case because of the demarcation of the domain of discourse; the entirety of “everything” here consists of only one thing, the Empire State Building. It’s also the precise equivalent of saying “The Empire State Building is in New York City,” which is nice, because it’s rather exhausting talking the other way.
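Here is a minimal sketch of my own, in Python, of how narrowing the domain changes the truth of the same universal claim. The predicate’s membership list and the domains are invented for illustration.

    # The same universal claim flips from false to true as the domain shrinks.
    def is_in_nyc(thing: str) -> bool:
        # A toy predicate P; the membership list is invented.
        return thing in {"Empire State Building", "Brooklyn Bridge"}

    broad_domain = {"Empire State Building", "Golden Gate Bridge", "Eiffel Tower"}
    S = {"Empire State Building"}  # the singleton domain from the text

    print(all(is_in_nyc(x) for x in broad_domain))  # False: not everything is in NYC
    print(all(is_in_nyc(x) for x in S))             # True: over S, "everything is in NYC" holds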

The lesson for both the architect and the strategist is to ask: we have this domain today, but can we create a competitive advantage by establishing an outpost in an adjacent domain? If our domain is “hotels,” an adjacent domain might be “vacation rentals,” since they both have to do with travel accommodations. If our domain is publishing books, an adjacent domain might be streaming instructional videos, since they both have to do with ways of teaching. A book and a video may ostensibly have very similar content, but use very different means of production, have different audiences, and use different distribution channels. They might be one thing and they might be two separate things, depending on your purposes and where the business is going, can go, and should go.

So the architect must look at these two ideas, determine the relevant questions about what members are in each set, and check how much overlap there is. That helps you determine whether you’re really talking about one thing or two things. This is important in system decomposition. The mistake I see architects make a lot, and the reason I belabor all this here, is that they don’t start with this prior step: they make all manner of assumptions about the demarcation of the domain, don’t look at the propositions, and don’t examine the language. So they won’t decompose the system properly or in a strategic way. But starting with this prior step of determining what the domain really is, what the sets are, what each set’s members are, and what the discourse is will have wonderful ramifications for the way you design the system, how extensible and performant it is, and what business strategies it enables or curtails.

Remember the Five Questions. The second question, regarding the semantics around a proposition, shows us that we are in a domain. In short: when people say “everything,” they never mean everything.

To help clarify, push people to make absolute statements. If someone says, “This thing is what happens,” then you can take them at their predicate logical meaning and ask, “Is that true for everyone, alive and dead, always, and in all cases, across time and all eternity?” Then they say “no.” And you say, “Well, that’s what you said.” And they reply, “I’m so grateful to be corrected by you; that’s really charming. What I meant was, that’s what Sally does on Tuesdays. If it’s not raining.”

Examine your objects and their relations. Make lists of their predicates. Be careful to not overstate.

Deconstructing this a bit will help us adhere to that lesson.

Predicates are incredibly important in analysis, and are the building blocks of predicate logic and propositional logic. Predicates are the mines where most gold is hidden, and where the most miners meet their doom. In other words, the attributes of an object are more complicated than they appear, and if you get them wrong, the consequences for your analysis can be disproportionately problematic. It is deceptively difficult to list the predicates of an object.

Correlations

Two objects are correlated if a change in one usually accompanies a change in the other, or if the two objects are very frequently found together. This is much the stuff of machine learning, in which algorithms execute over massive data sets to determine the functions that describe the data in order to make predictions. Correlations are particularly fascinating, and must be carefully noted throughout your analysis.
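As a small, concrete illustration of my own (with invented numbers), you might compute a correlation coefficient in Python with NumPy:

    # Measuring correlation between two made-up series; assumes numpy is installed.
    import numpy as np

    # Hypothetical weekly figures: marketing spend (thousands) and sign-ups.
    spend = np.array([10, 12, 15, 18, 22, 25, 30])
    signups = np.array([110, 118, 131, 150, 165, 178, 205])

    r = np.corrcoef(spend, signups)[0, 1]
    print(f"Pearson correlation: {r:.2f}")  # near 1.0: the series move together

    # A high r says the two move together; it does not, by itself, establish
    # that spend causes sign-ups (see the caution on causation below).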

Causation

The fact of this state of affairs necessarily and unequivocally causes some next state to occur. It seems obvious to suggest that if you hit a ball with a cricket bat, you caused it to sail into the air. Fair enough. In simple, direct, physical relations, it’s harmless to assign causations to things, unless you’re a quantum physicist. Familiar causation of this kind affords wonderful things, like ball games and rocket ships and being able to perform crucial acts like drinking coffee. It’s important. But in the business world, as in the unruly sphere of human behavior, assigning causations is dangerous. It is almost never the case that there is a simple, easily explained line directly creating a new circumstance. Sigmund Freud calls things that have a preponderance of valid-sounding causes “over-determined.” That is, there are too many things operating at once, all contributing in some way to producing this state of affairs to really reasonably say, “This caused that.” Or worse, “This causes that” (present tense), as if it’s a rule that it always happens that way. Things tend to be more nuanced, contingent, more correlated, more variously associated in a complex business and technology world than straight causation allows for. Causes tend to be a panoply of reasons, with various prior causes, operating at various intensities in various circumstances in varying frequency. If you can identify all these vectors, you might be able to find a cause. That requires careful work and a lot of description. So if you can find a true causation in your analysis, more power to you; that’s fantastic: it will make your job much easier. But, you know, good luck with that. And don’t blow too much time on it. Do just enough to make a useful claim without overstating, overreaching, and overestimating probabilities.

Strategic Analysis as Machine Learning

In its most basic sense, the process of machine learning (ML) has roughly the same basic construction as our analysis process as presented here.  It involves hypothesizing, finding a model, and casting probabilities, much like the work of strategy consultants. Of course, as a relative of data science, it follows more or less the scientific method. Though this doesn’t extend our pattern set, I thought I would draw a connection conceptually, because this connection makes the world feel richer and more delightful.

In the popular imagination, perhaps for grammatical reasons, people tend to think of “machine learning” as the machine itself learning what to do, such as what chess move to make next. But what the machine is learning is actually a function: which function best explains the data. A machine learning job is one that, given a mass of data, frames the data in the context of a hypothetical function (f) that would explain it; that hypothetical function is the thing the machine learning algorithm tries to figure out. In the simplest terms: given the data as input, the learned function predicts the output, along with how probable it is that the prediction is accurate. Stated as a function, that’s:

Output = f(Input)

The job of machine learning is to determine this equation:

Y = f(x)

...where x is the input data, f is the function or model that can draw correlations and fit the data (such as the function that can draw a line through data points on a plot), and Y is the label, the predicted value the model produces. Machine learning asks: what is the right function f to give you label Y?

The process goes like this:

  1. Determine your hypothesis, your question, the label you want to find.

  2. Determine the data sources that can provide you a meaningful, relevant answer or context, using internal and external sources. Prepare and clean the data and impute missing values.

  3. Determine the right model. In ML we ask, would this work best with linear regression, a random forest, or another model? Usually an ensemble of methods can produce the best results.

  4. Fit the model.

  5. Predict.

In ML, fitting the model means finding the algorithm that draws a line through the data points, the statistical function that explains the data best such that it can properly label new data. For the strategist, it means something analogous: finding the right mental model, the right systems architecture, the right recommendations and decisions across people, processes, and technology that creates the best path through the available data to the future. This involves making some predictions about what the world will look like and how you’ll want to be positioned, and assigning probabilities as we’ve discussed.
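To make the steps concrete, here is a minimal sketch of my own using scikit-learn’s LinearRegression. The inputs, labels, and their business interpretations are invented; it simply shows the fit and predict steps from the list above.

    # Fit/predict sketch; assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Step 2: gather and prepare the data. X is the input, y the label to learn.
    X = np.array([[1], [2], [3], [4], [5]])   # e.g., years of investment in automation
    y = np.array([12, 19, 31, 42, 50])        # e.g., releases shipped per year

    # Steps 3 and 4: choose a model and fit it (learn the f in Y = f(x)).
    model = LinearRegression()
    model.fit(X, y)

    # Step 5: predict the label for new input.
    print(model.predict(np.array([[6]])))     # the model's estimate for year six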

My hope is that this parallel between our present work and our ML work is interesting to you and spurs some additional thoughts in your context. To me, the strategy process is analogous to the ML process; it works as a mental model for me, and I hope it does for you too.

Summary

The steps for forming a sound analysis include:

  1. Quickly gather data to form more than one hypothesis based on the question.

  2. Perform the initial analysis by asking the Five Questions, examining context, using inductive reasoning, and separating signal from noise.

  3. Narrow the areas of focus by prioritizing them, scored according to ease and impact.

  4. Assign probabilities to key propositions by using Bayesian probability.

  5. Gather more data, test, and revise your hypotheses and analysis as necessary.

Use the MECE technique (see “MECE”) and Logic Trees (see “Logic Tree”) throughout, as applicable.

This is an iterative process (depending again on the scope of your assignment and whether you have time for more than one shot at it).

This is also what I’ll call here a fractal process. With a fractal, each part has the same statistical character as the whole: the pattern is self-similar across any scale. It can be big, such that you’re applying it across a broad question or problem with a lot of research and formal expressions of each of the items, taking hours to complete. Or it can be small: using this process on just one small piece of one small piece of the puzzle. A fractal is generated by an equation that is eminently scalable. If you train yourself to think this way, using it as a default processing mechanism when people make claims to you, you’ll come to do it quickly, naturally, and informally in your head.

Remember, too, that when you are given a problem to solve, you should analyze the problem and the solution separately.

Whether the scope of the strategy you are building is small and local, or broad and far-reaching, these questions and the analysis patterns presented here will help you create a great strategy. Use them all as metapatterns throughout your work, like fractals: quickly and informally in your head for small problems, and with lots of evidence, time, care, discussion, and formal recording all along the way for big problems.

In the next chapter, we dive into the patterns for creating your strategy, starting with the broadest context: that of the outside world.
