Appendix B: More on Context-Space Mapping
To make sense of context-space mapping, we first need to go right back to first principles: the core concept of context-space.
Before any notion of order or structure, there is simply “the everything”: everything and nothing, all one, with that “everything-and-nothing” linked to everything-and-nothing else, in a place-that-is-no-place that incorporates within itself every possibility. It’s not “chaos” – it simply is. That’s always where we start.
There are all manner of names for this “active no-thing-ness.” Lao Tse called it “the Tao,” for example, while the ancient Greeks described it as “the Void.” In many of the diagrams here, I’ve used the term “reality” at the center, to remind us of this. For the more business-oriented purposes of enterprise-architecture, though, we’ll need to constrain the scope of that “the everything” somewhat, into a smaller subset of that reality, a narrower and more usable chunk of context. So let’s call that context-space – the holographic, bounded-yet-unbounded space that still contains every possibility within that chosen context (see Figure B-1).
Elsewhere in this book, I’ve split this context-space into problem-space – the context in which things happen – and solution-space – the space in which we decide what to do in relation to what’s happening. But ultimately there’s just the context: “the only true model of a system is the system itself.”
Yet to make sense of anything, we need to impose some kind of structure. One place to start would be to filter “the everything” in terms of its variability. Perceived-repeatability is one obvious example of a variability that we might find useful, but there are of course many others.
At the start, this gives us a finely graded spectrum of variability across the context (see Figure B-2).
Interestingly, though, most human sensory-perception does not work well with smooth gradations: it works much better with firmer boundaries. Hence most sensemaking will usually attempt to place some kind of ordered structure upon what may initially seem like unbounded chaos, to act as a filter that can help us to separate “signal” – that which we’re interested in – from “noise” – that which is not of apparent interest for now (see Figure B-3).
For example, when we look at the physical world of matter and material, we can see both of these processes in action, even within matter itself. There is a fairly smooth gradation of variability, primarily linked to temperature; yet there are also explicit “phase-boundaries” where the internal relationships of matter undergo fundamental changes. Significant amounts of energy (“latent heat”) can be absorbed or released in the “phase-transitions” between these modes. In effect, these will present as four distinct states of matter, traditionally described as Earth, Water, Air and Fire, for which the respective scientific terms are Solid, Liquid, Gas and Plasma (see Figure B-4).
When we look at the internal structures of matter within each of these states, we would typically describe the respective structural relationships as simple, complicated, complex, and chaotic, as phases or domains within the context-space of matter. This type of categorization along a single axis represents a simple first-order map of that context-space – hence context-space mapping.
We can do the same with almost any other gradation-type view into that overall context-space. Within that gradation, we should be able to identify, or choose, phase-boundaries that partition the context-space into distinct regions along that axis: for example, the nominal split of the visible-light spectrum into red, orange, yellow, green, blue, indigo, and violet.
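As an illustrative sketch of this partitioning idea – the wavelength cut-points below are nominal, approximate values chosen for the example, not figures from the text – choosing phase-boundaries along a smooth gradation reduces a continuous spectrum to a first-order map of named regions:

```python
from bisect import bisect_right

# Nominal phase-boundaries (nanometres) partitioning the visible-light
# spectrum into the seven classic bands; the exact cut-points are a choice,
# which is precisely the point about phase-boundaries being imposed, not given.
BOUNDARIES = [425, 450, 495, 570, 590, 620]  # ascending wavelength
BANDS = ["violet", "indigo", "blue", "green", "yellow", "orange", "red"]

def band_of(wavelength_nm: float) -> str:
    """First-order map: reduce a continuous gradation to a named region."""
    return BANDS[bisect_right(BOUNDARIES, wavelength_nm)]
```

Moving a boundary value simply re-draws the map over the same underlying gradation – the spectrum itself does not change.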
For enterprise-architecture, business-architecture and the like, maybe the most useful split is along an axis of repeatability, dividing the inherent uncertainty of context-space into regions that, in parallel with those states of matter, we could perhaps describe, respectively, as Simple, Complicated, Complex, and Chaotic.
There’s a risk at this point that some people might mistake this for the well-known Cynefin framework.
Given that risk of confusion, it’s important to be clear that what we’re describing here is not Cynefin. The two frameworks may look somewhat similar on the surface, but they differ in fundamental ways: they have different origins and a different theoretical base, they are used in significantly different ways, and they play different roles in the overall process of sensemaking and decision-making. To illustrate the difference, context-space mapping would describe a Cynefin-style frame (though not Cynefin’s methods!) as merely one instantiation of a generic class of context-space base-maps. In short: context-space mapping and Cynefin are fundamentally different – don’t mix them up!
For our purposes, though, there’s one more tweak that we need to make on terminology, in order to reduce possible misinterpretations in sensemaking. Simple and Complicated are both safe enough as terms: they both fit cleanly into a simple cause–effect world, so we can safely leave them unchanged. But as terms, Complex tends to be complex, and Chaotic often downright chaotic: there are way too many meanings for both of these. For example, “Chaotic” can mean anything from the colloquial chaos of “I have no idea what’s going on here,” to the mind-bending complexities of quantum-mechanics and chaos-science. To simplify these right down, we’ll swap “Complex” for the unambiguous term “Ambiguous,” and “Chaotic” for the term “Not-known” – because the latter is most often what we’re dealing with in everyday chaos. Overall, that gives us an acronym of SCAN: Simple, Complicated, Ambiguous, Not-known. We’ll also use the term “Reality” to denote any part of the context where we haven’t yet sorted anything out into usable categories.
With that terminology issue settled, we can also see:
How and why we’ve arrived at those particular categorizations
How and why to use any specific axis for such categorization
What the boundaries between the “domains” in the categorization will look like
How, why, and when the nominally Simple boundaries between categories may move around (Complicated), blur (Ambiguous) or fragment (Not-known)
This provides us with a layered, recursive richness that is largely absent in most other sensemaking-frameworks. It also provides a means to link right across every possible view into context-space, rather than solely a specific set of interventions that focus only on a single domain.
A first-order (single-axis) context-space map – such as the Simple-to-Chaotic “stack” in Figure B-4 – is not all that much use in practice. To make it more useful, we’ll often need to add other axes as filters for sensemaking, to enable relevant information to fall out of the respective comparison. And we make it more useful again by selecting a related set of axes to provide a multi-dimensional base-map upon which other filters can be placed.
Simple two-dimensional base-maps are the easiest to work with, for obvious reasons, but three or more dimensions are entirely feasible – the tetradian (see Figure 8-9 or 8-10) is one example of a four-dimensional frame compressed into three dimensions for use as a base-map.
To do this, we choose axes that force the domains of that original single-axis spectrum into relations of opposition and similarity with each other. For example, we could use “levels of abstraction” as the core axis, and overlay that with timescale in one direction and a “value-versus-truth” spectrum in the other. As shown in Figure B-5, that would give us the respective base-map and its “cross-map” of interpretive text-overlays:
Here Simple and Not-known are opposites in their interpretations, but similar in terms of timescale; Ambiguous and Not-known are similar in their means of interpretation, but opposites in terms of timescale; Simple and Ambiguous, and Complicated and Not-known, oppose each other on both axes; yet all domains are related in terms of layers of abstraction. The central region of Reality is essentially a reminder that all of the other domains are each just an abstraction from the real: they represent related yet arbitrary views into what is actually the total “hologram” of the context-space.
We then layer this recursively to apply to the nominal boundaries between each of the domains, so that these too may be considered to be fixed, movable, blurred, porous, fragmented, or transient. An axis based on a binary “true-or-false” categorization (in other words, a Simple boundary) will split the context-space into two domains along that axis. If both overlay-axes have Simple categorizations (or movable two-part categorizations, in Complicated style), the overall context-space is split into four regions – which aligns well with the “matter”-type categorization of Simple, Complicated, Complex, and Chaotic back in Figure B-4. Likewise, a smooth gradation along both axes pushes the context-space into four regions with Ambiguous or even chaotic Not-known boundaries between them.
Because of this, a four-region base-map is likely to be the most common and most useful two-dimensional type. Other layouts are possible, of course, and often useful: for example, a pair of tri-value axes would typically be used to lay out an eight- or nine-domain primary axis, such as the seven-color spectrum plus infra-red and ultra-violet.
The result is a consistent structure for base-maps that are both bounded and not-bounded, and that describe the whole of a context-space by structured views into that context-space that also acknowledge that, in reality, the context-space itself has no actual structure.
SCAN Cross-Map (Response-Patterns)
So far there are well over a hundred SCAN cross-maps for use in enterprise-architecture assessments. Figure B-6 presents one such example of a cross-map for sensemaking:
This shows three cross-maps on the SCAN base-frame:
Typical governance-methods in each of the domains (“algorithms,” etc.)
Keywords for typical tactics to use with each of the four main domains (“reciprocation,” etc.)
The core axes for SCAN dynamics (“Now!”, “certain,” etc.)
The governance-methods are straightforward: we would typically use rules for anything that’s currently “in” the Simple domain, algorithms for anything that’s “in” the Complicated domain, and so on. (The bit about “in” relates to SCAN dynamics, which we’ll see in a moment.)
The tactics-keywords are suggestions for how to tackle assessment of something that’s currently “in” the respective domain:
Rotation: Work our way through a list, such as a set of work-instructions or a checklist.
Regulation: Follow the rules and guidance of an existing standard or regulation.
Reciprocation: Look for balance across interactions within the overall system.
Resonance: Look for feedback-loops, damping-effects, and similar characteristic patterns from hard-systems theory.
Recursion: Look for situations where the same pattern repeats at different levels – particularly where these repetitions are nested inside each other.
Reflexion: Look for situations where a pattern repeats in “self-similar” form across multiple levels and/or different contexts.
Reframe: Use multiple perspectives to provide different views in a context, to elicit new information and insights (in essence, this is what we do in context-space mapping).
Rich-randomness: Use principles as “seed-anchors” to elicit insights from structured serendipity and suchlike.
The SCAN dynamics arise from the two axes of the framework, and how they interact:
Vertical-axis: Amount of time remaining before action must be taken, working backwards from the “NOW!”
Horizontal-axis: Level of uncertainty or uniqueness in the context, from “certain” to infinitely uncertain or unique
Transition from plan to action: Moving downward toward the “NOW!”, a relative and mobile marker that indicates the moment at which we run out of time to plan and must begin to shift into action – in essence, the point after which stopping to think will slow things down
Boundary of effective-certainty: Moving side to side, a relative and mobile marker to indicate the level of uncertainty that can be tackled within the current context
The point about “in” a domain is that things move around. For example, as a project moves from plan toward execution, we have to simplify it down to a form that we can execute – especially if we want it to execute fast. That means we’d be shifting from Complicated to Simple – which then means that Simple rules and governance would apply. If we have to go back to plan, everything slows down. If things go wrong in execution, we’ll find ourselves in the Not-known domain – and will need to follow the tactics for that domain to work our way back to Simple again. You’ll also see another example in action in the next cross-map, about idea-development. Things move around in sensemaking, as we build our understanding of those things, so the tactics we use at each moment need to change and move around with them.
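Those two mobile markers can be sketched as a minimal classifier – the numeric thresholds are arbitrary inputs, and the “guidelines”/“principles” governance labels for the Ambiguous and Not-known domains are illustrative assumptions (the text names only rules and algorithms explicitly):

```python
def scan_domain(time_to_act: float, uncertainty: float,
                now_boundary: float, certainty_boundary: float) -> str:
    """Classify a context by the two SCAN axes.

    time_to_act        -- time remaining before the "NOW!" (vertical axis)
    uncertainty        -- level of uncertainty or uniqueness (horizontal axis)
    now_boundary       -- mobile marker: below it, we must act, not plan
    certainty_boundary -- mobile marker of effective-certainty
    """
    if uncertainty <= certainty_boundary:
        return "Simple" if time_to_act <= now_boundary else "Complicated"
    return "Not-known" if time_to_act <= now_boundary else "Ambiguous"

# Governance-method per domain: rules/algorithms are from the text;
# the other two labels are illustrative placeholders.
GOVERNANCE = {"Simple": "rules", "Complicated": "algorithms",
              "Ambiguous": "guidelines", "Not-known": "principles"}
```

Because both markers are relative and mobile, the same item can re-classify from moment to moment – which is the “things move around” point: the domain is a property of the current assessment, not of the thing itself.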
Jungian-Type Base-Map (“Embodied Best-Practice”)
The next cross-map, in Figure B-7, draws on Jungian concepts to explore the sequence of idea-development:
This again illustrates the concept of “movement” within the categorized context-space. The idea first arises in and from the Not-known aspect of the context. At the start, it only has “inner value” – it exists only within that person, and it has not yet been tested. We then put the idea out for test: it becomes “real”-enough to describe to others, where it gains some level of “outer value,” though for the while it still remains Ambiguous. Once the idea has coalesced as a more concrete hypothesis, it slowly migrates toward the Complicated domain, becoming “outer truth,” a tested and usable theory. After further refinement, the theory becomes internalized as “inner truth,” a “law of science” or suchlike that underpins Simple, certain practice. Throughout all of this, there’s usually a lot of jostling back-and-forth between domains, as the levels of certainty and so on get settled out.
Ideally, the sequence should become a full loop, with Simple “law” feeding new ideas in the Not-known – but unfortunately, once an idea becomes “law,” it tends to get stuck there forever, preventing further innovation even when necessary. In such cases, innovation may become possible only via a transit through the region of “Reality,” shredding the categories and assumptions back to the raw basics – which in a business context can be disruptive in almost every possible sense…
Repeatability and “Truth”
This is a straightforward cross-map, in some ways taking us back to the core concepts of SCAN. It’s also one example where it might make more sense to show the domains as a vertical stack (see Figure B-4).
As shown in Figure B-8, a Simple world requires a close correlation between repeatability and truth – or, as in most of the sciences, something can be considered “true” only if it is repeatable. The further we move away from that correlation, the more we are forced to move into the other domains, and thence into the different tactics that each of those domains requires. A flurry of special-cases becomes Complicated; things that repeat only some of the time are definitely Ambiguous; and things that barely repeat at all, or ever, would certainly throw us into the Not-known. Most real-world contexts incorporate a mix of all of these: hence why we need to be able to identify which domain we’re dealing with at each moment, and switch our tactics immediately to match.
Marketing Versus Sales
Figure B-9 provides a useful cross-map that explores the perennial clash between Marketing and Sales. This draws on that dimension of timescale, from infinite to immediate, and on a less commonly used yet perhaps more important dimension: the concept of ownership, across a spectrum from possession – the default view in modern societies – to responsibility – which is actually more common in practice within organizations themselves.
The most Simple market is a monopoly. You alone set the rules, and others (particularly the “consumers”) have no choice but to buy according to your rules. In an all-too-literal sense, you possess that portion of the market, and hence also that portion of people’s lives. Much of it is about trying to control what people do, often in a very physical sense: people are treated as objects or subjects rather than as people, and there is no need for marketing at all.
Yet there are two very real dangers here. One is that a monopoly is an extreme abstraction of reality, and if reality moves away from alignment with that purported “truth,” the market can sometimes vanish overnight – as happens quite often on the Internet, for example. The other is that monopolies often breed deep-seated resentment, and if the monopoly cannot be bypassed, the resentment may explode elsewhere – as happened with British monopolies on salt, fabrics, and many other essential items in colonial India. We see much the same in lesser form with Microsoft’s current dominance of the operating-system and office-software markets – contexts where a “natural monopoly” will tend to occur simply because of the need for standardized information interchange. So while possession of a market may seem like the best possible strategy, the long-term consequences can be much riskier than they look.
Most conventional marketing sits firmly in the Complicated domain: crunch the numbers, map the trends, analyze every-which-way to find out how to make the market predictable. People tend to be regarded as units of information, a datapoint within the statistics, rather than as individual people; in fact, it’s very much about information, the conceptual dimension, and often also about trying to “control” what people think about a product or service. Trying to determine what people feel pushes the emphasis more toward the Ambiguous domain, while the common notion here of “taking control” of a market pushes the emphasis the other way, toward the Simple domain and all of its concomitant risks. Note also the cross-map with timescale: marketing may occur before or after but not at the exact moment of sale.
We move into a more Ambiguous domain of marketing in any emergent-market, or whenever we regard people more as people rather than as “consumers.” This type of market demands much more acceptance of human factors, of “wicked problems” and other real-world forms of complexity. Often there will also be a need for a weakening of the separation between “us” (“producers”) and “them” (“consumers”) – as can be seen in co-operatives, in some forms of crowdsourcing, and also in Agile-type development where the customer is also part of the development-team. The central theme is about relationships, which, although still “abstract” in terms of timescale, may in effect extend and push the boundary of this domain quite a long way toward real-time, into what would otherwise be Not-known space.
Yet by definition, Sales themselves will always reside in the Not-known domain, because every decision to buy or not-buy is in part a quantum-event, a “market-of-one.” The ultimate drivers for all such decisions are values-based, not “rational” or “truth”-based, which means – as just about any good salesperson would tell us – that the focus here is on emotion, on aspirations. And given that sales deals with real-time events, we’re somewhat forced into the principles-versus-rules spectrum there (see Figure B-6). Online sales will go toward the rule-based end of the spectrum, because that’s all that IT systems can handle; but real salespeople working face-to-face with real clients or customers (not “consumers” here!) will recognize key principles such as the need to listen – and also to know when to stop talking, so as to allow space for the decision to take place.
This cross-map also shows us that, by definition, the conventional approaches to sales and marketing are diametrically opposed by the nature of what they do and how they work. Yet we can bridge that gap somewhat either via the Ambiguous domain of emergent marketing, or else by the Simple-domain methods such as IT-based sales – though supported, again, by cross-links to Ambiguous-domain tactics to remove the risk and resentment around perceived monopolies. Which approach is “best” in each case will depend on the context – which this cross-map, and others, will again help us to identify.
Plan/Do/Check/Act
Figure B-10 shows another very useful cross-map that helps to clarify what’s actually happening within the PDCA improvement-cycle, and is also a good illustration of the dynamics in context-space mapping.
The cycle starts with Plan. This is primarily about information, and takes place before real-time contact, both of which tend to place it in the Complicated domain.
The aim of the Plan is to create rules that are Simple enough to apply in real-time when we Do the actual work. Although “work” can take many forms, it still needs to be made concrete in some way in the real world, which in effect places an emphasis on the physical dimension.
The work itself is not abstract: it happens in the real world, in real-time – in other words, it requires a transit through the inherent uncertainty of the undefined Reality domain.
On completion, we move back out of real-time to reflect on the difference between what we’d intended in Plan and Do; what actually happened during that transit through real-world Reality; and what we can do about it, in Check, and leading onward to Act. Learnings need to be both personal and collective, which places us on the “values” side of the “truth”/“value” spectrum (see Figure B-7). Long-term experience indicates that such learning takes place in a social or relational context, away from the action, through tactics such as After Action Reviews – all of which indicates that this part of the cycle situates in the Ambiguous domain.
The outcome of the Check phase is a set of guidelines for revised future action. We need to Act on those guidelines so as to embody the required changes in personal awareness and action, via a personal review of the underlying principles of the context and how they apply to that specific individual. To change how we work also requires that we face the personal challenges implied by any kind of change, so it’s also about personal aspirations and personal responsibility, in the sense of “response-ability” – the ability to choose appropriate responses to the context in real-time action. Ultimately all of this is unique to the individual, a “market-of-one” (see Figure B-9) – and hence places this phase of the PDCA cycle in the Not-known domain.
We then wait for an appropriate new real-world context – in other words, another transit through “Reality” – to start the cycle again with a new Plan.
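The cycle described above can be sketched as a simple phase-sequence – the phase-to-domain mapping follows the discussion in the preceding paragraphs, while representing the “Reality” transits as implicit steps between phases is my own framing:

```python
# Each PDCA phase and the SCAN domain it sits in, per the discussion above.
PHASE_DOMAIN = {"Plan": "Complicated", "Do": "Simple",
                "Check": "Ambiguous", "Act": "Not-known"}

# Successor of each phase; the Do->Check and Act->Plan steps both pass
# through the undefined "Reality" region (real-time action, new context).
NEXT = {"Plan": "Do", "Do": "Check", "Check": "Act", "Act": "Plan"}

def walk_cycle(start: str, steps: int) -> list[str]:
    """Trace the improvement-cycle for a given number of phase-steps."""
    phases, current = [start], start
    for _ in range(steps):
        current = NEXT[current]
        phases.append(current)
    return phases
```

The failure-loops discussed below are then just alternative successor-maps: for example, the production-obsessed shortcut rewires `NEXT["Do"]` straight back to `"Plan"`, dropping Check and Act entirely.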
This cycle is also echoed in the problem-solving method first proposed by the Hungarian mathematician George Pólya in his 1945 classic How To Solve It. The steps in his cycle are: Understand the problem; Devise a plan; Carry out the plan; Review and extend – which is the same as PDCA, but starting one step earlier, where PDCA’s “Act” includes a re-understanding of the problem.
There are several ways in which the PDCA cycle can fail. One is that an obsessive production-oriented context skews the path to take a shortcut through Reality, to give a tighter loop of Plan/Do/(Reality)/Plan (see Figure B-11). This cuts out Check and Act – which may seem unnecessary in the short-term, but is probably disastrous in the medium- to longer-term, since it assumes that the rules created by the plan will always apply. Not so much Simple as dangerously simplistic…
Another type of failure occurs when extreme self-doubt skews the other return-path back through Reality to give a probably even-tighter loop of Check/Act/(Reality)/Check (see Figure B-12). In effect, this is a kind of personalized version of “analysis-paralysis” – much may be learned, but nothing is actually done, because the loop never arrives at Do.
Yet another failure-loop is Plan/Do/(Reality)/Check/Plan, in which the review takes place, but pressure of work forces a return to the Plan phase before any actual change can be embedded in personal action via Act (see Figure B-13). This is perhaps the least effective form of “process-improvement,” but seems depressingly common in real-world business-practice.
ISO-9000 Core
The ISO-9000 core (vision, policy, procedure, work-instruction) provides a fairly straightforward cross-map to something that’s usually presented as a vertical stack, but actually makes more sense in a base-diagram layout (see Figure B-14).
A work-instruction defines Simple rules that apply to a specific context. In segment-model terms, it provides the row-4 or row-5 detail-level What, How, and Who that apply at a specific When-event, with Where usually defined in more generic terms (such as any location that uses a specific machine). The underlying Why is usually not specified.
When anything significant needs to change – for example, a new version of software, or a new machine – we move “upward” to the procedure to define new work-instructions for the changed context. This accepts that the world is more Complicated than can be described in simple rules, yet is still assumed to be predictable. The procedure specifies the Who in terms of responsibilities, and also far more of the underlying Why – the row-3 “logical” layer, in segment-model terms.
When the procedure’s guiding reasons and responsibilities need to change, we move upward again to policy. This provides guidance in a more Ambiguous world of modal-logic: in requirements-modeling terms, a more fluid “should” or “could” rather than the imperative “shall.” The policy describes the Why for dependent procedures – the row-2 “conceptual” layer, in segment-model terms (though “relational” might be a more accurate term here, as we’ll see from other cross-maps).
When the “world” of the context changes to the extent that the fundamental assumptions of current policy can no longer apply, we turn to vision. This is a core set of statements about principles and values that in effect define what the enterprise is. Because this vision should never change, it provides a stable anchor in any Not-known context – in segment-model terms, either the row-1 “contextual” or row-0 “enterprise” layers, or the “universals” segment (though again “aspirational” might be a more useful term here).
Note that in some ways this cross-map is the exact opposite of the “Repeatability and truth” cross-map earlier (see Figure B-8). There, the purported “universality” of a given “truth” increases as we move from Not-known to Simple, whereas here the values become more general and broader in scope as we move from Simple to Not-known.
Skill-Levels
This cross-map (see Figure B-15) links to a well-known and very useful heuristic on the amount of time that it takes to develop specific levels of skill.
The “trust in capability” spectrum here is actually an inverse of the amount of supervision needed both to compensate for lack of skill and to shield the person from the consequences of real-world complexity and chaos in that context.
A trainee can be “let loose” on Simple tasks after about ten hours or so of disciplined practice (a 1–2 day training-course).
An apprentice will begin to be able to tackle more Complicated tasks after about 100 hours of disciplined practice (2–4 weeks). However, most of those tasks will still need to be supervised, and insulated from real-world complexity.
A journeyman will begin to be able to tackle more Ambiguous tasks that include inherent uncertainties after some 1,000 hours of disciplined practice (six months full-time experience). Typical uncertainties include variability of materials, slippage of schedules, and, above all, people. Traditionally there is an intermediate point within the 1,000–10,000-hour range at which the person is expected to go out on their own with only minimal mentoring: in education, this is the completion of the bachelor’s degree, while in a traditional technical training, this is the point at which the apprentice becomes qualified as a literal “journeyman” or “day-paid worker.”
A journeyman should reach master level after about 10,000 hours (five years) of disciplined practice. This was the traditional point at which a journeyman was expected to produce a “master-piece” to demonstrate their literal “mastery” in handling the often-chaotic Not-known nature of the real-world. This period is also still the typical duration of a university education from freshman to completion of master’s degree.
Skill should continue to be developed thereafter, supported by the peer-group. Building-architects, for example, often come into their prime only in their fifties or later: it really does take that long to assimilate and embody all of the vast range of information and experiences that are needed to do the work well. Hence, there is yet another heuristic level of 100,000 hours or so (more than 50 years) – which is probably the amount of experience needed to cope with true Reality.
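The powers-of-ten heuristic above can be sketched as a simple lookup – the thresholds are the nominal ones given in the text, while the “novice” label below 10 hours and the “elder” label beyond 100,000 hours are my own placeholders:

```python
from bisect import bisect_right

# Hours of disciplined practice at which each level nominally begins:
# each level takes roughly ten times longer to reach than the last.
THRESHOLDS = [10, 100, 1_000, 10_000, 100_000]
LEVELS = ["novice", "trainee", "apprentice", "journeyman", "master", "elder"]

def skill_level(hours: float) -> str:
    """Map hours of disciplined practice to the heuristic skill-level."""
    return LEVELS[bisect_right(THRESHOLDS, hours)]
```

As with any first-order map, the point is the shape of the gradation and its phase-boundaries, not the precise numbers.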
Another skills cross-map (see Figure B-16) shows why this isn’t as straightforward as a simple linear stack. In the earlier stages of skills-development – from Simple to later Complicated – we in effect pretend that each context is predictable, controllable, reducible to some kind of ordered system. Up until the end of that stage, we only face predictable tame-problems, for which analysis alone is usually enough.
But at some point, late in the apprenticeship, there’s a crucial transition beyond which we need to be able to tackle wild-problems that may be unique, unrepeatable, or inherently uncertain, and that require synthesis as much as – if not more than – analysis. These are real-world challenges where we can learn to direct what happens, yet can never actually control it – a distinction that is sometimes subtle but extremely important, and that actually marks the transition to true skill. This is where we must make the move toward the skills of the journeyman, to tackle the Ambiguous, and onward to the deep-skills of the master, to tackle the unique and the Not-known.
As indicated in the cross-map above, there are fundamental differences in worldview on either side of that transition. To tackle the full complexities of Reality, analysis alone is not enough.
Automated Versus Manual Processes
This final cross-map on automation (see Figure B-17) is a logical corollary of the skills-maps above (see Figures B-15 and B-16). It also has cross-links with the asset-types set (see Figure 8-9). The cross-map itself is reasonably straightforward, but also has extremely important implications for systems-design.
Physical machines follow Simple rules – the “laws of physics” and the like. The Victorians in particular did brilliant work exploring what can be done with mechanical ingenuity – such as Babbage’s “difference engine,” or, earlier, Harrison’s chronometer. Yet, in the end, there are real limits to what can be done with unassisted machines.
Once we introduce real-time information-processing, algorithmic automation becomes possible, capable of handling a much more Complicated world. Yet here too there are real limits – most of which become all too evident when system-designers make the mistake of thinking that “complexity” is solely a synonym for “very complicated.”
As with skills-development, there is a crucial crossover-point at which we have to accept that the world is not entirely repeatable, and that it does include inherent uncertainties. One of the most important breakthroughs in IT-based systems here has been the shift to heuristic pattern-recognition – yet there are real dangers, especially in military robotics, that system-designers will delude themselves into thinking that this is as predictable as it is for the Complicated contexts. Instead, to work with the interweaving relational interdependencies of this Ambiguous domain – especially the real complexities of relations between real people – the best use of automation here is to provide decision-support for human decision-making.
In a true Not-known context, by definition there is little or nothing that a rule-based system can work with, since – again by definition – there are no perceivable cause–effect relationships, and hence no perceivable rules. The only viable option here is a true expert skills-based system, embodied primarily in a real person rather than solely an IT-based “system.” These would rely on principles and aspirations to guide real-time decision-making. One essential point is that there is no way to determine beforehand what any decision will be, nor exactly how it will be made. Although there are indeed a very small number of IT-based systems that operate in this kind of “world” – such as those based on “genetic-programming” concepts – we have no real certainty at the detail-level as to how they actually work!
Note that most – perhaps all – real-world contexts include a mix of all of these domains. This is why any real-world system must provide appropriate procedures for escalation and de-escalation: moving “upward” from Simple to Ambiguous to handle inherent-uncertainty via human skills, and “downward” from Ambiguous to Simple to make best use of the reliability and predictability of machines.
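The escalation/de-escalation idea can be sketched as stepping along the domain ordering – the ordering follows the text, while the function names and the one-step-at-a-time behaviour are illustrative assumptions:

```python
# SCAN domains ordered from most machine-friendly to most uncertain.
DOMAINS = ["Simple", "Complicated", "Ambiguous", "Not-known"]

def escalate(domain: str) -> str:
    """Hand the case one step 'upward', toward human skills and uncertainty."""
    i = DOMAINS.index(domain)
    return DOMAINS[min(i + 1, len(DOMAINS) - 1)]

def de_escalate(domain: str) -> str:
    """Hand the case one step 'downward', toward machine reliability."""
    i = DOMAINS.index(domain)
    return DOMAINS[max(i - 1, 0)]
```

The clamping at each end reflects the design point: past the Not-known end there is only human judgment, and past the Simple end only the raw reliability of the machine.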