16.4 Patterns in System Architecting Decisions

Going from a list of decisions and alternatives (such as the one presented in the previous section for the NEOSS example) to their formulation as an architecture optimization problem can be a hard task in itself. However, we have observed that a number of Patterns appear over and over when system architecture optimization problems are formulated. For example, the problem of partitioning the Saturn V architecture into stages is fundamentally the same as the instrument-packaging problem defined in the previous section. We essentially have to allocate elements to bins in both cases, and the optimization algorithm doesn’t care whether it is assigning remote sensing instruments to orbits or assigning delta-V requirements to stages. This section describes a set of Patterns that will help the system architect formulate system architecture optimization problems.

The idea of Patterns in design is often attributed to Christopher Alexander, an Austrian-born civil architect who in 1977 published A Pattern Language: Towns, Buildings, Construction. [5] This book describes a set of problems that appear recurrently when buildings are designed, and it also discusses the core of a solution that is known to work well for that problem in different situations. For example, Alexander describes the Pattern of “Light on two sides of every room” as follows:

When they have a choice, people will always gravitate to those rooms which have light on two sides, and leave the rooms which are only lit from one side unused and empty.

This is the descriptive part of the Pattern. Alexander provides a prescriptive part as well, which explains how to achieve this welcoming effect in different cases: small buildings (simply put four rooms, one in each corner of the house); medium-size buildings (wrinkle the edge, turn corners); and large buildings (convolute the edge further, or have shallow rooms with two windows side by side). The intent is thus to identify the Pattern when one comes across a similar problem, so that one can reuse the solution that is known to work, potentially saving resources in redesign.

This practical approach to architecture was adapted in other fields, especially in computer science, with the book Design Patterns: Elements of Reusable Object-Oriented Software. [6] Just as Alexander described a set of recurrent problems and solutions in civil architecture, the “Gang of Four” created a compendium of over twenty patterns in object-oriented programming, with solutions in pseudocode that are ready to be reused. These patterns include such abstract ideas as Singletons, Abstract Factories, Iterators, Interpreters, and Decorators. Most experienced object-oriented programmers are familiar with these concepts and use them daily to synthesize and communicate system architectures. [7]

Our discussion of Patterns goes beyond programmed decisions and optimization. Studying these Patterns will lead us to discuss typical architectural tradeoffs, as well as the main options for those tradeoffs, which we will call architectural “styles” (such as monolithic versus distributed architectures and channelized versus cross-strapped architectures). The Patterns effectively provide a common vocabulary for communicating and discussing trade-offs and corresponding styles. [8] Thus these Patterns are useful for formulating optimization problems but also, at a broader scale, as a framework for organizing architectural decisions.

From Programmed Decisions to Patterns

In general, optimization problems that result from programmed decisions in system architecture are instances of combinatorial optimization problems. Furthermore, most of them are similar to one or more “classical” optimization problems that appear over and over again in operations research, [9] such as the traveling salesman* and the knapsack problem shown in Appendix D.

We now introduce our six Patterns of programmed decisions in system architecture: DECISION-OPTION, ASSIGNING, PARTITIONING, PERMUTING, DOWN-SELECTING, and CONNECTING. Some of the Patterns that we will discuss in this section are variations or generalizations of classical problems. For instance, our DOWN-SELECTING Pattern looks a lot like the 0/1 version of the knapsack problem, with the added twist that we account for interactions between elements. Table 16.2 contains a list of the Patterns and a short description of each.

Table 16.2 | The six Patterns of architectural decisions

Pattern Description
DECISION-OPTION A group of decisions where each decision has its own discrete set of options
DOWN-SELECTING A group of binary decisions representing a subset from a set of candidate entities
ASSIGNING Given two different sets of entities, a group of decisions assigning each element from one set to any subset of entities from the other set
PARTITIONING A group of decisions representing a partitioning of a set of entities into subsets that are mutually exclusive and exhaustive
PERMUTING A group of decisions representing a one-to-one mapping between a set of entities and a set of positions
CONNECTING Given a set of entities that are nodes in a graph, a group of decisions representing the connections between those nodes

Each of these Patterns enforces a different underlying formulation of decisions that can be exploited to gain insight into the system architecture (such as the architecture styles presented later in this section) and also to solve the problem more efficiently by using more appropriate tools. As we introduce the Patterns, it will become apparent that most problems can be formulated using more than one Pattern. Thus the Patterns should be seen not as mutually exclusive, but rather as complementary. That having been said, one Pattern typically is more useful than the others, because it provides more insight and/or leads to more efficient optimization.

The DECISION-OPTION Pattern

The DECISION-OPTION Pattern appears when there is a set of decisions, each with its own discrete (and relatively small) independent set of options. More formally, given a set of n generic decisions D = {X1, X2, … , Xn}, where each decision Xi has its own discrete set of mi options Oi = {zi1, zi2, … , zimi}, an architecture in the DECISION-OPTION problem is given by an assignment of options to decisions: A = {Xi = zij ∈ Oi}, i = 1, … , n.

The DECISION-OPTION Pattern is the most direct representation of our Apollo example, in which each decision had a small number of available options (for example, EOR = {yes, no}, moonArrival = {orbit, direct}). An architecture, in the DECISION-OPTION problem, can be represented by an array of values, where each position in the array represents the value chosen for a particular decision. For example, in the Apollo example, the actual Apollo architecture can be represented by

A = {yes, orbit, yes, orbit, orbit, 3, 2, storable, storable}

A pictorial representation of a generic DECISION-OPTION problem is provided in Figure  16.3, which shows 3 different decisions with 3, 2, and 4 options, respectively. Two different architectures and their representations as arrays of integers are also shown.

Figure 16.3  Pictorial representation of the DECISION-OPTION Pattern. Two different DECISION-OPTION architectures are shown for a simple case with 3 decisions and 24 possible architectures.

DECISION-OPTION problems can be readily represented using decision trees, although in practice, the tree will often be too big to represent on a single sheet of paper. They can also be represented using morphological matrices, as shown in Table 16.3.

Table 16.3 | DECISION-OPTION problems can be represented using morphological matrices.

Decision/Option Option 1 Option 2 Option 3 Option 4
Decision 1 Option 1.1 Option 1.2 Option 1.3
Decision 2 Option 2.1 Option 2.2
Decision 3 Option 3.1 Option 3.2 Option 3.3 Option 3.4

The size of the tradespace of a generic DECISION-OPTION problem is simply given by the product of the number of options for each decision. Hence, in the example shown in Figure 16.3, there are 3 × 2 × 4 = 24 different architectures. Thus the number of architectures grows very quickly with the number of decisions and options for each decision.
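
The tradespace count and the array encoding of architectures can be sketched in a few lines of Python. The decision and option names below are placeholders mirroring Figure 16.3, not taken from any real problem:

```python
from itertools import product
from math import prod

# Hypothetical DECISION-OPTION problem matching Figure 16.3:
# 3 decisions with 3, 2, and 4 options, respectively.
decisions = {
    "decision1": ["1.1", "1.2", "1.3"],
    "decision2": ["2.1", "2.2"],
    "decision3": ["3.1", "3.2", "3.3", "3.4"],
}

# Tradespace size is the product of the option counts: 3 * 2 * 4 = 24.
tradespace_size = prod(len(opts) for opts in decisions.values())

# Full enumeration: an architecture picks exactly one option per decision.
architectures = [
    dict(zip(decisions, combo)) for combo in product(*decisions.values())
]
```

For a handful of decisions this full enumeration is feasible; the exponential growth noted above is exactly what makes it infeasible for larger problems.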

The set of options for each decision is defined as a discrete set. For problems with continuous values (such as in vehicle suspension, where the spring rate can take on any value), we would need either to provide a finite set of acceptable values (such as spring rates of 400, 500, and 600 lb/in.) or to define the boundaries and a discretizing step to construct this set.
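
As a minimal sketch of the second approach (boundaries plus a discretizing step), assuming evenly spaced values and reusing the spring-rate numbers above; the function name is illustrative:

```python
# Discretize a continuous design variable (e.g., spring rate in lb/in)
# into a finite option set, given bounds and a step size.
def discretize(lower, upper, step):
    """Return the acceptable values from lower to upper, inclusive."""
    count = int((upper - lower) / step) + 1
    return [lower + i * step for i in range(count)]

spring_rate_options = discretize(400, 600, 100)  # [400, 500, 600]
```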

The Apollo case is a clear instance of a “pure” DECISION-OPTION problem, because all decisions were given a discrete set of mutually exclusive alternatives. Other examples follow.

  • Example 1: Consider the architecture of an autonomous underwater vehicle (AUV; see Figure 16.4). A simple representation of the architecture consists of the following decisions: the configuration (“torpedo,” “blended wing body,” “hybrid,” “rectangular”); the ability to swim (yes, no); the ability to hover like a helicopter (yes, no); the navigation method (dead reckoning, underwater acoustic positioning system); the propulsion method (propeller-based, Kort nozzles, passive gliding); the type of motors (brushed, brushless); the power system (rechargeable batteries, fuel cells, solar); and different sensor decisions: sonar (yes, no), magnetometer (yes, no), thermistor (yes, no). These 10 decisions with 4, 2, 2, 2, 3, 2, 3, 2, 2, and 2 options, respectively, yield a total of 4,608 different AUV architectures before constraints are considered. [10]

  • Example 2: Providing communication services in remote or developing areas for commercial or military applications requires the use of dedicated assets such as satellites, drones, [11] or even balloons. Consider an aerial network of balloons hosting military communications payloads acting as communications relays between ground vehicles. [12] Major architectural decisions include deciding between two different types of radios (where the type of radio determines the range of communication) and between two balloon altitudes (where altitude drives the coverage of the network). These two decisions are made at each of 10 preselected sites. If we include the option not to have a balloon at a site, this yields (3 × 2) × (3 × 2) × ⋯ = 6^10 ≈ 60 million architectures. The reader may note that this formulation enumerates the two different altitudes for the trivial case where no balloon is assigned to the site, which is unnecessary. These options could be eliminated using a constraint. Note also that this formulation assumes that we can mix and match different types of radios for different sites, which would impose an interoperability requirement between them. In this example, decisions concerning different sites are identical and have the same set of options. We will see later that a more natural formulation of this problem includes a mix of a DECISION-OPTION Pattern and an ASSIGNING Pattern.
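
A sketch of the balloon-network tradespace from Example 2, with placeholder names for the radio types and altitudes, showing how a constraint can prune the redundant no-balloon combinations:

```python
from itertools import product

radios = ["radio_A", "radio_B", "none"]   # "none" = no balloon at the site
altitudes = ["low", "high"]
n_sites = 10

site_options = list(product(radios, altitudes))  # 3 * 2 = 6 options per site
naive_size = len(site_options) ** n_sites        # 6**10, about 60 million

# Constraint: altitude is meaningless when no balloon is deployed, so keep
# only one (no-balloon, altitude) combination per site.
valid = [(r, a) for (r, a) in site_options if not (r == "none" and a == "high")]
pruned_size = len(valid) ** n_sites              # 5**10, about 9.8 million
```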

Figure 16.4  The Bluefin 12 BOSS, the Nereus, and the SeaExplorer, three different AUV architectures. (Source: (a) National Oceanic and Atmospheric Administration (b) Image courtesy of AUVfest 2008: Partnership Runs Deep, Navy/NOAA, OceanExplorer.noaa.gov (c) Photo courtesy of ALSEAMAR)

The fundamental idea in DECISION-OPTION problems is that the options for each decision are mostly independent (except for constraints that may preclude some combinations) and, in general, differ across decisions. For example, the value “16GB” could never be an option for the decision “type of processor.” We will see that this independence distinguishes the DECISION-OPTION Pattern from all the other Patterns, where decisions can share options. Moreover, in the DECISION-OPTION problem, one must choose exactly one option for each decision (for example, EOR cannot be yes and no at the same time), whereas in the other Patterns, one must usually choose a combination of options. For these reasons, the DECISION-OPTION Pattern is best represented by a morphological matrix.

The DECISION-OPTION Pattern is the most intuitive and flexible of all Patterns. No sequence, pre-conditions, or relationships are implicitly assumed between decisions. If present, such features need to be modeled using constraints (such as the invalid mission-mode options from the Apollo example). DECISION-OPTION problems appear most often for Tasks 3 and 4 of Table 16.1, the specialization and characterization of function and form.

DECISION-OPTION is a very general Pattern that can be used to represent pretty much any programmed decision problem, which is why we start with it. However, we saw in the examples that some problems have an underlying structure (coupling between decisions) that makes them easier to formulate using other Patterns.

The DOWN-SELECTING Pattern

The DOWN-SELECTING Pattern arises when we have a set of candidate elements and need to choose a subset of them. More formally, given a set of elements U = {e1, e2, … , em}, an architecture in the DOWN-SELECTING Pattern is given by a subset of the elements in the set: A = S ⊆ U.

An architecture in the DOWN-SELECTING Pattern can be represented by a subset of elements or by a binary vector. For example, if we choose from our set of 8 candidate instruments {radiometer, altimeter, imager, sounder, lidar, GPS receiver, synthetic aperture radar (SAR), spectrometer} in the NEOSS example, two different architectures would be

A1 = {radiometer, altimeter, GPS receiver} = [1, 1, 0, 0, 0, 1, 0, 0]

A2 = {imager, sounder, SAR, spectrometer} = [0, 0, 1, 1, 0, 0, 1, 1]

The DOWN-SELECTING Pattern can be seen as a set of binary decisions, where each decision concerns whether we choose an element or not. The size of the architectural tradespace of a DOWN-SELECTING problem is thus simply given by 2^m, where m is the number of elements in the candidate set.
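
The subset and binary-vector representations are interchangeable, as a short sketch shows (instrument names follow the NEOSS candidate set above, with “GPS receiver” abbreviated to “GPS”):

```python
# DOWN-SELECTING: a subset of a candidate set, encoded as a binary vector.
CANDIDATES = ["radiometer", "altimeter", "imager", "sounder",
              "lidar", "GPS", "SAR", "spectrometer"]

def to_bits(subset):
    """Encode a chosen subset as a 0/1 vector over the candidate list."""
    return [1 if e in subset else 0 for e in CANDIDATES]

def to_subset(bits):
    """Decode a 0/1 vector back into the chosen subset."""
    return {e for e, b in zip(CANDIDATES, bits) if b}

tradespace_size = 2 ** len(CANDIDATES)  # 2^8 = 256 architectures
```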

A pictorial representation of the DOWN-SELECTING Pattern is provided in Figure 16.5, where two different subsets of the same set of 8 elements are shown. The subset on the left chooses 5 of the 8 elements, whereas the subset on the right chooses 3 of the 8 original elements. Selected elements are shown in black inside rectangles indicated by solid lines, whereas discarded elements are shown in gray italics inside rectangles indicated by dashed lines.

A = [0, 1, 0, 1, 1, 0, 1, 1] (left) and A = [1, 0, 1, 0, 0, 0, 0, 1] (right)

Figure 16.5  Pictorial representation of the DOWN-SELECTING Pattern. Two different DOWN-SELECTING architectures are shown for a simple case with 8 elements and 256 possible architectures.

The reader may wonder why we define a new Pattern for problems that can be expressed using the DECISION-OPTION Pattern. The reason is that there is an underlying structure in the DOWN-SELECTING Pattern that is absent in the DECISION-OPTION Pattern: Architectures are defined as a subset of candidate elements. Changing a decision in the DOWN-SELECTING Pattern always means adding an element to, or removing an element from, the selected subset, which has two effects: adding or subtracting some benefit, and adding or subtracting some cost. This underlying structure allows us to choose the best set of heuristics to solve the corresponding optimization problem. For example, single-point crossover, a heuristic commonly used in genetic algorithms, is usually an appropriate choice for DOWN-SELECTING problems, because it tends to keep good combinations of elements together, and the structure of the problem is smooth enough.
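
A minimal sketch of single-point crossover on two binary DOWN-SELECTING architectures; this is the generic operator, not tied to any particular genetic-algorithm library:

```python
import random

def single_point_crossover(parent_a, parent_b, point=None):
    """Cut two equal-length binary architectures at one point and swap tails.

    Contiguous runs of bits (combinations of elements that work well
    together) tend to survive the cut, which is why this operator suits
    DOWN-SELECTING problems.
    """
    assert len(parent_a) == len(parent_b)
    if point is None:
        point = random.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2
```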

The DOWN-SELECTING Pattern appears in situations where resources are limited. The typical example is a limited budget forcing an organization to choose between competing systems or projects. The instrument selection problem from our NEOSS example is an instance of the DOWN-SELECTING Pattern, because we have to choose a subset of the candidate instruments. A key feature of this problem is that the value of choosing an instrument depends on what other instruments are selected. In other words, there are synergies and conflicts between instruments. For example, if we choose the radar altimeter but not the microwave radiometer, then the accuracy of the radar altimetry measurement will not be as good because we will not be able to correct for the effect of humidity in the air. A few more examples follow.

  • Example 1: IBM’s Watson is a complex cognitive software system known for beating human contestants in the TV show Jeopardy. Watson uses multiple natural language processing (NLP) strategies to parse a question stated in English and searches an extremely large knowledge database to find the most likely correct answers. [13] The problem of selecting among candidate NLPs can be seen as a DOWN-SELECTING problem. Adding more strategies has benefits (for example, more diversification usually means more flexibility and the ability to handle more general problems), but it also has costs (development time, and computational resources for each strategy). Moreover, there might be redundancies, synergies, and interferences between some NLPs. For example, deep and shallow NLP strategies complement each other and can be combined in hybrid systems. [14]

  • Example 2: RapidEye is a constellation of five satellites that provide daily high-resolution optical imagery. In RapidEye’s first constellation, the satellites provide images in the red, green, blue, and near-infrared bands of the electromagnetic spectrum. For the next-generation constellation, RapidEye could presumably decide to incorporate more bands into the satellites, even in the microwave region of the spectrum, since more information (such as atmospheric temperature, chemical composition, or vegetation state) can be obtained from such multi-spectral measurements. Different bands have different capabilities and applications, and some of them are more redundant than others. For example, there are several bands to measure atmospheric concentration of carbon monoxide (a powerful pollutant), including 2.2 µm and 4.7 µm. If we choose one of these two, then having the other would arguably add less value than the first one. There are also synergies between different bands. For example, adding a band at 1.6 µm can increase the value of most other bands by providing an atmospheric correction for the presence of clouds.

The DOWN-SELECTING Pattern is the most natural Pattern for Task 6 of Table 16.1, goal selection, where a subset of goals or requirements must be chosen from a candidate set. It can also be useful in Task 3 of Table 16.1 (specialization of form or function) when several options can be chosen from a group of similar elements.

The DOWN-SELECTING Pattern is similar in nature to a classical optimization problem called the 0/1 integer knapsack problem, with an important caveat. In the standard formulation of the 0/1 integer knapsack problem, we are given a set of items, each of them with a given cost and benefit. The goal is to find the number of items of each type that maximizes benefit at a given cost.* In this classical formulation, the benefit and cost of the items are independent of the other items selected. In other words, there are no interactions between the elements. But in reality, the value of selecting a certain item will depend on the other items selected, because there are synergies, redundancies, and interferences between elements. These interactions are very important in DOWN-SELECTING problems because they can drive architectural decisions.

Consider the trivial example of a backpack and a set of available items that contains, among other things, three tubes of toothpaste of different brands (with benefits B1, B2, B3), a toothbrush (B4), a hot sandwich (B5), an ice-cold can of beer (B6), a towel (B7), and soap (B8). In a classical knapsack problem formulation, all Bi would be immutable, and thus the benefit of choosing all three tubes of toothpaste would be B1 + B2 + B3. In reality, the benefit of having the three tubes of toothpaste will arguably be much smaller than B1 + B2 + B3, as a consequence of redundancy between the items.

Similarly, in the classical formulation, the value of having the toothpaste does not depend on whether the toothbrush is selected or not. In reality, the value of having toothpaste without a toothbrush is much smaller, potentially zero. This is the effect of synergies, which, as we have noted before, is very important to capture.

Finally, one may argue that the value of choosing the ice-cold beer may slightly decrease if we pack the hot sandwich with it, because some of the heat in the sandwich will be transferred to the beer, ruining it. This is an example of negative interactions that we call interferences.

These ideas can be generalized to more complex systems. Our radar and radiometer instruments from the NEOSS example are highly synergistic elements, because the value that we get out of the combination is greater than the sum of their individual measurements. Conversely, the synthetic aperture radar and the lidar have negative interactions; both are high-energy instruments, and they are likely to have conflicting orbit requirements. Furthermore, there might be some redundancy between the radar altimeter and the lidar, since both can be used to do topographic measurements.

The importance of interactions between elements for DOWN-SELECTING problems is highlighted in Box 16.1.

Because of the presence of redundancies, synergies, and interferences between elements, DOWN-SELECTING problems are a lot harder to solve than classical knapsack problems, since the value of an element depends on all the other selected elements. One way of approaching the problem is to enumerate all possible subsets of selected elements. This can become quite tedious because it requires pre-computing 2^N values. More sophisticated models of these interactions allow explicit modeling of the nature of the interactions and traceability of the value. [16]
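
A toy version of such a value model, with invented benefit and interaction numbers loosely following the backpack discussion above, and a brute-force search over all 2^N subsets:

```python
from itertools import combinations

# Invented base benefits for a few backpack items.
base_benefit = {"toothpaste": 5, "toothbrush": 3, "sandwich": 6, "beer": 4}

# Positive entries are synergies; negative entries are interferences.
interaction = {
    frozenset({"toothpaste", "toothbrush"}): 4,   # synergy
    frozenset({"sandwich", "beer"}): -2,          # interference (heat)
}

def value(subset):
    """Sum of base benefits plus all pairwise interaction terms."""
    v = sum(base_benefit[e] for e in subset)
    for pair in combinations(sorted(subset), 2):
        v += interaction.get(frozenset(pair), 0)
    return v

# Brute force: enumerate all 2^N subsets and keep the most valuable one.
items = list(base_benefit)
best = max(
    (set(c) for r in range(len(items) + 1) for c in combinations(items, r)),
    key=value,
)
```

With only four items the 2^4 = 16 subsets are enumerated instantly; real DOWN-SELECTING problems need the more sophisticated models cited above.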

The ASSIGNING Pattern

The ASSIGNING Pattern arises when we have two sets of elements (which we will simply call the “left” set and the “right” set) and we need to assign elements of the left set to any number of elements of the right set. For example, in the NEOSS example, one might predefine a set of instruments (imager, radiometer, sounder, radar, lidar, GPS receiver) and a set of orbits (geostationary, sun-synchronous, very low Earth polar orbit) so that each instrument can be assigned to any subset of orbits (including all or none of the orbits). A possible architecture is shown in Figure 16.6, where the sounder and radiometer are assigned to the geostationary orbit; the imager, sounder, radiometer, and radar are assigned to the sun-synchronous orbit; the lidar and radar are assigned to the polar orbit; and the GPS receiver is not assigned to any orbit.

Figure 16.6  An example of the ASSIGNING Pattern for the NEOSS instrument-packaging problem.

A particular architecture in the ASSIGNING problem can be readily represented as an array of subsets. We can also construct the ASSIGNING Pattern as a DECISION-OPTION problem with only binary decisions (Do we assign element i to element j, yes or no?). Thus an architecture in the ASSIGNING problem can be represented as a binary matrix of size m × n, where m and n are the number of elements in the two sets.

Figure 16.7 provides an illustration of a generic ASSIGNING problem and illustrates two alternative architectures, with their corresponding representations as binary matrices.

A = [1 1 0 0 1; 0 0 1 0 0; 0 0 1 0 0] (left) and A = [1 1 0 0 1; 0 0 1 0 1; 1 0 1 1 0] (right)

Figure 16.7  Pictorial representation of the ASSIGNING Pattern. Two different ASSIGNING architectures are shown for a case with 3 elements in the “left” set and five elements in the “right” set, for a total of 32,768 possible architectures.

Given the formulation as a binary matrix, the size of the tradespace of a generic ASSIGNING problem is simply given by 2^mn, since there are 2^m possibilities for each decision, where m is the number of options (the elements of the “right” set) and n is the number of decisions (the elements of the “left” set). Hence, in the example of Figure 16.6, there are 2^(3×6) = 262,144 architectures. Note that the number of architectures grows exponentially with the product of the number of decisions and options.
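
The binary-matrix encoding and the 2^mn count can be sketched as follows, reconstructing the assignment that Figure 16.6 describes (orbit names abbreviated):

```python
# ASSIGNING encoded as a binary matrix: rows are "left" elements
# (instruments), columns are "right" elements (orbits).
instruments = ["imager", "radiometer", "sounder", "radar", "lidar", "GPS"]
orbits = ["GEO", "sun-synchronous", "polar"]

# The assignment described for Figure 16.6.
assignment = {
    "imager": {"sun-synchronous"},
    "radiometer": {"GEO", "sun-synchronous"},
    "sounder": {"GEO", "sun-synchronous"},
    "radar": {"sun-synchronous", "polar"},
    "lidar": {"polar"},
    "GPS": set(),                      # assigned to no orbit
}

matrix = [[1 if o in assignment[i] else 0 for o in orbits] for i in instruments]

# Tradespace size: 2^(m*n) possible binary matrices.
size = 2 ** (len(instruments) * len(orbits))   # 2^18 = 262,144
```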

The ASSIGNING Pattern is one of the most prominent ones in system architecture programmed decisions. Some examples follow.

  • Example 1: Every autonomous vehicle (be it an MQ-9 Reaper UAV, a DISH network communications satellite, a robotic vacuum cleaner, or the Google car) has a guidance, navigation, and control (GNC) subsystem that gathers information about the position and attitude of the system (navigation), decides where to go next (guidance), and changes its position and attitude to achieve the “go there” (control). We saw in Chapter 15 that a GNC system can be viewed as a set of sensors, computers, and actuators that can have connections between them. [17] These connections can be seen as two layers: sensors are connected to computers, and computers to actuators. Focusing on the first layer (sensors → computers), if we have a predefined set of sensors and a predefined set of computers, we have to make a decision (or many decisions) about how to connect them to each other. This is a clear instance of the ASSIGNING Pattern: There are two predefined sets of elements (sensors and computers), and each element of one set (sensor) can be connected or assigned to any number of elements of the other set (computers). The same is true for the computer-to-actuator part of the problem.

  • Example 2: Recall our military communications aerial network example from the DECISION-OPTION Pattern. The decisions concerning which balloon type to allocate to each site can be seen as an instance of the ASSIGNING Pattern, because each balloon type can be assigned to any subset of sites (including none or all of them). Note that this assumes that we can deploy multiple balloons at a site, which may or may not be a realistic assumption.

The ASSIGNING Pattern can be thought of as a special case of the DECISION-OPTION Pattern where a set of identical decisions share a common set of options, and each decision can be assigned to any subset of those options, including the empty set. For example, one can assign workers to tasks (any worker can be assigned to any number of a pool of common tasks), instruments to orbits (any instrument can be assigned to any number of orbits), subsystems to requirements (any subsystem can be assigned to satisfy any number of requirements, or vice versa), or processes to objects (any process can be assigned to any subset of existing objects). Moreover, the ASSIGNING Pattern can also be thought of as a set of DOWN-SELECTING decisions, each decision corresponding to the assignment of an element of the left set to any subset of elements of the right set, with the additional constraint that each element of the left set can be assigned to only one subset of elements of the right set.

However, the ASSIGNING problem has an underlying structure that is absent in both the DECISION-OPTION problem and the DOWN-SELECTING problem—the fact that we are assigning elements of one set to another. If we were to formulate an ASSIGNING problem as a set of DOWN-SELECTING problems, it would not be implicit in the formulation that the candidate sets of elements for all decisions are in fact the same set, in particular what we called the “right set.”

Recall that a problem does not inherently belong to any given Pattern, but rather can usually be expressed in several Patterns. It will often be the case that a problem is most naturally formulated in a particular Pattern. If we have a DECISION-OPTION problem where all decisions are identical and have the same candidate sets, then it is more appropriate (simpler, more informative, and more elegant) to formulate it as a single ASSIGNING problem. Moreover, formulating it as an ASSIGNING problem will allow us to use heuristics that take advantage of the structure of the Pattern, such as heuristics based on style or balance considerations.

The ASSIGNING Pattern often appears in Tasks 2 (function-to-form mapping) and 5 (connecting form and function) of Table 16.1. In function-to-form mapping, the ASSIGNING Pattern is essentially about choosing how coupled we want our architecture to be. Suh’s principle of functional independence proposes that each function should be accomplished by one piece of form, and each piece of form should perform only one function. [18] On the other hand, having a more coupled mapping of function to form, where one element of form performs several functions, can sometimes reduce the number of parts, weight, volume, and (ultimately) cost.

In the context of connectivity of form and function, the ASSIGNING Pattern is fundamentally about deciding how connected we want our architecture to be. In the NEOSS example, we assign instruments to orbits; every time we assign an instrument to an orbit, we add a “connection” in the architecture, which in this case takes the form of a copy of the instrument. In this example, connections (instruments in orbit) are costly. In the GNC example, we assign or connect sensors to computers (and computers to actuators); every time we assign one sensor to one computer, we add a connection in the architecture, which in this case can take the form of a cable, interfaces, and software to support the interfaces. In both cases, increasing the number of connections between the “left” set and the “right” set can improve system properties such as data throughput or reliability, but it comes at a price: increased complexity and cost.

Even though function-to-form mapping and the connectivity of form and function are fundamentally different problems, we see that they are decisions with similar features. In both cases, if we enforce that each element from the left set be assigned to at least one element of the right set (an additional constraint that is not present in the most general formulation of the pattern), we can define two “extreme” architectures at the boundaries of the architectural tradespace. These are a “channelized architecture,” where each “left” element is matched to exactly one “right” element (see Figure 16.8), and a “fully cross-strapped” architecture, where every “left” element is connected to every “right” element (see Figure 16.9).*

The Saturn V is a good example of a channelized architecture in a function-to-form mapping task, because the system was neatly decomposed into subsystems that performed a single function. An example of a channelized architecture in a connectivity task is the GNC system of the NASA X-38 Crew return vehicle, which has two fully independent redundant buses.

Element 1 connects to Element A. Element 2 connects to Element B. Element 3 connects to Element C.

Figure 16.8  Channelized style of architecture in the ASSIGNING Pattern.

An example of a fully cross-strapped architecture in mapping function to form in an organizational context is the idea of “Total Football” in soccer, in which all ten field players play both defensive and offensive roles. The Space Shuttle avionics system is an example of a fully cross-strapped architecture in a connectivity task, because all inertial measurement units are connected to all general-purpose computers, and all computers are connected to all rudder actuation systems.


Figure 16.9  Fully cross-strapped style of architecture in the ASSIGNING Pattern.

The channelized versus fully cross-strapped tradeoff applies both to function-to-form mapping and to connectivity of form and function. In the case of function-to-form mapping, we can restate Suh’s principle of functional independence as a channelized architecture, where each function is accomplished by one piece of form, and each piece of form performs only one function. In the connectivity case, the tradeoff is basically throughput and reliability versus cost.

These extremes of the ASSIGNING Pattern can be seen as architecture styles—that is, soft constraints or driving principles that simplify the architecture (typically by making a series of similar decisions identical) and that often result in more elegant architectures. The channelized versus fully cross-strapped trade-off and corresponding styles are discussed in Box 16.2.

The PARTITIONING Pattern

The PARTITIONING Pattern appears when we have a single set of N elements and we need to partition them into a number of non-empty and disjoint subsets (from 1 to N). Each element must be assigned to exactly one subset. In other words, we cannot “repeat” elements (assign them to more than one subset) or leave elements out (assign them to no subset). More formally, given a set of N elements U = {e1, e2, … , eN}, an architecture in the PARTITIONING problem is given by a partition P of the set U, which is any division of U into non-overlapping subsets P = {S1, S2, … , Sm}, where Si ⊆ U, Si ≠ ∅ for all i, and 1 ≤ m ≤ N, and the subsets are mutually exclusive and exhaustive. In other words, P is a valid architecture if:

  1. The union of all subsets in P is equal to U: S1 ∪ S2 ∪ … ∪ Sm = U

  2. The subsets in P are mutually exclusive; the intersection of any two subsets is empty: Si ∩ Sj = ∅ for all i ≠ j
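The two conditions can be checked mechanically. A minimal sketch in Python (the function name and the use of Python sets are our own choices, not from the text):

```python
def is_valid_partition(universe, subsets):
    """Check the two conditions above: subsets are non-empty and pairwise
    disjoint (mutually exclusive), and their union equals the universe."""
    if any(not s for s in subsets):
        return False                 # empty subsets are not allowed
    union = set()
    for s in subsets:
        if union & s:                # overlaps a previously seen subset
            return False
        union |= s
    return union == set(universe)    # exhaustive?

U = {"imager", "radar", "sounder", "radiometer", "lidar", "GPS receiver"}
P = [{"radar", "radiometer"}, {"imager", "sounder", "GPS receiver"}, {"lidar"}]
print(is_valid_partition(U, P))        # True
print(is_valid_partition(U, P[:2]))    # False: "lidar" is left out
```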

The instrument-packaging problem from our NEOSS example can be seen as an instance of the PARTITIONING Pattern: We have a set of instruments, and we are looking for different ways of partitioning this set into satellites. For example, all instruments can be assigned to a single large satellite, or they can all have their own dedicated satellites, or anything in between.

An architecture in the PARTITIONING Pattern is given by a partition P={S1,S2,,Sm}. In the NEOSS example, given the set of instruments {imager, radar, sounder, radiometer, lidar, GPS receiver}, one possible partition is given by {{radar, radiometer}, {imager, sounder, GPS receiver}, {lidar}}, and another one is given by {{radar}, {lidar}, {imager, sounder}, {radiometer, GPS receiver}}, as shown in Figure 16.10.


Figure 16.10  Illustration of the PARTITIONING Pattern in the NEOSS instrument-packaging problem.

Such a partition can be represented in different ways—for example, an array of integers where position i indicates the index of the subset to which element i is assigned, so that entries with the same value designate elements that are in the same subset.
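A sketch of this integer-array representation in Python (names are illustrative; elements are indexed 0 to N − 1):

```python
def array_to_partition(assignment):
    """assignment[i] holds the subset index of element i; elements that
    share a value end up in the same subset."""
    subsets = {}
    for element, idx in enumerate(assignment):
        subsets.setdefault(idx, set()).add(element)
    return list(subsets.values())

# Elements 0..5; entries with the same value are in the same subset.
print(array_to_partition([0, 1, 1, 0, 2, 1]))
```

Note that the encoding is not unique: relabeling the subset indices yields the same partition (for example, [0, 1, 1, 0] and [1, 0, 0, 1] encode the same division into two subsets), so enumeration schemes usually fix a canonical labeling.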

A general pictorial representation of the PARTITIONING Pattern is provided in Figure 16.11, where two different partitions for a set of 8 elements are shown, together with their representations as arrays of integers. The partition on the left side divides the 8 elements into four subsets, and the partition on the right side divides the 8 elements into two subsets.


Figure 16.11  Pictorial representation of the PARTITIONING Pattern. Two different PARTITIONING architectures are shown for a simple case with 8 elements and 4,140 possible architectures.

The number of possible partitions in a set grows very quickly with the number of elements in the set. For reference, there are 52 ways of partitioning 5 elements and over 115,000 ways of partitioning 10 elements. Because of the two constraints provided in the definition, partitions are harder to count than simple assignments or subsets. [19]
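These counts are the Bell numbers, which can be computed with the Bell triangle recurrence. A small sketch (the implementation details are our own, not from the text):

```python
def bell(n):
    """Number of ways to partition a set of n elements, via the Bell
    triangle: each row starts with the last entry of the previous row,
    and each further entry adds its left neighbor to an entry of the
    previous row."""
    row = [1]
    for _ in range(n - 1):
        new_row = [row[-1]]
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
    return row[-1]

print(bell(5))   # 52
print(bell(8))   # 4140, the count quoted for Figure 16.11
print(bell(10))  # 115975
```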

One might think that the PARTITIONING Pattern does not appear very often, since it has such “hard” constraints. In practice, though, many programmed decisions in system architecture are naturally formulated as PARTITIONING problems, especially when one is mapping function to form (Task 2 in Table 16.1) or decomposing function or form (Task 1 in Table 16.1).

  • Example 1: If one considers a set of underground oil reservoirs, the problem of how many facilities to build, and where, can be formulated as a PARTITIONING problem if subsets of reservoirs are implicitly identified with facilities. For example, if we consider three reservoirs A, B, and C, there are five possible partitions: (1) {A}, {B}, {C}, which would assign one facility to each reservoir; (2) {A, B}, {C}, which would assign ­reservoirs A and B to one facility and reservoir C to another facility; (3) {A, C}, {B}, which would assign reservoirs A and C to one facility and reservoir B to another facility; (4) {B, C}, {A}, which would assign reservoirs B and C to one facility and reservoir A to another facility; and (5) {A, B, C}, which would assign reservoirs A, B, and C to a single facility. Note that this formulation is based on abstract facilities. In other words, the formulation would tell us only which reservoirs are connected to which facilities, not the exact position of the facility, which would need to be determined afterwards.

  • Example 2: Consider an IT network for a large bank with 1,000 branches and 3,000 ATMs across the United States. These branches and ATMs need to be interconnected through a number of routers. Deciding how many routers to use can be seen as a PARTITIONING problem, assuming that each branch and ATM needs to be connected to exactly one router. For example, one could imagine a solution in which a single router provides service to all 4,000 nodes.
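The five partitions of the reservoir set {A, B, C} in Example 1 can be enumerated recursively. A sketch in Python (the generator structure is our own, not from the text):

```python
def partitions(elements):
    """Recursively generate every partition of a list of distinct elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing subset (facility) in turn...
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] + [first]] + smaller[i + 1:]
        # ...or give `first` a facility of its own.
        yield smaller + [[first]]

for p in partitions(["A", "B", "C"]):
    print(p)  # the five facility layouts listed in Example 1
```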

When it appears, the PARTITIONING Pattern is fundamentally about how much centralization we want in our architecture. Hence, it entails a discussion about two new architecture styles: centralized (or monolithic) and decentralized (or distributed) architectures. These styles are discussed in Box 16.3.

Interactions between elements, namely the synergies and interferences introduced in the context of the DOWN-SELECTION Pattern, play a key role in the PARTITIONING Pattern. In the DOWN-SELECTION Pattern, it was assumed that interactions between elements {ei, ej} occur as long as both elements are selected. In the PARTITIONING Pattern, we acknowledge that physical interactions usually require that the elements be connected. In the NEOSS example, if the radar altimeter and the radiometer are both selected in the DOWN-SELECTION problem, but are put on different spacecraft in different orbits in the PARTITIONING problem, it will be very hard to obtain the benefits of their synergistic interaction! To capture this synergy, the instruments need to be either on the same spacecraft or close enough to allow for cross-registration. Similarly, instruments can interfere electromagnetically with each other only if they are on the same spacecraft or close enough.

For example, the Envisat satellite is the largest civil Earth-observing satellite ever built. It weighs about 8 metric tons and carries 10 remote sensing instruments. Because the instruments are on the same satellite, they can look at the same spot on Earth’s surface simultaneously. Thus scientists can make the most of the synergies between these instruments by combining their measurements to generate rich data products. [20] Furthermore, Envisat had only one solar panel, one set of batteries, one set of communication antennas, and one frame for all its payload, and only one launch was needed. All of these things would have had to be replicated 10 times if the instruments had been distributed across 10 smaller satellites. However, this monolithic approach also has disadvantages. In Envisat, the synthetic aperture radar could work for only 2% of the orbit, because the solar panel of the satellite did not produce enough power for all the instruments to work at the same time; synthetic aperture radars consume a lot of power. Moreover, if the launch vehicle or the solar panel had failed, all the instruments would have been lost. In another example, the Metop satellite had a large conically scanning instrument that induced vibrations on the platform, and these vibrations affected a very sensitive sounder. [21] This is an example of a negative interaction, or interference, between elements in a monolithic architecture. Interactions can also be programmatic in nature: Envisat and Metop could not be launched until all their instruments, including the least technologically mature, were ready. Envisat cost over 2 billion euros and took over 10 years to develop, which means that by the time it was launched, some of its technologies were almost obsolete.

The PERMUTING Pattern

The PERMUTING Pattern appears when we have a set of elements, and each element must be assigned to exactly one position. Choosing these positions is often equivalent to choosing the optimal ordering or sequence for a set of elements. For example, 123, 132, 213, 231, 312, and 321 are six different permutations of the digits 1, 2, and 3. Similarly, if we have three satellite missions, there are six different orders in which they can be launched. More formally, given a set of N generic elements of function or form U = {e1, e2, … , eN}, an architecture in the PERMUTING problem is given by a permutation O, that is, any arrangement of the elements in U into a particular order:*

O = [x1, x2, … , xN], where xi ∈ [1, N] and xi ≠ xj for all i ≠ j

An architecture in the PERMUTING Pattern is thus readily represented by an array of integers with two possible interpretations: element-based or position-based. For example, the sequence {element 2, element 4, element 1, element 3} can be represented by the array O=[2,4,1,3] (element-based representation) or by the array O=[3,1,4,2] (position-based representation).
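Converting between the two representations amounts to inverting the permutation. A sketch in Python, assuming the 1-indexed arrays of the example above (the function name is our own):

```python
def invert(order):
    """Convert between the element-based and position-based forms
    (1-indexed). The two representations are inverse permutations of
    each other, so the same function maps in both directions."""
    position = [0] * len(order)
    for pos, element in enumerate(order, start=1):
        position[element - 1] = pos  # element `element` sits at position `pos`
    return position

print(invert([2, 4, 1, 3]))  # [3, 1, 4, 2]: element-based -> position-based
print(invert([3, 1, 4, 2]))  # [2, 4, 1, 3]: and back again
```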

The pictorial representation of the PERMUTING Pattern is provided in Figure 16.12, where 2 out of 120 different permutations, and their representations as element-based arrays of integers, are shown for a generic set of 5 elements.


Figure 16.12  Pictorial representation of the PERMUTING Pattern. Two different PERMUTING architectures and their element-based representations are shown for a simple case with 5 elements and 120 possible architectures.

The size of the tradespace of a PERMUTING problem of N elements is given by the factorial of N: N! = N(N − 1)(N − 2) ⋯ 1. Note that the factorial grows extremely fast, faster than any exponential function. For 15 elements, there are over 1 trillion different permutations!

PERMUTING problems often concern the geometric layout of elements or the sequence of a set of processes or events. Recall from Parts 2 and 3 that operations are part of the architecture of the system, and they need to be considered early in the architecture process, especially because we can derive goals and metrics directly from the concept of operations during stakeholder analysis. For example, one could try to optimize the sequence of destinations or, more broadly, tasks for an unmanned vehicle. The PERMUTING Pattern is also prominent when one is architecting a portfolio of systems and needs to decide the order in which the systems should be deployed. In what order should we launch the satellites in the NEOSS example? In this case, a key consideration will be ensuring that missions are launched in an order that preserves the data continuity of existing measurement records. Other examples of PERMUTING problems include:

  • Example 1: Very large-scale integration (VLSI) circuits have on the order of 10^10 transistors per die. Therefore, it is extremely important to minimize the total length of connections, because small variations in length can have a big impact on fabrication cost at the scale at which these chips are fabricated. One way of achieving this is to optimize the placement of interconnected cells on the chip so that the total length of connections is minimized. This is an instance of the PERMUTING Pattern, where the elements are gates or even individual transistors, and the positions are locations on the chip.

  • Example 2: A few years ago, NASA canceled the flagship space exploration program “Constellation” because of budgetary restrictions. To replace Constellation, a senior advisory committee of experts led by former Lockheed Martin CEO Norm Augustine generated several alternative strategies, one of which was known as the Flexible Path. In the context of the Flexible Path, an important architectural decision is the optimal sequence of destinations on the path to Mars (for example, Lagrange Points, then Moon surface, then Near-Earth Asteroid, and then Mars, or reverse order of Moon Surface and Near-Earth Asteroid). This is a complex decision that needs to take into account scientific, technological, and programmatic considerations.

We have seen that there are two major classes of PERMUTING problems: those that deal with time (scheduling) and those that deal with topology and geometric considerations. However, the PERMUTING Pattern is more general than it may appear at first. The PERMUTING Pattern is essentially a matching of a set of N elements to the set of integers from 1 to N, where no two elements can be matched to the same integer. Note that this formulation does not necessarily imply an ordering between the elements. The only condition is that the options be exclusive; if an element is assigned to an option, then no other element can be assigned to it. For example, if a circuit is assigned to a position in an electrical board, no other circuit can be assigned to that position.

Given this generalization, PERMUTING problems are most useful for Task 5 in Table 16.1, connectivity of function to form, as well as for defining the system deployment and/or concept of operations. Typical tradeoffs in time-related PERMUTING problems involve balanced value delivery over time and careful consideration of resource budgets. In these cases, the system architect often has to choose between front-loaded (“greedy”) system deployment and incremental system deployment. This is discussed in Box 16.4.

The CONNECTING Pattern

The CONNECTING Pattern appears when we have a single, fixed set of elements, and we want to decide how to connect them. These connections may or may not have a sense of “direction.” More formally, given a fixed set of m generic elements or nodes U = {e1, e2, … , em}, an architecture in the CONNECTING problem is given by a graph that has U as its nodes and a list of N edges on U, where 1 ≤ N ≤ m²: G = {V1, V2, … , VN}, where each edge connects two nodes: Vk = {ei, ej} ∈ U × U.

The CONNECTING Pattern can be readily represented by a square binary matrix called the adjacency matrix. More precisely, the adjacency matrix A of a graph G with m nodes is an m × m matrix where A(i,j) = 1 if node i is directly connected by an edge to node j, and A(i,j) = 0 otherwise. Note that in undirected graphs (graphs in which edges have no orientation), A(i,j) = 1 implies A(j,i) = 1, and therefore the adjacency matrix of an undirected graph is symmetric. This is not the case for directed graphs, where edges do have a sense of direction.
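A minimal sketch of building and checking an adjacency matrix in Python (the function name and edge-list input format are our own choices):

```python
def adjacency_matrix(m, edges, directed=False):
    """Build the m x m adjacency matrix of a graph from a list of (i, j)
    edges (0-indexed). For undirected graphs each edge is mirrored, so
    the resulting matrix is symmetric."""
    A = [[0] * m for _ in range(m)]
    for i, j in edges:
        A[i][j] = 1
        if not directed:
            A[j][i] = 1
    return A

undirected = adjacency_matrix(3, [(0, 1), (1, 2)])
directed = adjacency_matrix(3, [(0, 1), (1, 2)], directed=True)

# Symmetry holds for the undirected graph but not for the directed one.
print(all(undirected[i][j] == undirected[j][i] for i in range(3) for j in range(3)))  # True
print(all(directed[i][j] == directed[j][i] for i in range(3) for j in range(3)))      # False
```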

The pictorial representation of the CONNECTING Pattern is provided in Figure 16.13, which shows two different ways of connecting a set of six nodes, and the corresponding binary matrices.


Figure 16.13  Pictorial representation of the CONNECTING Pattern. Two different CONNECTING architectures and their corresponding representations are shown for a simple case with 6 elements and 32,768 possible architectures.

The size of the tradespace of a CONNECTING problem is given by the size of the set of all possible adjacency matrices on a set of m elements. The number of different adjacency matrices depends on whether the graphs being considered are undirected or directed, and on whether nodes are allowed to be connected to themselves. This leaves the four different cases that are summarized in Table 16.4.

Table 16.4 | Size of the tradespace for the CONNECTING Pattern in four different cases.

|                                                        | Directed graph (non-symmetric) | Undirected graph (symmetric) |
|--------------------------------------------------------|--------------------------------|------------------------------|
| Self-connections allowed (diagonal meaningful)         | 2^(m^2)                        | 2^(m(m+1)/2)                 |
| Self-connections not allowed (diagonal not meaningful) | 2^(m^2 - m) = 2^(m(m-1))       | 2^(m(m-1)/2)                 |

The foregoing formulas assume that there can be at most one connection between any two given nodes, or between a node and itself. In other words, they assume that the adjacency matrix is a Boolean matrix.
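The four formulas can be cross-checked by brute-force enumeration for a small m. A sketch in Python (the function names are our own):

```python
from itertools import product

def tradespace_size(m, directed, self_loops):
    """Closed-form counts from Table 16.4: one binary decision per free
    cell of the adjacency matrix."""
    if directed:
        cells = m * m if self_loops else m * (m - 1)
    else:
        cells = m * (m + 1) // 2 if self_loops else m * (m - 1) // 2
    return 2 ** cells

def brute_force(m, directed, self_loops):
    """Enumerate all m x m Boolean matrices and count the valid ones."""
    count = 0
    for bits in product([0, 1], repeat=m * m):
        A = [bits[i * m:(i + 1) * m] for i in range(m)]
        if not self_loops and any(A[i][i] for i in range(m)):
            continue  # diagonal must be zero
        if not directed and any(A[i][j] != A[j][i]
                                for i in range(m) for j in range(m)):
            continue  # undirected graphs need a symmetric matrix
        count += 1
    return count

for directed in (True, False):
    for self_loops in (True, False):
        assert tradespace_size(3, directed, self_loops) == brute_force(3, directed, self_loops)

print(tradespace_size(6, directed=False, self_loops=False))  # 32768, as in Figure 16.13
```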

Examples of CONNECTING problems follow.

  • Example 1: Say we are in charge of architecting the water distribution network of a small region. The region is divided into areas that differ in various characteristics (such as population density and natural resources), and each area needs water. In each area, there may or may not be a water generation/treatment plant (a well or a desalination plant). The problem of figuring out which connections to make between areas, given a certain layout of water generation/treatment plants, is most naturally formulated as an instance of the CONNECTING Pattern.

  • Example 2: The architecture of a power grid also fits the CONNECTING Pattern very naturally. Energy is produced and/or stored at nodes, and it is transported between nodes through connections. Having more connections costs a lot of money, but it allows more balanced management of the electricity produced.

The CONNECTING Pattern is useful to address Task 5 of Table 16.1, which we labeled as “Connecting Form and Function.” The two previous examples essentially deal with the definition of physical interfaces between system elements, but the nature of these interfaces can be informational, as in data networks, or can simply indicate logical dependencies (for example, software systems).

We argued in Section 16.1 that the connectivity task appears to some extent in all kinds of systems, because all systems have internal and external interfaces. One way to think about the CONNECTING Pattern is to think of the system at hand as a network and to look at different ways of connecting the set of nodes in a network. All systems can be represented as networks, but this representation is trivial for systems that are networks, such as data networks, transportation networks, power networks, and satellite constellations.* It is also easy to think about systems of systems as networks, where each system is a node and the edges are the interfaces between the individual systems.

The most natural commodity that flows through the nodes of these networks is arguably data, but vehicle traffic (ground, air, or space), electrical power, water, natural gas, and food are also possibilities. The amount and distribution of commodity that flows through the network are important drivers of the architecture. For example, centralized architectures that have one or more nodes through which most traffic flows can suffer from delays due to the appearance of bottlenecks.

More generally, the relevant tradeoffs in CONNECTING problems usually have to do with the degree of connectivity of nodes, and with their effect on performance metrics such as latency and throughput, as well as on other emergent network properties such as reliability and scalability. [22] Note, in some of these aspects, the similarity to the ASSIGNING Pattern, which can be seen as an instance of the CONNECTING Pattern where we have two types of nodes in the network, which we called the “left” set and the “right” set.

The main architectural styles in the CONNECTING Pattern come from network topologies: bus, star, ring, mesh, tree, and hybrid. These different styles are compared in Box 16.5.


Figure 16.14  Architectural styles in the CONNECTING Pattern, borrowed from network topologies.
