16.2 Formulating a System Architecture Optimization Problem

Let’s start with the Apollo example from Chapter 14. Recall that in Chapter 14 we defined nine decisions and a number of allowed values for each decision (such as EOR yes or no, command module crew of 2 or 3, service module fuel cryogenic or storable). We also had a way of measuring how good an Apollo architecture is by means of two metrics: total launched mass (IMLEO) and probability of mission success. We used the rocket equation to link mass to the decisions, and a risk table gave us the probability of mission success for each combination of decision variables.

This example can be formulated in such a way that we have a set of architectural decisions $\{d_i\} = \{d_1, d_2, \ldots, d_N\}$, each with a set of choices or allowed values $\{\{d_{ij}\}\} = \{\{d_{11}, d_{12}, \ldots, d_{1m_1}\}, \{d_{21}, d_{22}, \ldots, d_{2m_2}\}, \ldots, \{d_{N1}, d_{N2}, \ldots, d_{Nm_N}\}\}$. For example, $d_1$ corresponds to EOR and can take two values: $d_1(\text{EOR}) = \{d_{11} = \text{yes}, d_{12} = \text{no}\}$. An architecture is then defined by choosing one value of each decision; for example, architecture

$$A = \{d_i = d_{ij}\} = [d_1 = d_{12};\ d_2 = d_{24};\ \ldots;\ d_N = d_{Nm_N}].$$
For instance, the true Apollo architecture with LOR can be represented by the following array of values:

$$A = \{\text{EOR} = \text{yes};\ \text{earthOrbit} = \text{orbit};\ \text{LOR} = \text{yes};\ \text{moonArrival} = \text{orbit};\ \text{moonDeparture} = \text{orbit};\ \text{cmCrew} = 3;\ \text{lmCrew} = 2;\ \text{smFuel} = \text{storable};\ \text{lmFuel} = \text{storable}\}$$

or, more compactly, if we omit the names of the decisions,

$$A = \{\text{yes};\ \text{orbit};\ \text{yes};\ \text{orbit};\ \text{orbit};\ 3;\ 2;\ \text{storable};\ \text{storable}\}$$
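
To make the encoding concrete, here is a minimal sketch (in Python, purely for illustration) of the Apollo decision space and the architecture above, represented as one chosen option per decision. Option lists that are not spelled out in the text (for example, the allowed values of earthOrbit, moonDeparture, lmCrew, and lmFuel) are assumptions.

```python
# Illustrative encoding of the Apollo decision space: decision name -> allowed options.
# Option lists not given explicitly in the text are assumptions.
decisions = {
    "EOR":           ["yes", "no"],
    "earthOrbit":    ["orbit", "direct"],
    "LOR":           ["yes", "no"],
    "moonArrival":   ["orbit", "direct"],
    "moonDeparture": ["orbit", "direct"],
    "cmCrew":        [2, 3],
    "lmCrew":        [0, 1, 2, 3],
    "smFuel":        ["cryogenic", "storable"],
    "lmFuel":        ["cryogenic", "storable"],
}

# The Apollo architecture from the text, expressed as one choice per decision.
apollo = {
    "EOR": "yes", "earthOrbit": "orbit", "LOR": "yes",
    "moonArrival": "orbit", "moonDeparture": "orbit",
    "cmCrew": 3, "lmCrew": 2,
    "smFuel": "storable", "lmFuel": "storable",
}
```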

The next step in formulating a system architecture optimization problem is to express how well an architecture satisfies the needs of the stakeholders. We do this with a set of metrics $M = [M_1, \ldots, M_P]$ and a function $V()$ that translates architectures (represented by arrays of symbols) into metric values. In the Apollo example, the function $V()$ would contain the rocket equation and the risk look-up table, and the two outputs would be the values for IMLEO and probability of mission success. This is already an enormous assumption: Can a few metrics summarize all the important features of the architecture? Formulating a problem that can be treated computationally often requires compact measures like these, and we will want to examine the biases and assumptions built into our metrics.
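
A minimal, hypothetical sketch of what $V()$ might look like for the Apollo example follows. The rocket-equation roll-up and the "risk table" are drastically simplified stand-ins, and every number (Isp values, delta-v per leg, crew-module mass, success probabilities) is a placeholder rather than the book's actual model.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def leg_mass(payload_kg, delta_v, isp_s, structural_fraction=0.1):
    """Initial mass needed to push `payload_kg` through one propulsive leg,
    using the rocket equation m0 = mf * exp(dv / (Isp * g0)) plus a simple
    structural-mass allowance. Illustrative, not a detailed vehicle model."""
    mass_ratio = math.exp(delta_v / (isp_s * G0))
    propellant = payload_kg * (mass_ratio - 1.0)
    return payload_kg + propellant * (1.0 + structural_fraction)

def value_function(arch):
    """Toy V(): maps an architecture to (IMLEO in kg, P(mission success)).
    All numbers below are rough placeholders, not the book's Apollo model."""
    isp = 316.0 if arch["smFuel"] == "storable" else 421.0   # assumed Isp values
    payload = 2000.0 * arch["cmCrew"]                        # assumed kg per crew member
    mass = leg_mass(payload, delta_v=3900.0, isp_s=isp)      # lunar descent + ascent (placeholder)
    mass = leg_mass(mass, delta_v=4000.0, isp_s=isp)         # TLI + lunar orbit insertion (placeholder)
    imleo = mass
    p_success = 0.90 if arch["LOR"] == "yes" else 0.85       # toy stand-in for the risk table
    return imleo, p_success
```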

At this point, we can proceed to generating an architectural tradespace (such as those presented in Chapter 14 and Chapter 15) by enumerating all possible architectures and evaluating them according to the metrics defined. Alternatively, we can focus on finding the Pareto frontier by solving the following optimization problem:

$$A^* = \arg\max_A\ M = V(A = \{d_i = d_{ij}\})$$

In other words, this optimization problem attempts to find a set of non-dominated architectures $A^*$ in the space defined by the metrics $M$, computed from the architecture using the function $V()$. Because there is more than one metric (such as IMLEO and probability of mission success), the non-dominated architectures $A^*$ optimize the tradeoffs between the metrics $M$. Let’s look at the components of the previous equation in more detail.
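
Under the assumptions of the earlier sketches (the decisions dictionary and the toy value_function), both alternatives can be expressed in a few lines: enumerate the full-factorial tradespace, evaluate each architecture, and keep the non-dominated ones. Here IMLEO is minimized and probability of success is maximized.

```python
from itertools import product

def enumerate_architectures(decisions):
    """Generate every architecture: one option per decision (full factorial)."""
    names = list(decisions)
    for combo in product(*(decisions[n] for n in names)):
        yield dict(zip(names, combo))

def dominates(m_a, m_b):
    """True if metrics m_a dominate m_b (minimize IMLEO, maximize P(success))."""
    imleo_a, ps_a = m_a
    imleo_b, ps_b = m_b
    at_least_as_good = imleo_a <= imleo_b and ps_a >= ps_b
    strictly_better = imleo_a < imleo_b or ps_a > ps_b
    return at_least_as_good and strictly_better

def pareto_front(architectures, value_function):
    """Return the non-dominated (architecture, metrics) pairs."""
    evaluated = [(arch, value_function(arch)) for arch in architectures]
    return [(a, m) for a, m in evaluated
            if not any(dominates(m2, m) for _, m2 in evaluated)]

# Example usage with the sketches above:
# front = pareto_front(enumerate_architectures(decisions), value_function)
```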

Architectural decisions are the variables that we optimize over; we select different choices and measure how well the architecture performs as a result. As we discussed in Chapter 14, in most cases, architectural decisions will be represented using discrete and categorical variables (such as propellant type = {LH2, CH4, RP-1}), rather than continuous variables. This has important implications; working in a space with categorical variables precludes the use of gradient-based optimization algorithms. Furthermore, combinatorial optimization problems are usually harder to solve than continuous optimization problems. In fact, most combinatorial optimization problems are what computer scientists call NP-hard. Informally, this means that the time it takes to solve them exactly grows exponentially with the size of the problem (the number of decisions, for example). In practice, these problems become impossible to solve exactly (that is, to find the true globally optimal architectures) for relatively small numbers of decisions (say, 15 to 20). Moreover, there is a limit on the number of decisions and options that we can handle computationally, and it is not far from the magic rule of 7 ± 2 decisions and options per decision. [1] Consequently, in formulating an architecture optimization problem, it is very important to choose the architectural decisions and their range of values carefully.
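
The size of the full-factorial tradespace is simply the product of the number of options per decision, $\prod_i m_i$, which makes the exponential growth easy to see; the option counts below match the illustrative decision space sketched earlier.

```python
from math import prod

# Option counts for the nine Apollo decisions sketched above (lmCrew has four options)
option_counts = [2, 2, 2, 2, 2, 2, 4, 2, 2]
print(prod(option_counts))   # 1024 architectures: easy to enumerate exhaustively

# With, say, 7 options for each of 20 decisions, exhaustive enumeration is hopeless:
print(7 ** 20)               # roughly 8e16 architectures
```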

The value function $V()$ links architectural decisions to metrics. It can be seen as the “transfer function” of the evaluation model, as illustrated in Figure 16.1.

Figure 16.1  The value function takes an architecture as an input and provides its figures of merit (value) as the output.

The value function is a very concentrated summary of the stakeholder analysis of the system: one that takes a compressed version of the architecture as an input (only the values of all the decisions involved) and provides one or a few outputs. The challenge is to choose, from the set of metrics developed in the stakeholder analysis, a subset that is feasible and useful to model. Let’s discuss how to do that.

Just as decisions need to be architecturally distinguishing (Chapter 14), metrics need to have sensitivity to decisions. We expect the metric to show a variation across different architectures. Indeed, if the metric gives the same output for all architectures, it isn’t useful. For instance, if none of our Hybrid Car architectures are inherently safer than others, safety should not be used as a metric, because it is not architecturally distinguishing.
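
One simple way to check this in a computational setup is to look at the spread of each metric across the enumerated architectures and flag any metric that barely varies. The sketch below assumes the enumeration and toy value function from the earlier snippets.

```python
def metric_spread(architectures, value_function, names=("IMLEO", "P_success")):
    """Print the min/max of each metric across the tradespace; a metric whose
    spread is (near) zero does not distinguish between architectures."""
    values = [value_function(a) for a in architectures]
    for i, name in enumerate(names):
        column = [v[i] for v in values]
        lo, hi = min(column), max(column)
        note = "  <-- not architecturally distinguishing" if lo == hi else ""
        print(f"{name}: min={lo:.4g}, max={hi:.4g}{note}")

# Example usage with the sketches above:
# metric_spread(list(enumerate_architectures(decisions)), value_function)
```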

Just as there were limits to the number of decisions and options, here too we have practical limits in the number of metrics. When there are too many metrics, most architectures become non-dominated. If we have 100 metrics that all come from stakeholder analysis, every architecture that gets the maximum score in one single metric will be non-dominated. In practice, a number of metrics between 2 and 5 is usually adequate, with 2 or 3 metrics being preferable in most cases.
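
A quick Monte Carlo illustration of why this happens: if architectures are scored randomly on k metrics, the fraction that is non-dominated grows rapidly with k. The sample sizes and uniform random scores below are arbitrary choices for illustration only.

```python
import random

def nondominated_fraction(n_archs=200, n_metrics=2, seed=0):
    """Estimate the fraction of randomly scored architectures that are
    non-dominated when all metrics are maximized (Monte Carlo illustration)."""
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(n_metrics)] for _ in range(n_archs)]

    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    front = [p for p in points if not any(dominates(q, p) for q in points)]
    return len(front) / n_archs

for k in (2, 3, 5, 10):
    print(k, "metrics ->", round(nondominated_fraction(n_metrics=k), 2), "non-dominated")
```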

Value functions often have a subjective component that arises from fuzzy and ambiguous stakeholder needs that are hard to quantify, such as “community engagement.” Even “scientific value” has a strong subjective component because of the uncertainty about whether a given scientific discovery will lead to subsequent scientific discoveries. Subjectivity in the evaluation of architectures is not inherently bad, but it can certainly hinder the decision-making process with consistency and bias issues. When we use subjective judgments in our value functions, it is important to maintain the traceability of our assessments to provide the rationale behind the scores. [2] Appendix C briefly discusses how knowledge-based systems, a technology from artificial intelligence, can help achieve this goal.

In addition to objective functions, most architecture optimization problems have constraints. In the Apollo example, we used constraints to eliminate nonsensical combinations of decisions (for example, LOR = yes and moonArrival = direct do not make sense together, because Lunar Orbit Rendezvous requires stopping at the Moon to assemble spacecraft before descent and therefore precludes direct arrival).

Constraints can also be used to eliminate architectures or families of architectures that are very likely to be dominated. For example, hybrid cars with a range of less than 5 km are unlikely to find a substantial market. Therefore, if at any point in the evaluation process this threshold is not achieved, the architecture can be eliminated.
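
Computationally, such constraints often amount to a simple feasibility predicate applied before evaluation. The sketch below reuses the Apollo decision names from the earlier snippets and encodes the LOR/Moon-arrival rule mentioned above.

```python
def is_feasible(arch):
    """Hard (logical) constraint from the text: Lunar Orbit Rendezvous requires
    stopping in lunar orbit, so LOR = yes is incompatible with a direct arrival."""
    if arch["LOR"] == "yes" and arch["moonArrival"] == "direct":
        return False
    return True

# Filter before evaluation so infeasible combinations never reach V():
# feasible = [a for a in enumerate_architectures(decisions) if is_feasible(a)]
```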

In general, constraints can be used to express more or less stringent goals. Recall that in Chapter 11, we classified goals according to how constraining they were: absolutely constraining, constraining, or unconstrained. Here we formalize this idea. Computer scientists often distinguish between hard constraints and soft constraints, depending on their effect on the architectures that violate them. An architecture that does not satisfy a hard constraint is eliminated from the tradespace. An architecture that violates a soft constraint is somehow penalized (such as with a cost penalty) but is not eliminated from the tradespace.

A given constraint in a real architecting problem can be formulated as either a hard or a soft constraint in a computational formulation. In some cases, hard constraints are preferable: for example, when a soft constraint might result in the evaluation of nonsensical architectures. This is the case for constraints that eliminate invalid combinations of decisions and options, such as the “LOR/Moon arrival” example discussed earlier. On the other hand, the constraint that the hybrid car have a range greater than 5 km could be implemented as a soft constraint. Soft constraints are generally implemented as penalties; the further an architecture falls past the soft constraint, the higher the penalty. Their goal is to speed up the search by driving the algorithm away from unpromising regions quickly.*
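
A soft constraint, by contrast, typically appears as a penalty folded into the objective. The sketch below implements the hybrid-car range example; the linear penalty form and the weight are assumptions chosen purely for illustration.

```python
def range_penalty(range_km, threshold_km=5.0, weight=10.0):
    """Soft constraint on hybrid-car range: zero penalty at or above the
    threshold, growing with the shortfall below it. The linear form and the
    weight are assumptions chosen purely for illustration."""
    shortfall = max(0.0, threshold_km - range_km)
    return weight * shortfall

def penalized_score(score, range_km):
    """Fold the soft-constraint penalty into a single score to be maximized."""
    return score - range_penalty(range_km)
```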

In summary, one can formulate a system architecture optimization problem by encoding the architecture as a set of decisions, defining one or more metrics that encapsulate the needs of the stakeholders, creating a value function mapping architectural decisions to metrics, and adding a set of hard or soft constraints as needed.
