Quality attributes describe externally visible properties of a software system and the expectations for that system’s operation. Quality attributes define how well a system should perform some action. These -ilities of the system are sometimes called quality requirements. Here is a list of some common quality attributes from Software Architecture in Practice [BCK12].
| Design Time Properties | Runtime Properties | Conceptual Properties |
|---|---|---|
| Modifiability, Maintainability, Reusability, Testability, Buildability or Time-to-Market | Availability, Reliability, Performance, Scalability, Security | Manageability, Supportability, Simplicity, Teachability |
Every architecture decision promotes or inhibits at least one quality attribute. Many design decisions promote one set of quality attributes while inhibiting others that are also important! When this happens, we trade one quality attribute for another, choosing an architectural structure that favors one quality attribute at the expense of another.
When digging for ASRs, we’ll spend most of our time working with quality attributes. Quality attributes are used throughout the design process to guide technology selection, choose structures, pick patterns, and evaluate the fitness of our design decisions.
Traditional software engineering textbooks usually discuss two classes of requirements. Functional requirements describe the behavior of the software system. Non-functional requirements describe all system requirements that aren’t functional requirements, including what we’re calling quality attributes and constraints.
When you are designing a software architecture, it’s useful to distinguish between functionality, constraints, and quality attributes because each type of requirement implies a different set of forces are influencing the design. For example, constraints are non-negotiable whereas quality attributes can be nuanced and involve significant trade-offs.
Yes, quality attributes are non-functional requirements, but it is strange to use this term to describe them since quality attribute scenarios (sometimes called quality requirements) have a functional piece to them. Quality attributes make sense only in the context of system operation. In a quality attribute scenario, an artifact’s response is the direct result of some function.
A quality attribute is just a word. Scalability, availability, and performance are meaningless by themselves. We need to give these words meaning so we understand what to design. We use a quality attribute scenario to provide an unambiguous description of a quality attribute.
Quality attribute scenarios describe how the software system is expected to operate within a certain environmental context. There is a functional component to each scenario—stimulus and response—just like any feature. Quality attribute scenarios differ from functional requirements in that they qualify the response using a response measure. It is not enough to respond correctly; how the system responds is also important. The diagram visually depicts the six parts of a quality attribute scenario.
The stimulus is an event that requires the system to respond in some way. The stimulus kicks off the scenario and will vary depending on the type of quality attribute. For example, the stimulus for an availability scenario might be a node becoming unreachable, whereas the stimulus for a modifiability scenario might be a request for a change.
The source is the person or system that initiates the stimulus. Examples include users, system components, and external systems.
The artifact is the part of the system whose behavior is characterized in the scenario. The artifact can be the whole system or a specific component.
The response is an externally visible action that takes place in the artifact as a result of the stimulus. Stimulus leads to response.
The response measure defines the success criteria for the scenario by defining what a successful response looks like. Response measures should be specific and measurable.
The environment context describes the operational circumstances surrounding the system during the scenario. The environment context should always be defined even if the context is normal. Abnormal contexts, such as peak load or a specific failure condition, are also interesting to consider.
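The six parts described above can be captured as a simple data structure, which is handy when cataloging scenarios for later prioritization. Here is a minimal sketch in Python; the class name and the example values (drawn from a Lionheart-style availability scenario) are illustrative, not part of any standard.

```python
from dataclasses import dataclass


@dataclass
class QualityAttributeScenario:
    """The six parts of a formal quality attribute scenario."""
    source: str            # person or system that initiates the stimulus
    stimulus: str          # event requiring the system to respond
    artifact: str          # part of the system characterized in the scenario
    environment: str       # operational context (normal, peak load, ...)
    response: str          # externally visible action taken by the artifact
    response_measure: str  # specific, measurable success criteria


# Illustrative instance: an availability scenario for a search feature
# whose backing database becomes unresponsive.
scenario = QualityAttributeScenario(
    source="RFP database",
    stimulus="database does not respond to a query",
    artifact="search service",
    environment="normal operation",
    response="log the fault and respond with stale data",
    response_measure="response returned within 3 seconds",
)
```

Keeping scenarios in a structured form like this makes it easy to sort them by quality attribute or priority during design discussions.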
Here is an example portability scenario for an interplanetary robotic explorer based on examples from the NASA Jet Propulsion Laboratory [WFD16].
Notice that the raw scenario in our example doesn’t mention specific response measures. Raw scenarios are simple descriptions that form the basis for more precise quality attribute scenarios. We call them raw because they need further cooking to become a good scenario. Think of a raw scenario as the start of a conversation.
Specifying all six parts of a formal quality attribute scenario is not always necessary. You can often get by with a simple statement that includes the stimulus, source, response, and response measure. Add the environment whenever the scenario does not describe a normal environmental context.
Here are some quality attribute scenarios for the Project Lionheart case study:
| Quality Attribute | Scenario | Priority |
|---|---|---|
| Availability | When the RFP database does not respond, Lionheart should log the fault and respond with stale data within 3 seconds. | High |
| Availability | A user searches for open RFPs and receives a list of RFPs 99% of the time on average over the course of the year. | High |
| Scalability | New servers can be added during a planned maintenance window (less than 7 hours). | Low |
| Performance | A user sees search results within 5 seconds when the system is at an average load of 2 searches per second. | High |
| Reliability | Updates to RFPs should be reflected in the application within 24 hours of the change. | Low |
| Availability | A user-initiated update (for example, starring an RFP) is reflected in the system within 5 seconds. | Low |
| Availability | The system can handle a peak load of 100 searches per second with no more than a 10% dip in average response times. | Low |
| Scalability | Data growth is expected to expand at a rate of 5% annually. The system should be able to grow to handle this with minimal effort. | Low |
A good quality attribute scenario communicates the intent of the requirement so anyone can understand it. Great scenarios are precise and measurable. Two people who read the same quality attribute scenario should come away with the same understanding of the system’s scalability or performance or maintainability.
To create a response measure, start by estimating potential values based on your own experience. Use a straw man to kick off a conversation with stakeholders (see Activity 9, Response Measure Straw Man). What if it took nine months to migrate the system to a new microcontroller platform, would that work? How about six months? Eventually, you’ll find a response measure that resonates with stakeholders.
Good response measures are testable. Early in the system’s life, the architecture might exist only on paper, but it’s just a matter of time before you have a running system. If you can’t write a test using your scenario, then the scenario does not have a specific, measurable response measure.
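As a sketch of what "testable" means in practice, the performance scenario above ("search results within 5 seconds at average load") can be expressed as an automated test. The `search_open_rfps` function below is a hypothetical stand-in for the real search endpoint, which this book does not define.

```python
import time


def search_open_rfps(query):
    # Hypothetical stand-in for the real Lionheart search endpoint.
    return ["RFP-101", "RFP-202"]


def test_search_meets_response_measure():
    """Check the response measure: results returned within 5 seconds."""
    start = time.perf_counter()
    results = search_open_rfps("bridge repair")
    elapsed = time.perf_counter() - start
    assert results, "search should return at least one RFP"
    assert elapsed < 5.0, f"took {elapsed:.2f}s, exceeding the 5s response measure"


test_search_meets_response_measure()
```

If a scenario cannot be turned into a check like this, that is a signal its response measure is not yet specific enough.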
During a meeting, Project Lionheart stakeholders shared the following statements. For each statement, identify the quality attribute and create a formal, six-part quality attribute scenario.
- "There’s a small number of users, but when a user submits a question or problem we need to be able to respond quickly, within a business day."
- "Releases happen at least once a month. Ideally, we’ll ship code as it is ready."
- "We need to verify that the RFP index is built correctly. The verification should be automated."
- "We need a new, permanent dev team to come up to speed quickly after the current team of contractors we’ve hired leaves."
Here are some things to think about:
- What quality attribute is suggested by each statement? It’s OK to make up an ility if it helps describe the concern effectively.
- Are there implied responses or response measures?
- What missing information can you fill in based on your own experiences?