The goal of an architecture evaluation workshop is to gather and analyze the data necessary to assess the architecture. By the end of the workshop, we should be in a position to judge how well the architecture satisfies desired quality attributes and other architecturally significant requirements (ASRs).
While there are many ways to run an evaluation workshop, all workshops follow the same basic formula:

1. Prepare by deciding what you want to learn and creating or collecting the artifacts the evaluation requires.
2. Prime the reviewers so they have the context needed to give good feedback.
3. Generate insights by guiding reviewers through one or more evaluation activities.
4. Analyze the results and draw conclusions about the architecture.
5. Decide what to do next and share the findings with participants.

Let’s take a closer look at each of these steps.
As a part of our preparation, we must decide the goals of the evaluation and develop whatever artifacts we need to meet those goals. Here are some examples:
| If you want to evaluate… | You might need artifacts like… |
|---|---|
| How well a specific quality attribute is promoted | Views relevant to the quality attributes of interest, test results, use cases, quality attribute scenarios |
| Technology or pattern options | Technology or pattern descriptions, experiment overviews, experiment results, quality attribute scenarios |
| Likelihood of hitting cost or schedule targets | Component overview, component estimates, technical dependencies, team capacity |
| Design evolution path | Overviews of the current and to-be architecture, list of evolution steps |
| Architecture description completeness or correctness | An architecture description, questions that should be answerable by looking at the description, description quality checklists |
| Security | Abuse cases, misuse cases, threat models, data stores, views needed to identify sensitivity points and attack vectors |
| Release readiness | Quality attribute scenarios, relevant views, release checklists, test results |
In addition to preparing artifacts, we’ll need to create rubrics and decide what data is required to score them. If you plan to run the assessment as a workshop, then prepare the agenda and any materials needed to host the workshop.
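For example, a rubric might pair each criterion with the evidence reviewers need to score it and a threshold for what counts as acceptable. Here is a minimal sketch in Python; the criteria, 1–5 scale, and thresholds are illustrative assumptions, not part of any prescribed method:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One row of an evaluation rubric."""
    name: str            # what we are judging, e.g., a quality attribute scenario
    evidence: str        # data reviewers need before they can score it
    passing_score: int   # minimum acceptable score on a 1-5 scale

# Illustrative rubric for a hypothetical availability-focused evaluation
rubric = [
    Criterion("Failover completes within 30 seconds", "load test results, deployment view", 4),
    Criterion("No single point of failure in the data tier", "deployment view, component diagram", 3),
    Criterion("Operators alerted within 1 minute of an outage", "monitoring design, runbooks", 4),
]

def summarize(scores: dict[str, int]) -> None:
    """Compare reviewer scores against the rubric and flag shortfalls."""
    for criterion in rubric:
        score = scores.get(criterion.name)
        ok = score is not None and score >= criterion.passing_score
        status = "PASS" if ok else "NEEDS ATTENTION"
        print(f"{status}: {criterion.name} (scored {score}, needs {criterion.passing_score})")

# Example: scores gathered from reviewers during the workshop (hypothetical values)
summarize({
    "Failover completes within 30 seconds": 4,
    "No single point of failure in the data tier": 2,
    "Operators alerted within 1 minute of an outage": 5,
})
```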
When selecting reviewers, look for stakeholders and non-stakeholder experts who are detail-oriented and care about the system being designed. Ideal candidates will have relevant domain knowledge or expertise in the technologies and patterns used in the architecture. They will also be prepared to offer an objective assessment. As few as two reviewers can perform an assessment, but it’s possible to involve dozens of reviewers if required.
After the reviewers have committed to participate in the evaluation, we’ll need to prepare them to do a good review.
All reviewers should have the information needed to provide good feedback. Present necessary background information to reviewers, such as the system’s context, architecturally significant requirements, and the artifacts under review. Answer any questions the reviewers have about the context, rubrics, and goals.
A slide deck or whiteboard talk works well for this. When conducting an evaluation within your team, consider creating artifacts and reviewing context together, just-in-time at the start of the workshop.
Once the reviewers are primed and ready, it’s time to perform the assessment and generate some insights.
During the assessment part of the workshop, we’ll generate insights by guiding reviewers through a series of activities designed to illuminate inconsistencies in thinking or highlight potential problem areas. Many activities can be used to generate insights. Here are a few examples, which are described fully in Chapter 17, Activities to Evaluate Design Options:
Many evaluations use some form of scenario walkthrough. This activity is the most basic and reliable architecture evaluation tool.
The Question–Comment–Concern activity is a form of visual brainstorming that helps reviewers quickly surface facts and questions about the architecture.
Risk storming is another form of visual brainstorming, one that focuses exclusively on risks in specific views of the system.
If the goal of the evaluation is to compare and contrast alternatives, we might use the Sketch and Compare activity to pit two or more ways of promoting the same quality attributes against one another.
Code review is not reliable on its own for finding architectural problems, but it can identify misalignment between the detailed design and the architecture. It is also an excellent tool for keeping tabs on static structures as they emerge.
If we’ve recorded ADRs for our system (see Activity 20, Architecture Decision Records), then we can replay the design decisions and decide whether those decisions still hold true. Proposed ADRs can be evaluated for fit as well.
Choose evaluation activities based on what we need to learn, the time available, and the stakeholders’ familiarity with architecture evaluations. A small, experienced evaluation team of only 3–4 people can generate interesting insights with a simple question–comment–concern activity in as little as 60 minutes. A less experienced group might yield better results by walking through scenarios or design decisions explicitly.
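To make the scenario walkthrough concrete, here is a minimal sketch of how walkthrough results might be recorded. The fields follow the common stimulus/response form of a quality attribute scenario; the specific scenario, risks, and questions are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A quality attribute scenario to walk through with reviewers."""
    stimulus: str
    environment: str
    response: str
    response_measure: str

@dataclass
class WalkthroughResult:
    """What reviewers concluded after walking the architecture through one scenario."""
    scenario: Scenario
    satisfied: bool                               # does the design plausibly meet the response measure?
    risks: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

# Illustrative example for a hypothetical web system
result = WalkthroughResult(
    scenario=Scenario(
        stimulus="1,000 users submit orders at the same time",
        environment="normal operation during a holiday sale",
        response="orders are accepted and queued for fulfillment",
        response_measure="95% of orders acknowledged within 2 seconds",
    ),
    satisfied=False,
    risks=["single order database may become a bottleneck under burst load"],
    open_questions=["what is the expected peak order rate?"],
)
```

Recording results this way keeps the reviewers’ verdict, the risks they spotted, and their open questions together, which makes the later analysis step much easier.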
During an evaluation workshop, we want to determine whether the architecture passes our criteria, but this shouldn’t be the only outcome from the workshop. We also want to learn how to improve the architecture’s design, not just that it needs improvement.
No matter what criteria we use during the evaluation, we want a clear and definitive conclusion. Explicitly state how well the architecture stood up to the criteria used to evaluate it and make concrete recommendations for how the architecture can be improved. The conclusions from an architecture evaluation should not be a simple pass or fail.
Whether the architecture is fit for purpose is only half the story. It’s just as important to understand why the design is fit (or not) for purpose. Great designs can always be improved. Even a poor design will get some things right.
Use the insights generated during the evaluation to look for trends and opportunities. To gauge how good the architecture really is, look for risks and open questions. Risks show where the design might allow bad things to happen relative to criteria assessed in the workshop. Open questions shine a light on gaps in communication or knowledge about the architecture.
Use the data from the workshop to take advantage of reviewers’ different perspectives. Share the data and ask reviewers to look for trends. Ask reviewers what worries them. Collect their questions. Even if we know the answer to a question, the fact that someone asked implies there is room to improve communication.
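One simple way to look for trends is to tally the raw questions and concerns by theme. A small sketch of that idea follows; the themes and observations are invented for illustration, not taken from a real workshop:

```python
from collections import Counter

# (theme, observation) pairs captured during the workshop -- illustrative data
observations = [
    ("data model", "Why are orders and invoices stored in the same table?"),
    ("data model", "Unclear who owns the customer schema"),
    ("operations", "No runbook exists for failover"),
    ("data model", "Reporting queries may contend with transactions"),
]

# Count observations per theme; recurring themes hint at where the design
# (or its documentation) needs the most attention.
for theme, count in Counter(theme for theme, _ in observations).most_common():
    print(f"{theme}: {count} observation(s)")
```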
Once we’ve analyzed the data and reached some conclusions, it’s time to decide what to do about it.
We don’t have to address every issue, risk, and open question identified during an evaluation. We won’t have time to fix everything. Prioritize the work that must get done and separate it from the issues that are interesting but not essential. Assign someone to decide what to do about each high-priority item.
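A lightweight way to track this is to record each finding with a priority and an owner, then filter for the items that must be addressed. The findings, priority labels, and names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    priority: str             # "must fix", "should fix", or "interesting"
    owner: str | None = None  # who decides what to do about it

findings = [
    Finding("Order database is a single point of failure", "must fix", "Dana"),
    Finding("Deployment view is out of date", "should fix", "Sam"),
    Finding("Team is curious about switching message brokers", "interesting"),
]

# Separate the work that must get done from the merely interesting.
for finding in (f for f in findings if f.priority == "must fix"):
    print(f"{finding.owner or 'UNASSIGNED'}: {finding.description}")
```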
To close the evaluation workshop, create a summary of the findings and follow-up actions and share the list with all participants. For smaller workshops, a simple email with action items works great. For larger workshops, a brief write-up with links to raw notes ties a nice bow around the evaluation.
The concluding write-up is an excellent way to summarize findings for stakeholders and acts as a visible sign of progress for the architecture. Summaries are an excellent resource both for future architects of this system and for architects who want to run evaluation workshops for a different system.