Activity 33 Observe Behavior

Add instrumentation to the software system so you can see runtime behaviors firsthand. Use the observations to answer specific questions about quality attributes and other stakeholder concerns. Once instrumentation is in place, either observe the system in normal use or inject stimuli to flex specific quality attribute scenarios.

Observing behavior is a great way to analyze runtime quality attributes. The ability to observe the system assumes that observability is designed into the architecture. Evolve the architecture as needed to promote required observability scenarios.
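As a concrete illustration, instrumentation can be as small as a timing wrapper that emits one latency event per operation so the data can be analyzed later. The following Python sketch makes an assumption that a hypothetical handle_checkout operation stands in for the real system; the operation name and log fields are illustrative, not part of any particular system.

import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("instrumentation")

def observe_latency(operation_name):
    """Log how long the wrapped operation takes, in milliseconds."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                # One event per call; metrics are computed later from these events.
                log.info("operation=%s latency_ms=%.1f", operation_name, elapsed_ms)
        return wrapper
    return decorator

@observe_latency("checkout")
def handle_checkout(order_id):
    # Hypothetical business logic standing in for the real system.
    time.sleep(0.05)
    return "order %s processed" % order_id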

Benefits

  • Monitor the system over time to verify design assumptions.
  • Directly test how well quality attributes are promoted.
  • Produce concrete metrics that can be shared with stakeholders.

Activity Timing

Varies depending on the required analysis and how well the software system promotes observability.

Participants

One or more analysts, usually developers of the system.

Preparation and Materials

  • To add instrumentation, you need a working (or at least partially working) software system. Adding instrumentation can sometimes be a design task unto itself: decisions about instrumentation frameworks, data storage, and analysis tooling must be made before observation can begin.

Steps

  1. Define the goals of the analysis. What question are you trying to answer? Use Activity 3, Goal-Question-Metric (GQM) Workshop, to identify candidate metrics and the data required to compute those metrics.

  2. Decide how to generate data and design tests to drive the system (one approach is sketched after these steps).

  3. Add the required instrumentation and logging to the software system. Verify that your changes work before attempting meaningful analysis. You don’t want to spend a week running tests only to learn that your logging failed!

  4. Implement and execute tests, or allow the software system to be used as it normally would.

  5. Once data has been collected, perform the analysis. Compute metrics and answer the questions established in step 1. If you are unable to answer questions, then make adjustments and try again.

  6. Prepare and share findings with relevant stakeholders.
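One way to carry out steps 2 through 5 is a small test driver that injects synthetic stimuli and computes a metric from the observed latencies. This Python sketch is illustrative only: the request count, the 200 ms threshold, and the call_system_under_test stand-in are assumptions, and a real driver would exercise the actual system rather than simulate it.

import random
import statistics
import time

def call_system_under_test():
    # Stand-in for driving the real system (an HTTP request, a queue message, ...).
    # Simulated work keeps the sketch self-contained and runnable.
    time.sleep(random.uniform(0.01, 0.12))

def run_scenario(request_count=200, threshold_ms=200.0):
    """Inject synthetic stimuli (step 4) and compute the metric (step 5)."""
    latencies_ms = []
    for _ in range(request_count):
        start = time.perf_counter()
        call_system_under_test()
        latencies_ms.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile cut point
    print("p95 latency: %.1f ms (scenario threshold: %.0f ms)" % (p95, threshold_ms))
    print("PASS" if p95 <= threshold_ms else "FAIL")

if __name__ == "__main__":
    run_scenario()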

Guidelines and Hints

  • Observability is a quality attribute and must be designed into the architecture. Instrumentation can be added late, even after the system is in the wild, as long as you’ve designed the ability to produce and collect system events into the architecture.

  • As you answer questions about the software system, think about how the data can be used in automated analysis. Consider adding your metrics to system dashboards and alerting systems (a sketch follows these guidelines).

  • In theory, any runtime property of the system can be observed, including security, performance, availability, and reliability, among others.
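For example, if your monitoring stack scrapes Prometheus-style metrics, a sketch like the following exposes a request counter and a latency histogram that a dashboard or alerting rule could consume. It assumes the third-party prometheus_client library; the metric names, port, and handle_checkout operation are illustrative assumptions, not part of any particular system.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Number of checkout requests handled")
LATENCY = Histogram("checkout_latency_seconds", "Checkout request latency in seconds")

def handle_checkout():
    """Hypothetical operation whose behavior we want on a dashboard."""
    REQUESTS.inc()
    with LATENCY.time():  # records the elapsed time into the histogram
        time.sleep(random.uniform(0.01, 0.1))

if __name__ == "__main__":
    start_http_server(9100)  # scrape endpoint for the dashboard/alerting system
    while True:
        handle_checkout()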

Example

Some patterns, such as event sourcing and publish-subscribe, have observability baked in. Observability is a must-have for all modern distributed systems, especially microservices.

Netflix has done extensive work in this area and made much of their work available as open source.[53] One example is the Hystrix Dashboard, which allows developers to observe metrics produced by the Hystrix fault tolerance library for the JVM.[54] Another example is the Simian Army, a suite of tools used to stimulate a service-oriented system in various ways.[55]

In the simplest case, you can use any logging platform to record observed information. Take a look at logging platforms such as LogStash,[56] Splunk,[57] or Graylog[58] for storing, visualizing, and analyzing system events. Keep in mind that though these tools are powerful, their effectiveness depends heavily on how you’ve instrumented the system.
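As a starting point, a structured-logging sketch using only the Python standard library might emit one JSON object per event, which platforms like these can parse and index. The field names (order_id, latency_ms) are illustrative assumptions, not a required schema.

import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge any structured fields passed via the `extra` argument.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("shop")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Each line is a self-describing JSON event a log pipeline can parse and index.
log.info("checkout completed", extra={"fields": {"order_id": 42, "latency_ms": 87}})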
