Chapter 3. Designing Architecture for Continuous Delivery

Now that you have been introduced to the motivations for continuous delivery, you are ready to explore the technical foundations to enable this practice: software architecture. In this chapter, you will learn about the importance of designing systems that are loosely coupled and have high cohesion, and the associated technical and business costs if these guidelines aren’t followed. You will be introduced to the importance of designing effective APIs, how cloud computing has impacted software architecture, and why many Java developers are embracing service-oriented development. The key goal of this chapter is for you to understand how to create and cultivate an architecture that supports continuously delivering Java applications.

Fundamentals of Good Architecture

The Software Engineering Institute (SEI) defines software architecture as “the set of structures needed to reason about the system, which comprises software elements, relations among them, and properties of both.” Although this may at first glance appear quite abstract, the mention of structures, elements, and properties is core to what the majority of software engineers think of as architecture. Taking a slightly different perspective, it is quite possible that you can relate to Martin Fowler’s definition that software architecture consists of the “things that people perceive as hard to change.” Regardless of which definition you prefer, several properties of a software system are fundamental to creating fit-for-purpose architecture.

Loose Coupling

A loosely coupled system is one in which each of its components has, or makes use of, little or no knowledge of the definitions of other separate components. The obvious advantage is that components within a loosely coupled system can be replaced with alternative implementations that provide the same functionality. Loose coupling within programming is often interpreted as encapsulation—or information-hiding—versus nonencapsulation.

Within the Java programming language, this can be seen primarily in two places. First, method signatures that utilize interface types rather than concrete class types make extending applications much easier, because the choice of concrete class is loosely coupled and deferred until runtime. Second, JavaBean or Plain Old Java Object (POJO) getters and setters (accessors and mutators) enable hiding and controlling access to internal state, which gives you much more control when making changes to the internals of the class.
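As a minimal sketch of both ideas (the type names are hypothetical, and each top-level type would normally live in its own source file):

// Callers depend on the interface, not the concrete class, so the
// implementation can be swapped without changing the calling code.
public interface PaymentProcessor {
    String charge(String customerId, long amountInCents); // returns a receipt id
}

public class StripePaymentGateway implements PaymentProcessor {
    @Override
    public String charge(String customerId, long amountInCents) {
        // In a real implementation this would call out to the payment provider.
        return "receipt-" + customerId + "-" + amountInCents;
    }
}

// A POJO that encapsulates its internal state behind accessors, so the
// internal representation can change without breaking callers.
public class CustomerAccount {
    private String emailAddress;

    public String getEmailAddress() {
        return emailAddress;
    }

    public void setEmailAddress(String emailAddress) {
        this.emailAddress = emailAddress;
    }
}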

At the application or service level, loose coupling is typically achieved through well-defined and flexible component interfaces; for example, using REST contracts (e.g., Pact or Spring Cloud Contract) with JSON over HTTP/S; using an interface definition language (IDL) such as gRPC, Thrift, or Avro; or messaging via RabbitMQ or Kafka. An example of tight coupling is Java RMI, where domain objects are exchanged in the native Java serialization format.

High Cohesion

Cohesion refers to the degree to which the elements within a component belong together, and can be thought of within programming as the measure of strength of the relationship between pieces of functionality within a given module or class. Modules with high cohesion tend to be preferable, because high cohesion is associated with several desirable traits of software, including robustness, reliability, reusability, and understandability. In contrast, low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand. A good example in the Java language can be found within the java.util.concurrent package, which contains classes that cohesively offer functions related to concurrency. The classic Java counterexample is the java.util package itself, which contains functions that relate to concurrency, collections, and a scanner for reading text input; these functions are clearly not cohesive.

At the application and service level, the degree of cohesion is often evident from the interface exposed. For example, if a User service exposed functionality related only to working with application Users, such as adding a new User, updating a contact email address, or promoting the User’s customer loyalty tier, it would be highly cohesive. A counterexample would be a User service that also offered functionality to add items to an e-commerce shopping basket, or a payment API that also allowed stock information to be added to the system.
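For illustration, a highly cohesive User service interface might look like the following hedged sketch, in which every operation relates to the User domain (the method names are hypothetical):

// Illustrative only: every operation relates to the User domain and nothing else.
public interface UserService {

    String addUser(String name, String emailAddress); // returns the new user's id

    void updateContactEmail(String userId, String newEmailAddress);

    void promoteLoyaltyTier(String userId, String tierName);

    // Adding an unrelated operation here, such as
    //   void addItemToShoppingBasket(String basketId, String itemId);
    // would reduce cohesion; it belongs in a separate Basket service.
}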

Coupling, Cohesion, and Continuous Delivery

Applications with a loosely coupled and highly cohesive architecture are easier to continuously deliver. Therefore, you should strive to design and evolve systems with this in mind. A good architecture facilitates CD through the following mechanisms:

Design

During the design phase of a new or evolving system, having clear and well-defined interfaces specified throughout the system allows for loose coupling and high cohesion. This, in turn, makes it easier to reason about the system. When given new requirements for a specific area of functionality, a highly cohesive system immediately directs you to where the work should take place, rather than requiring you to trawl through the code of several multifunctional modules with low cohesion. Loose coupling allows you to change design details of an application (perhaps in order to reduce resource consumption) with far less concern about impacting other components within the overall system.

Build, unit, and integration test

A highly cohesive service or module facilitates dependency management (and the associated testing), as the amount of functionality offered is limited in scope. Unit testing, mocking, and stubbing are also much easier in a loosely coupled system, as you can simply swap configurable synthetic test doubles in for the real thing when testing.
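For example, assuming JUnit 4 and a hypothetical PaymentProcessor collaborator like the one sketched earlier, a loosely coupled design lets a test swap in a synthetic test double without any mocking framework (Mockito or a similar library could equally be used):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CheckoutServiceTest {

    // Hypothetical collaborator, mirroring the interface sketched earlier.
    interface PaymentProcessor {
        String charge(String customerId, long amountInCents);
    }

    // The class under test depends only on the interface, not on a concrete gateway.
    static class CheckoutService {
        private final PaymentProcessor paymentProcessor;

        CheckoutService(PaymentProcessor paymentProcessor) {
            this.paymentProcessor = paymentProcessor;
        }

        String checkout(String customerId, long amountInCents) {
            return paymentProcessor.charge(customerId, amountInCents);
        }
    }

    @Test
    public void checkoutDelegatesToThePaymentProcessor() {
        // A configurable synthetic test double is swapped in for the real thing.
        PaymentProcessor stubProcessor = (customerId, amount) -> "stub-receipt";
        CheckoutService checkout = new CheckoutService(stubProcessor);

        assertEquals("stub-receipt", checkout.checkout("customer-42", 500L));
    }
}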

Component test

Components that are highly cohesive lead to easy-to-understand test suites, as the context needed by developers in order to grok the tests and assertions is generally limited. Loose coupling of components allows external dependencies to be easily simulated or virtualized as required.

End-to-end test

Systems that are loosely coupled and highly cohesive are easier to orchestrate when performing end-to-end tests. Highly coupled systems tend to share data sources, which can make the curation of realistic test data fiendishly difficult. When the inevitable issues do occur with end-to-end testing, a highly cohesive system will generally be much easier to diagnose and debug, as functionality is logically grouped in relation to theme.

Deployment

Applications and services that are loosely coupled are generally easy to deploy in a continuous fashion, because each service has little or no knowledge of others. Highly coupled services typically have to be deployed in lockstep (or sequentially) because of the tight integration of functionality, which makes the process time-consuming and error prone. Highly cohesive services typically minimize the number of subsystems that have to be deployed in order to release new functionality, and this results in fewer artifacts being pushed down the pipeline, reducing resource consumption and coordination.

Observability

A cohesive service is easy to observe and comprehend. Imagine you have a service that performs five unrelated tasks, and suddenly your production monitoring tool alerts you to high CPU usage. It will be difficult to understand which functionality is causing the issue. A highly coupled application is often difficult to diagnose when things inevitably go wrong, as failures can cascade throughout the system, which, in turn, obfuscates the underlying causes.

With this foundational guidance in place, let’s now take a look at designing applications that provide business value by using modern architectural themes and practices.

Architecture for Business Agility

If you have ever worked on a large-scale software system that is continually being driven by new functional requirements, you will most likely at some point have bumped into a limiting factor imposed by the system architecture. This is almost inevitable because of the increased focus in the business world on short-term gains over long-term investment, and because of unforeseen changes in both the business and the technological landscape. Software architecture also tends to evolve over time within many companies, with occasional refactoring sprints allocated to teams in order to shore up major issues or, in the worst case, little attention being paid until a “big bang” rewrite is forced. Continuous delivery can be used to monitor and enforce certain architectural properties, but you have to understand the principles of how architecture relates to business value, and then design applications and processes accordingly.

Bad Architecture Limits Business Velocity

If a business has no well-defined architecture to its systems, it is much harder to properly assess the cost of doing something within a well-defined time frame. The “mess” of your architecture creates excessive costs and missed opportunities. This can have serious competitive consequences, as the overhead incurred can increase dramatically, depending on the number of systems and their overall complexity. Developers and architects often find it difficult to convince nontechnical management of these issues. Although empathy must be developed on both sides, one analogy that spans disciplines is building a house from a complete plan that reflects the intended usage, versus starting from an empty plot of land, adding rooms and floors as you go, and watching how the building ends up being used. Just as it would be much easier to design and build a house with the intended usage in mind from day one, the same conclusion can be drawn for designing and building software.

The other hidden cost of a lack of architectural quality is that an inordinate amount of time is spent patching systems rather than innovating. If more time is spent playing software bug whack-a-mole than actually creating new features, you know you have a rotten architecture. Good software architectures encourage the production of bug-free software and guard against the negative consequences of a bug; well-architected systems “contain” the error and, through their structure, provide mechanisms to overcome any problems caused with minimal cost. Good architecture also encourages greater innovation, as it is clearer what needs to be done to support innovation on top of it. In addition, good architecture can itself be a catalyst for innovation: you might see gaps or opportunities that would otherwise have remained hidden.

Although the continual monitoring and analysis of architectural qualities in relation to business value is somewhat orthogonal to the implementation of continuous delivery, the creation of a build pipeline can be an effective way to introduce the capture of relevant metrics. Tools such as SonarQube can be woven into the build process and used to show cohesion, coupling, and complexity hotspots within the code and also report high-level complexity metrics such as cyclomatic complexity (the quantitative measure of the number of linearly independent paths through a program’s source code) and the design structure quality index (DSQI) (an architectural design metric used to evaluate a computer program’s design structure and the efficiency of its modules).

Additional tooling, such as Adam Tornhill’s Code Maat, can also mine and analyze data from version-control systems, and show areas of the codebase that are regularly churning. This can demonstrate to the business that spending time and money on improving the architecture will facilitate understanding within these high-churn areas of the code, and, in turn, provide a high return on investment.

Complexity and Cost of Change

A typical mature or legacy architecture usually consists of a mix of technologies, often based on frameworks that were popular at the time of construction and the experiences of the engineers involved. This is where a lack of structure (or an overly complex structure) in the architecture can greatly impact an organization; instead of having to consider just one technology when making changes, you end up with a forest of interconnected technologies in which no one person or team is the subject-matter expert. Making any change becomes a risky undertaking with significant inherent cost. If you compare this to an organization that has been careful to keep its architectural complexity under control, its costs of change could be a fraction of those experienced by its “complex” competitor.

Some organizations have fine-tuned the management of their technical complexity and architecture to such a point that they are able, with complete confidence, to ship multiple updates live each day. The end result is that the organization with a complex architecture cannot keep pace with its more technically lean rival; this can result in a “death by a thousand paper cuts,” with many delayed and failed features and bug fixes.

A well-defined software architecture assists the management of complexity by showing and tracking the following:

  • The real interdependencies between systems

  • What system holds which data and when

  • The overall technical complexity in terms of operating systems, frameworks, libraries, and programming languages used

Many of these properties can be verified and monitored within a good continuous delivery pipeline.

Best Practices for API-Driven Applications

All software applications expose APIs somewhere within the system, from the internal classes, packages, and modules, to the external systems interface. Since 2008, there has been an increase in APIs being seen as software products themselves: just look at the prevalence of SaaS-based API offerings like Google Maps, Stripe payments, and the Auth0 authentication API. From a programmer’s perspective, an API that is easy to work with must be highly cohesive and loosely coupled, and the same applies for integrating API-based services into the CD pipeline.

Build APIs “Outside-In”

A good API is typically designed outside-in, as this is the best way to meet user requirements without overly exposing internal implementation details. One of the challenges with classical SOA was that APIs were often designed inside-out, which meant that the interface presented “leaked” details on the internal entities and functions provided. This broke the principle of encapsulating data, and, in turn, meant that services integrating with other services were highly coupled as they relied on internal implementation details.

Many teams attempt to define a service API up front, but in reality the design process will be iterative. A useful technique to enable this iterative approach is the BDD technique named The Three Amigos, where any requirement should be defined with at least one developer, one QA specialist, and one project stakeholder present. The typical outputs from this stage of the service design process include a series of BDD-style acceptance tests that assert component-level (single microservice) requirements, such as Cucumber Gherkin syntax acceptance test scripts; and an API specification, such as a Swagger or RAML file, which the test scripts will operate against.

We also recommend that each service has basic (happy path) performance test scripts created (for example, using Gatling or JMeter) and security tests (for example, using bdd-security). These service-level component tests can then be run continuously within the build pipeline, and will validate local microservice functional and nonfunctional requirements. Additional internal resource API endpoints can be added to each service, which can be used to manipulate the internal state for test purposes or to expose metrics.

Good APIs Assist Continuous Testing and Delivery

The benefits to the CD process of exposing application or service functionality via a well-defined API include the following:

  • Easier automation of test fixture setup and teardown via internal resource endpoints (and this limits or removes the need to manipulate state via filesystem or data store access).

  • Easier automation of specification tests (e.g., REST Assured). Triggering functionality through a fragile UI is no longer required for every test.

  • API contracts can be validated automatically, potentially using techniques like consumer contracts and consumer-driven contracts (e.g., Pact-JVM).

  • Dependent services that expose functionality through an API can be efficiently mocked (e.g., WireMock), stubbed (e.g., stubby4j), or virtualized (e.g., Hoverfly), as shown in the sketch after this list.

  • Easier access to metrics and monitoring data via internal resource endpoints (e.g., Codahale Metrics or Spring Boot Actuator).
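As an illustration of the specification-testing and mocking points above, the hedged sketch below uses REST Assured (version 3.x or later assumed) and WireMock on the test classpath; the User service under test is assumed to be running locally on port 8080 during the component-test stage, and the endpoint paths, port, and JSON fields are invented for the example:

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class UserApiSpecificationTest {

    private WireMockServer loyaltyService;

    @Before
    public void startStubbedDependency() {
        // Stub the downstream loyalty service that the User service depends on.
        loyaltyService = new WireMockServer(8089);
        loyaltyService.start();
        loyaltyService.stubFor(get(urlEqualTo("/loyalty/42"))
                .willReturn(aResponse().withStatus(200)
                        .withBody("{\"tier\":\"gold\"}")));
    }

    @After
    public void stopStubbedDependency() {
        loyaltyService.stop();
    }

    @Test
    public void returnsUserWithLoyaltyTier() {
        // Specification test against the User service's API; no fragile UI is required.
        given()
            .baseUri("http://localhost:8080")
        .when()
            .get("/users/42")
        .then()
            .statusCode(200)
            .body("loyaltyTier", equalTo("gold"));
    }
}

The same pattern scales down to the build box: the stubbed dependency starts and stops with the test, so the component test remains deterministic within the pipeline.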

The popularity of APIs has increased dramatically in recent years, and with good reason: embracing good architectural practices around them clearly makes implementing continuous delivery much easier.

Deployment Platforms and Architecture

In 2003, the deployment options for enterprise Java applications were relatively limited, consisting mostly of heavyweight application servers that attempted to provide cross-cutting platform concerns such as application life cycle management, configuration, logging, and transaction management. With the emergence of cloud computing from Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure; platform-as-a-service (PaaS) offerings, such as Heroku, Google App Engine, and Cloud Foundry; and container-as-a-service (CaaS) offerings like Kubernetes, Mesos, and Docker Swarm, there are now many more choices for Java developers. As the underlying deployment fabrics and platforms have changed, so too have the associated architectural best practices.

Designing Cloud-Native “Twelve-Factor” Applications

In early 2012, PaaS pioneer Heroku developed the Twelve-Factor App, a series of rules and guidance for helping developers build cloud-ready PaaS applications that:

  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project

  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments

  • Are suitable for deployment on modern cloud platforms, minimizing the need for servers and systems administration

  • Minimize divergence between development and production, enabling continuous deployment for maximum agility

  • Can scale up without significant changes to tooling, architecture, or development practices

Let’s look briefly at each of the factors now, and see how they map to continuously deploying Java applications:

1. Codebase: one codebase tracked in revision control, many deploys

Each Java application (or service) should be tracked in a single, shared code repository. Deployment configuration files, such as scripts, Dockerfiles, and Jenkinsfiles, should be stored alongside the application code.

2. Dependencies: explicitly declare and isolate dependencies

Dependencies are commonly managed within Java applications by using build tooling such as Maven or Gradle, and OS-level dependencies should be clearly specified in the associated virtual machine (VM) image manifest, Dockerfile, or serverless configuration files.

3. Config: store config in the environment

The Twelve-Factor App guidelines suggest that configuration data should be injected into an application via environment variables. In practice, many Java developers prefer to use configuration files to manage these variables, and there can be potential security issues with exposing secrets via environment variables, particularly when building VMs or containers that contain secrets.

Storing nonsensitive configuration data in a remote service like Spring Cloud Config (backed by Git or Consul) and secrets in a service like HashiCorp’s Vault can be a good compromise between the Twelve-Factor recommendations and current best practices.
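A minimal sketch of the basic environment-variable approach is shown below; the variable name and the local-development default are illustrative assumptions, not prescriptions from the Twelve-Factor guidance:

public final class DatabaseConfig {

    // Read configuration from the environment, per factor 3. The variable name
    // is illustrative; the default is only a convenience for local development.
    public static String databaseUrl() {
        return getEnvOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/app");
    }

    private static String getEnvOrDefault(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }

    private DatabaseConfig() {
    }
}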

4. Backing services: treat backing services as attached resources (typically consumed over the network)

Java developers are accustomed to treating data stores and middleware in this fashion, and in-memory substitutes (e.g., HSQLDB, Apache Qpid, and Stubbed Cassandra) or service virtualization (e.g., Hoverfly and WireMock) can be used for in-process component testing within the build pipeline.
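As a hedged illustration of treating a backing service as an attached resource, the data store below is located purely by configuration, so a pipeline component test can point the same code at an in-memory HSQLDB instance (driver assumed to be on the test classpath) while production points at the real database; the variable names are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class DataSourceFactory {

    // The backing database is an attached resource, located only by configuration;
    // swapping the JDBC_URL swaps the resource without any code change.
    public static Connection connect() throws SQLException {
        String url = System.getenv().getOrDefault("JDBC_URL", "jdbc:hsqldb:mem:componenttests");
        String user = System.getenv().getOrDefault("JDBC_USER", "sa");
        String password = System.getenv().getOrDefault("JDBC_PASSWORD", "");
        return DriverManager.getConnection(url, user, password);
    }

    private DataSourceFactory() {
    }
}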

5. Build, release, run: strictly separate build and run stages

For a compiled language such as Java, this guideline comes as no surprise (and with little choice of implementation). It is worth mentioning that the flexibility provided by VM and container technology means that separate artifacts can be used to build, test, and run the application, each configured as appropriate. For example, a deployment artifact can be created for build and test with a full OS, JDK, and diagnostic tools; and an artifact can be built for running an application in production with only a minimal OS and JRE.

However, we see this as an antipattern, as there should be only one artifact created that is the “single source of truth” pushed along the build pipeline. Using multiple artifacts can easily lead to configuration drift, in which the development and production artifacts have subtly different configurations that can cause issues and make debugging challenging.

6. Processes: execute the app as one or more stateless processes

Building and running a Java application as a series of microservices can be made easier by using VM images, container images, or serverless functions.

7. Port binding: export services via port binding

Java developers are used to exposing application services via ports (e.g., running an application on Jetty or Apache Tomcat).
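As a small sketch of port binding, the example below uses the JDK’s built-in com.sun.net.httpserver.HttpServer for brevity (an embedded Jetty or Tomcat instance follows the same pattern); the PORT variable name is an assumption:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PortBindingExample {

    public static void main(String[] args) throws Exception {
        // Bind to a port supplied by the environment, so the platform decides
        // where the service is exposed; the variable name is illustrative.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}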

8. Concurrency: scale out via the process model

Traditional Java applications typically take the opposite approach to scaling, as the JVM runs as a giant “uberprocess” that is often vertically scaled by adding more heap memory, or horizontally scaled by cloning and load-balancing across multiple running instances. However, the combination of decomposing Java applications into microservices and running these components within VMs, containers, or serverless runtimes can enable this approach to scalability. Regardless of the approach taken to implement scalability, this should be tested within the build pipeline.

9. Disposability: maximize robustness with fast startup and graceful shutdown

This can require a mindset shift for developers who are used to creating traditional long-running Java applications, where much of the expense of application configuration and initialization was front-loaded into the JVM/application startup process. Modern, container-ready applications should utilize more just-in-time (JIT) configuration and ensure that best efforts are made to clean up resources and state during shutdown.
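A minimal sketch of the graceful-shutdown half of this factor, using a JVM shutdown hook (the resources being cleaned up are hypothetical):

public class GracefulShutdownExample {

    public static void main(String[] args) {
        // In a real service the resources might be connection pools, message
        // consumers, or an embedded HTTP server; here they are only illustrative.
        System.out.println("Service started");

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // Best-effort cleanup when the platform stops or reschedules the
            // process (e.g., a SIGTERM from the container orchestrator).
            System.out.println("Draining in-flight requests and closing resources");
        }));
    }
}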

10. Dev/prod parity: keep development, staging, and production as similar as possible

The use of VM or container technology in combination with orchestration technologies like VMware, Kubernetes, and Mesos can make this easier in comparison with traditional bare-metal deployments in which the underlying hardware and OS configuration is often significantly different from that of developer or test machines.

As an application artifact moves through the build pipeline, it should be exposed to increasingly realistic environments (e.g., unit testing can run in-memory on a build box), whereas exploratory and end-to-end testing should be conducted in a production-like environment.

11. Logs: treat logs as event streams

Java has had a long and sometimes arduous relationship with logging frameworks, but modern frameworks such as Logback and Log4j 2 can be configured to stream logs to standard output or to disk.

12. Admin processes: run admin/management tasks as one-off processes

The ability to create simple Java applications that can be run within a container or as a serverless function allows administrative tasks to be run as one-off processes. However, these processes must be tested within (or as part of) the build pipeline.

The principles of the Twelve-Factor App have hinted at designing systems that not only embrace the properties of the underlying deployment fabric, but also actively exploit it. Closely related is a topic known as mechanical sympathy.

Cultivating Mechanical Sympathy

Martin Thompson and Dave Farley have talked about the concept of mechanical sympathy in software development for several years. They were inspired by the Formula One racing driver Jackie Stewart’s famous quote, “You don’t have to be an engineer to be a racing driver, but you do have to have mechanical sympathy.” Understanding how a car works will make you a better driver, and it has been argued that this is analogous to programmers understanding how computer hardware works. You don’t necessarily need a degree in computer science or to be a hardware engineer, but you do need to understand how hardware works and take that into consideration when you design software.

The days of architects sitting in ivory towers and drawing UML diagrams are over. Architects and developers must continue to develop practical and operational experience from working with new technologies. Using PaaS, CaaS, and functions can fundamentally change the way your software interacts with the hardware it runs on. In fact, many modern PaaS and function-based solutions use container technology behind the scenes in order to provide process isolation, and it is beneficial to be aware of these changes:

  • PaaS and container technology can limit access to system resources because of developer/operator specifications, or resource contention.

  • Container technology can (incidentally) expose incorrect resource availability to the JVM (e.g., the number of processor cores typically exposed to a containerized JVM application is based on the underlying host hardware properties, not the restrictions applied to a running container); see the sketch after this list.

  • When running a PaaS, additional layers of abstraction often are applied over the operating system (e.g., orchestration framework, container technology itself, and an additional OS).

  • PaaS and container orchestration and scheduling frameworks often stop, start, and move containers (and applications) much more often in comparison to traditional deployment platforms.

  • The hardware fabric upon which public cloud PaaS and container platform applications are run is typically more ephemeral in nature.

  • Containerized and serverless applications can expose new security attack vectors that must be understood and mitigated.
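The following sketch makes the resource-visibility point from the list above concrete; on older JVMs running inside a CPU- or memory-limited container, these values can reflect the underlying host rather than the container limits (newer JVMs, roughly Java 10 onward, are generally more container-aware):

public class RuntimeResourceCheck {

    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();

        // On older JVMs inside a constrained container, these may report the
        // host's cores and memory rather than the limits applied to the container.
        System.out.println("Available processors: " + runtime.availableProcessors());
        System.out.println("Max heap (MB): " + runtime.maxMemory() / (1024 * 1024));
    }
}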

These changes to the properties of the deployment fabric should not be a surprise to developers, as the use of many new technologies introduces some form of change (e.g., upgrading the JVM version on which an application is running, deploying Java applications within an application container, and running Java applications in the cloud). The vast majority of these potential issues can be mitigated by augmenting the testing processes within the CD build pipeline.

Design and Continually Test for Failure

Cloud computing has provided amazing opportunities for developers; a decade ago, we could only dream of the hardware that can now be spun up at the touch of a button. But the nature of this type of infrastructure has also introduced new challenges. Because of the networked implementation, commodity costing, and scale of modern cloud computing, performance issues and failures within the platform are inevitable.

The vast majority of I/O operations within a cloud-based platform are going over the wire. For example, elastic block storage that can appear local is typically provided by a storage area network (SAN), and the performance characteristics are considerably different. If you develop an application on your local development machine that consists of three chatty services with intensive access to a database, you can be sure that the network performance of the localhost loopback adapter and direct access to an SSD-based block store will be markedly different than the corresponding cloud operations. This can make or break a project.

Most cloud computing infrastructure is ephemeral in nature, and you are also exposed to failure with much more regularity than with on-premises hardware. Combine this with the fact that many of us are designing inherently distributed systems, and you must design systems that tolerate services disappearing or being redeployed. When many developers think of testing this type of failure, the Netflix Simian Army and Chaos Monkeys jump to mind; however, this type of testing is typically conducted in production. When you are developing a CD build pipeline, you also need to implement a limited (but equally valuable) form of this chaos testing, provided in a more controlled and deterministic fashion.
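As one hedged example of deterministic failure simulation within a pipeline, the sketch below uses WireMock (assumed to be on the test classpath) to stand in for a downstream dependency; the endpoint paths, port, delay, and fault type are arbitrary illustrations:

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class DependencyFailureSimulation {

    public static void main(String[] args) {
        WireMockServer stockService = new WireMockServer(8090);
        stockService.start();

        // Simulate a slow downstream dependency...
        stockService.stubFor(get(urlEqualTo("/stock/slow"))
                .willReturn(aResponse().withStatus(200).withFixedDelay(5_000)));

        // ...and one that drops the connection entirely, so that timeout and
        // fallback behavior can be asserted deterministically in the pipeline.
        stockService.stubFor(get(urlEqualTo("/stock/broken"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}

Because the delays and faults are scripted rather than random, the same failure scenario can be replayed on every pipeline run.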

Systems that are designed with loose coupling are typically easier to test, as you can isolate components more readily, and high cohesion helps with the mental effort needed to understand what is happening when fixing bugs. The key takeaway from this section is that a continuous delivery pipeline must allow deployment and testing on a realistic production-like environment as soon as possible, and performance and failure scenarios must be simulated and tested.

The Move Toward Small Services

It would appear that every other software development article published today mentions microservices, so much so that it is often difficult to remember that other architectural styles exist. Behind the popularity of this architecture there are, of course, many benefits to decomposing large and complex applications into smaller, interconnected services. However, there are also several challenges.

Challenges for Delivering Monolithic Applications

Despite what the software development press may say, nothing is inherently wrong with designing and building a monolithic application. It is simply an architectural style, and as with any architectural approach, it has trade-offs. The increase in adoption and rise in popularity of building service-based applications is primarily due to three constraints imposed by working on a single monolithic application:

  • Scaling development efforts on the codebase and system

  • Isolating subsystems for independent deployability

  • Operationally scaling subsystems within the application (independently, elastically, and on demand)

Let’s examine each of these issues in turn, and discuss how this impacts the implementation of continuous delivery.

Scaling development

When working with a monolithic application, all the developers have to “crowd around” the same codebase. This can lead to developers having to develop an understanding of the entire codebase in order to cultivate the appropriate domain context. During implementation, code merge conflicts are almost inevitable, which leads to rework and lost time. If a monolithic codebase is designed and implemented well—for example, embracing the principles of high cohesion, loose coupling, and modularity—then this shouldn’t be a problem. However, the reality with long-running systems is that they are incrementally evolved, and either by accident or on purpose, the modularity breaks down over time.

Extracting modules from a monolithic codebase and building these as independent subsystem services can lead to clearer domain boundaries and interfaces, which, in turn, facilitates the ability of developers to understand the context. The independent nature of these services also facilitates the distribution of labor over the codebase.

Differing change cadence: Independent deployability

An application that is designed as a single artifact has limited options for independent deployability, and this can be a problem if functionality within the application requires differing change cadence. At a basic level, every time a new piece of functionality is developed within the codebase, the entire application must be deployed. If releasing the application is resource-intensive, on-demand resources may not be practical. Worse still is if the application is highly coupled, as this means that a change in a supposed isolated area of the codebase will require intensive testing to ensure that no hidden dependencies have caused regressions.

By dividing the codebase into independently deployable modules or services, you can schedule the release of functionality independently.

Subsystem scalability and elasticity

An application that is run as a single process (or tightly coupled group of processes) has limited options for scaling. Typically, the only approach is to replicate the entire runnable application instance and load-balance requests across the multiple instances. If you design an application as a series of cohesive subsystems that are loosely coupled, you have many more options for scaling. A subsystem that is under high load can be scaled independently from the rest of the application. 

Microservices: SOA Meets Domain-Driven Design

Building small services that follow the Unix single responsibility principle has clear benefits. If you design services and tools that do one thing and do it well, it is easy to compose these systems to provide more-complicated functionality, and it is also easier to deploy and maintain such systems. Large organizations like Netflix, eBay, and Spotify have also talked publicly about how they are building smaller service-based architectures.

The topic of Domain-Driven Design (DDD) is frequently mentioned alongside discussions of microservices, and although the founding work by Eric Evans in this space, Domain-Driven Design: Tackling Complexity in the Heart of Software (Addison-Wesley Professional), was published in 2003, the technique gained traction only when supporting technologies and methodologies converged: the evolution of architectural practices, the emergence of cloud platforms that allowed dynamic provisioning and configuration, and the rise of the DevOps movement, which encouraged more collaboration throughout the building and operation of software.

Building Java-based microservices impacts the implementation of CD in several ways:

  • Multiple build pipelines (or branches within a single pipeline) must be created and managed.

  • Deployment of multiple services to an environment now has to be orchestrated, managed, and tracked.

  • Component testing may now have to mock, stub, or virtualize dependent services.

  • End-to-end testing must now orchestrate multiple services (and associated state) before and after executing tests.

  • Processes must be implemented to manage service version control (e.g., the enforcement of allowing the deployment of only compatible interdependent services).

  • Monitoring, metrics, and APM tooling must be adapted to handle multiple services.

Decomposing an existing monolithic application, or creating a new application that provides functionality through a composite of microservices, is a nontrivial task. Techniques such as context mapping, from DDD, can help developers (working alongside stakeholders and the QA team) understand how application/business functionality should be composed as a series of bounded contexts or focused services.

Functions, Lambdas, and Nanoservices

As stated by Mike Roberts on Martin Fowler’s blog, there is no one clear view of what serverless is, and this is not helped by people talking about it in regard to two different but overlapping areas:

  • Serverless was first used to describe applications that significantly or fully depend on third-party applications or cloud services to manage server-side logic and state. These are typically thick client applications (think single-page web apps or mobile apps) that use the vast ecosystems of cloud-accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc. These types of services have been previously described as backend as a service (BaaS).

  • Serverless can also mean applications for which some amount of server-side logic is still written by an application developer, but unlike traditional architectures, the code is run in stateless compute containers that are event-triggered, ephemeral (may last for only one invocation), and fully managed by a third party. One way to think of this is as functions as a service (FaaS).

This book focuses on the second type of serverless application. The challenges of continuously delivering serverless and FaaS applications are much the same as with microservices, although the lack of access to the underlying platform can present additional challenges when testing nonfunctional requirements.
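As a brief sketch of the FaaS style, the following shows a minimal AWS Lambda handler in Java; the aws-lambda-java-core library is assumed as a dependency, and the event shape is deliberately simplified:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// A stateless, event-triggered function: server provisioning, scaling, and
// life cycle management are handled entirely by the platform.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name;
    }
}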

Architecture: “The Stuff That’s Hard to Change”

Fundamentally, architecture can be thought of as the “stuff that is hard to change.” Getting a software system’s architecture right is a key enabler of continuous delivery. Following the key principles of designing systems with loose coupling and high cohesion facilitates testing and continuous deployment by allowing services to be easily understood, worked with, and validated in isolation before being assembled into the larger system.

Designing APIs outside-in (and with supporting internal APIs) also facilitates continuous testing of functional and nonfunctional requirements. Developers must now be aware of cloud, PaaS, and container runtimes, and the impact this has on continuous delivery. It can be a fundamental benefit, allowing the dynamic provisioning of resources for testing, but it also changes the characteristics of the underlying infrastructure fabric, and this must be continually tested and assumptions validated.

Summary

In this chapter, you have learned about the considerable effect that architecture has on your ability to continuously deliver a software system:

  • The fundamentals of creating an effective and maintainable architecture consist of designing systems that are highly cohesive and loosely coupled.

  • High cohesion and loose coupling affect the entire CD process: at design time, a cohesive system is easier to reason about; when testing, a loosely coupled system allows the easy substitution of mocks to isolate the functionality being verified; modules or services within a loosely coupled system can be deployed in isolation; and a cohesive system is generally a more observable and understandable system.

  • Bad or casually designed architecture limits both technical and business velocity, and will reduce the effectiveness of a CD pipeline.

  • Designing effective APIs that are built outside-in assists with effective testing and CD, as they provide an interface for automation.

  • The architectural principles captured within Heroku’s Twelve-Factor App assist with implementing systems that can be continuously delivered.

  • Cultivating mechanical sympathy (learning about the application platform and deployment fabric, alongside designing for failure) is an essential skill for a modern Java developer.

  • There is a trend within software development to design systems consisting of small and independently deployable (micro)services. Because of high cohesion and loose coupling, these systems lend themselves to being continuously delivered. These systems also require continuous delivery to ensure that both functional and nonfunctional system-level requirements are met, and to avoid an explosion of complexity.

  • Architecture is “the stuff that is hard to change.” Continuous delivery allows you to codify, test, and monitor core system-quality attributes throughout the lifetime of software systems.

Now that you have a good understanding of the principles of architecture, in the next chapter you will learn about how to effectively build and test Java applications that embody these properties.
