You can see that the automated DevOps pipeline is at the center of the onion – the center of the model. The layers surrounding it are successive capabilities with which the team is equipped. Each capability has a strategy, method of execution, and method of measurement. Once equipped with all the capabilities, the process happens in very short cycles, and is expected to accelerate with maturity. We will dive deeper into Onion DevOps Architecture later in this book. To continue defining the professional-grade DevOps environment, it’s interesting to reflect on the current state of DevOps in the industry.
The State of DevOps
Several organizations are performing ongoing research into the advancement of DevOps methods across the industry. Puppet and DORA are two that stand out. Microsoft has sponsored the DORA State of DevOps Report. Sam Guckenheimer is the product owner for all Azure DevOps products at Microsoft and contributed to the report. He also spoke about it in a recent interview on the Azure DevOps Podcast, which can be found at http://azuredevopspodcast.clear-measure.com/sam-guckenheimer-on-testing-data-collection-and-the-state-of-devops-report-episode-003.
A key finding of DORA’s State of DevOps Report was that elite performers take full advantage of automation. From builds to testing to deployments and even security configuration changes, elite performers have a seven times lower change failure rate and over 2,000 times faster time to recover from incidents.
The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win, by Kim, Spafford, and Behr1
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, by Humble and Farley2
The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, by Kim, Humble, and Debois3
If you are just getting into DevOps, don’t be discouraged. The industry is still figuring out what it is too, but there are now plenty of success stories to learn from.
Removing the Ambiguity from DevOps
In the community of large enterprise software organizations, many define DevOps as development and operations coming together and collaborating all the way from development through operations. This is likely the case in many organizations, but I want to propose what DevOps will likely look like when you view this era 20 years from now, from a time when your worldview isn’t colored by the problems of today.
In the 1950s there were no operating systems. Therefore, there was no opportunity for multiple programs to run at the same time on a computer. There was no opportunity for one programmer to have a program that interfered with the program of another. There was no need for this notion of operations. The human who wrote the program also loaded the program. That person also ran the program and evaluated its output.
I believe that the DevOps movement is the correction of a software culture problem that began with the mainframe era. Because multiuser computers, soon to be called servers, became relied upon by an increasing number of people, companies had to ensure that they remained operational. This transformed data processing departments into IT departments. All of the IT assets needed to run smoothly. Groups that sought to change what was running on them became known as developers, although I still call myself a computer programmer. Those responsible for the stable operation of software in production environments became known as operations, a department filled with IT professionals, systems engineers, and so on.
I believe you’re going to look back at the DevOps era and see that it’s not a new thing you’re creating but an undoing of a big, costly mistake over two or three decades. Instead of bringing together two departments so that they work together, you’ll have eliminated these two distinct departments and will have emerged with one type of persona: the software engineer. Smaller companies, by the way, don’t identify with all the talk of development and operations working together because they never made this split in the first place. There are thousands upon thousands of software organizations that have always been responsible for operating what they build. And with the Azure cloud, any infrastructure operation becomes like electricity and telephone service, which companies have always relied on outside parties to provide.
Program manager
Engineer
I believe this type of consolidation will happen all across the industry. Although there’s always room for specialists in very narrow disciplines, software organizations will require the computer programmer to be able to perform all of the tasks necessary to deliver what they envision, from building it through operating it.
A Professional-Grade DevOps Vision
Prioritizing speed causes shortcuts, which causes defects, which causes rework, which depletes speed. Therefore, prioritizing speed achieves neither speed nor quality.
Prioritizing quality reduces defects, which reduces rework, which directs all work capacity to the next feature. Therefore, prioritizing quality achieves speed as well.
Private build
Continuous integration build
Static code analysis
Release candidate versioning and packaging
Environment provisioning and configuration
Minimum of a three-tier deployment pipeline
Production diagnostics managed by development team
Insanely short cycle time through the previous steps
You don’t need an infrastructure like Netflix in order to accomplish this. In fact, you can set this up with a skeleton architecture even before you’ve written your first feature or screen for a new application. And you can retrofit your current software into an environment like this as well. You want to keep in mind the 80/20 rule and gain these new capabilities without adding too much scope or trying to “boil the ocean” in your first iteration.
DevOps Architecture
As I walk through this, I’ll take the stages one at a time.
Version Control
First, you must structure your version control system properly. In today’s world, you’re using Git. Everything that belongs to the application should be stored in source control. That’s the guiding principle. Database schema scripts should be there. PowerShell scripts to configure environments should go there. Documents that outline how to get started developing the application should go there. Once you embrace that principle, you’ll step back and determine what exceptions might apply to your situation. For instance, because Git doesn’t handle differences in binary files very well, you may elect not to store lots and lots of versions of very big Visio files. And if you move to .NET Core, where even the framework has become NuGet packages, you may elect to not store your /packages folder like you might have with .NET Framework applications. But the Git repository is the unit of versioning, so if you have to go back in time to last month, you want to ensure that everything from last month is correct when you pull it from the Git repository.
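The “everything in source control” principle can be sketched as a skeleton repository layout. The directory and file names below are illustrative, not the book’s exact structure; the point is that the build script, schema change scripts, and onboarding docs all live alongside the application code.

```shell
# Skeleton repository layout for the "everything in source control" principle.
# Names here are illustrative placeholders.
mkdir -p src/Core src/UI src/Database/scripts/Update
touch build.ps1                                          # private build script at the repo root
touch src/Database/scripts/Update/001_CreateSchema.sql   # ordered schema change scripts
touch README.md                                          # how to get started developing the app
# Usual exceptions: huge binaries (Git diffs them poorly) and restorable
# package folders such as /packages for .NET Core applications.
ls build.ps1 README.md
```

Cloning this repository at any commit should give a teammate everything needed to build, test, and deploy that version of the system.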
Everything that belongs to the application should be stored in source control. That’s the guiding principle. Database schema scripts should be there.
Private Build
The next step to configure properly is the private build. This should run automated unit tests and component-level integration tests on a local workstation. Only if this private build works properly and passes should you commit and push your changes to the Git server. This private build is the basis of the continuous integration build, so you want it to run in as short a period of time as possible. No more than 10 minutes is widely accepted industry guidance. For new applications that are just getting started, 45 seconds is normal and will show that you’re on the right track. This time should include running two levels of automated test suites: your unit tests and component-level integration tests.
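The private build sequence can be outlined as a script. The book’s build.ps1 is PowerShell; what follows is a portable shell sketch of the same sequence, with illustrative dotnet commands in the comments — substitute your own solution and test projects.

```shell
#!/bin/sh
# Sketch of a private build script. Each step echoes its name so the sequence
# is visible; replace each with the real command for your stack.
set -e                       # any failing step fails the whole build

step() { echo "STEP: $1"; }

step "clean"                 # delete previous build output
step "compile"               # e.g. dotnet build MySolution.sln -c Release
step "unit tests"            # e.g. dotnet test tests/UnitTests
step "integration tests"     # e.g. dotnet test tests/IntegrationTests
echo "PRIVATE BUILD PASSED: safe to commit and push"
```

The `set -e` line encodes the rule from the text: if any step fails, the build stops, and you don’t push until the whole script passes on your workstation.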
Continuous Integration Build
The continuous integration build is often abbreviated “CI build.” This build runs all the steps in the private build, for starters. It runs on a separate server, away from the nuances of configuration on your local developer workstation, with the team-determined configuration necessary for your software application. If it breaks at this stage, a team member knows that they need to back out their change and try again. Some teams have a standard to allow for “I forgot to commit a file” build breaks. In this case, the developer has one shot to commit again and fix the build. If this isn’t achieved immediately, the commit is reverted so that the build works again. There’s no downside to this because in Git, you never actually lose a commit. The developer who broke the build can check out the commit they were last working on and try again once the problem is fixed.
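A CI build like this can be wired up with a small Azure Pipelines YAML file. This is a minimal sketch, assuming a private build script named build.ps1 at the repository root; the branch name and test file pattern are illustrative, and the PublishTestResults task is from the built-in Azure Pipelines task catalog.

```yaml
# azure-pipelines.yml -- minimal CI build sketch (names are illustrative)
trigger:
  - master                       # run on every push to the main branch

pool:
  vmImage: 'windows-latest'

steps:
  - script: .\build.ps1          # run the same script developers run locally
    displayName: 'Private build steps'
  - task: PublishTestResults@2   # surface test outcomes on the build
    inputs:
      testResultsFormat: 'VSTest'
      testResultsFiles: '**/*.trx'
```

Running the identical build script locally and on the server keeps the CI build from drifting away from the private build.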
The continuous integration build is the first centralized quality gate. Capers Jones’ research,4 referenced earlier, also concludes that three quality control techniques can reliably elevate a team’s defect removal efficiency (DRE) up to 95%. The three quality control techniques are testing, static code analysis, and inspections. Inspections are covered later in a discussion of pull requests, but static code analysis should be included in the continuous integration build. Plenty of options exist, and these options integrate with Azure DevOps Services very easily.
Static Code Analysis
Static code analysis is the technique of running an automated analyzer across compiled code or code in source form in order to find defects. These defects could be noncompliance to established standards. These defects could be patterns known in the industry to result in runtime errors. Security defects can also be found by analyzing known patterns of code or the usage of the library versions with published vulnerabilities. Some of the more popular static code analysis tools are
Visual Studio Code Analysis ( https://docs.microsoft.com/en-us/visualstudio/code-quality/code-analysis-for-managed-code-overview?view=vs-2017 )
ReSharper command-line tools ( www.jetbrains.com/resharper/download/index.html#section=resharper-clt )
NDepend ( https://marketplace.visualstudio.com/items?itemName=ndepend.ndependextension )
SonarQube ( https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarqube )
The CI build also runs as many automated tests as possible in 10 minutes. Frequently, all of the unit tests and component-level integration tests can be included. These integration tests are not full-system tests but are tests that validate the functionality of one or two components of the application that require code that calls outside of the .NET AppDomain. Examples are code that relies on round trips to the database or code that pushes data onto a queue or file system. Code that crosses an AppDomain or process boundary is orders of magnitude slower than code that keeps only to the AppDomain memory space. This type of test organization heavily impacts CI build times.
Package Management
Because you’re producing release candidate packages, you need a good place to store them. You could use a file system or the simple artifacts capability of Azure DevOps, but using the rock-solid package management infrastructure of NuGet is the best current method for storing these. This method offers the API surface area for downstream deployments and other tools, like Octopus Deploy.
Azure DevOps Services offers a built-in NuGet server as Azure Artifacts. With your MSDN or Visual Studio Enterprise subscription, you already have the license configuration for this service, and I recommend that you use it. It allows you to use the standard *.nupkg (pronounced nup-keg) package format, which has a place for the name and a version that can be read programmatically by other tools. It also retains release candidates, so they are always available for deployment. And when you need to go back in time for a hotfix deployment or reproduction of a customer issue, you always have every version.
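The version-then-push flow can be sketched as follows. The package name, feed URL placeholder, and version scheme are illustrative, and the dotnet commands are only echoed here so the sequence is visible without side effects.

```shell
# Sketch: version and publish a release candidate package to Azure Artifacts.
MAJOR_MINOR="1.2"
BUILD_COUNTER="${BUILD_COUNTER:-347}"     # supplied by the CI build in practice
VERSION="$MAJOR_MINOR.$BUILD_COUNTER"     # every CI build gets a unique, readable version

# The commands below are echoed rather than run; remove the echo to use them.
echo "dotnet pack src/App/App.csproj -p:PackageVersion=$VERSION -o artifacts"
echo "dotnet nuget push artifacts/App.$VERSION.nupkg --source <your-feed-url> --api-key az"
```

Because the version is stamped into the *.nupkg name and metadata, downstream tools such as Octopus Deploy can select and deploy any retained release candidate by version.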
Test-Driven Development Environment (TDD Environment)
Web UI tests using Selenium
Long-running full-system tests that rely on queues
ADA accessibility tests
Load tests
Endurance tests
Security scanning tests
The TDD environment can be a single instance, or you can create parallel instances in order to run multiple types of test suites at the same time. This is a distinct type of environment, and builds are automatically deployed to this environment type. It’s not meant for humans because it automatically destroys and recreates itself for every successive build, including the SQL Server database and other data stores. This type of environment gives you confidence that you can recreate an environment for your application at any time you need to. That confidence is a big boost when performing disaster recovery planning.
The TDD environment is a distinct type of environment, and builds are automatically deployed to this environment type. It’s not meant for humans.
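The destroy-and-recreate cycle can be sketched with the Azure CLI. The resource group name is a placeholder, the az commands are echoed rather than executed, and the template file name matches the DatabaseARM.json artifact described later in this chapter.

```shell
# Sketch: tear down and rebuild the TDD environment for every build.
ENV_RG="myapp-tdd-rg"                    # placeholder resource group name

echo "az group delete --name $ENV_RG --yes"   # destroy everything, including the database
echo "az group create --name $ENV_RG --location centralus"
echo "az deployment group create --resource-group $ENV_RG --template-file src/Database/DatabaseARM.json"
echo "...then deploy the release candidate, load test data, and run the full-system suites"
```

Because the whole environment is recreated from code on every build, the same scripts double as a rehearsed disaster recovery procedure.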
Manual Test Environment
This is an environment type, not a single environment. Organizations typically have many of these. QA, UAT, and staging are all common names for this environment type, which exists for the manual verification of the release candidate. You provision and deploy to this environment automatically, but you rely on a human to check something and give a report that either the release candidate has issues or that it passed the validations. This type of environment is the first environment available for human testing, and if you need a Demo environment, it would be of this type. It uses a full-size production-like set of data. Note that it should not use production data because doing so likely increases the risk of data breach by exposing sensitive data to an increased pool of personnel. The size and complexity of the data should be similar in scale to production. During deployments of this environment type, data is not reloaded every time, and automated database schema migrations run against the existing database and preserve the data. This configuration ensures that the database deployment process will work against production when deployed there. And because of the nature of this environment’s configuration, it can be appropriate for running some nonfunctional test suites in the background. For instance, it can be useful to run an ongoing set of load tests on this environment as team members are doing their normal manual validation. This can create an anecdotal experience to give the humans involved a sense of whether or not the system feels sluggish from a perception point of view. Finally, this environment type should be configured with similar scale specs as production, including monitoring and alerting. Especially in Azure, it’s quite affordable to scale up the environment just like production because environments can be turned off at a moment’s notice.
The computing resources account for the vast majority of Azure costs; data sets can be preserved for pennies even while the rest of the environment is torn down.
Production Environment
Everyone is familiar with this environment type. It’s the one that’s received all the attention in the past. This environment uses the exact same deployment steps as the manual environment type. Obviously, you preserve all data sets and don’t create them from scratch. The configuration of monitoring alert thresholds will have its own tuning, and alerts from this environment will flow through all communication channels; previous environments wouldn’t have sent out “wake-up call” alerts in the middle of the night if an application component went down. And in this environment, you want to make sure that you’re not doing anything new. You don’t want to do anything for the first time on a release candidate. If your software requires a zero-downtime deployment, the previous environment should have also used this method so that nothing is tested for the first time in production. If an offline job is taken down and transactions need to queue up for a while until that process is back up, a previous environment should include that scenario so that your design for that process has been proven before it runs on production. In short, the deployment to production should be completely boring if all needed capabilities have been tested in upstream environments. That’s the goal.
Deployment to production should be completely boring if all needed capabilities have been tested in upstream environments.
Production Monitoring and Diagnostics
Production monitoring and diagnostics is not an independent state but is a topic that needs to apply to all environments. Monitoring and operating your software in Azure isn’t just a single topic. There is a taxonomy of methods that you need in order to prevent incidents. Recently, Eric Hexter made a presentation on this topic to the Azure DevOps User Group,5 and that video recording can be found at https://youtu.be/6O-17phQMJo . Eric goes through the different types of diagnostics including metrics, centralized logs, error conditions, alerts, and heartbeats.
Tools of the Professional DevOps Environment
Figure 3-4 shows a sample selection of marketplace tools that complement Azure DevOps Services. The Visual Studio and Azure marketplaces offer a tremendous array of capable products, and you’ll want to select and integrate the ones that fit your software architecture. In this configuration, Azure DevOps Services is what developers interact with: committing code from their workstations, making changes to work items, and executing pull requests. The configuration specifies your own virtual machines as build agents in order to speed up the build process. It also uses the Release Hub in Azure DevOps in conjunction with Octopus Deploy as the deployment capability. Although Azure Pipelines is increasing its breadth of support for all kinds of deployment architectures, Octopus Deploy was the original deployment server for the .NET ecosystem, and its support is unparalleled in the industry at the moment. Deployment agents run on the servers that make up each of your environments, and they call back to the deployment server rather than having the deployment server call through the firewall directly into each server. Finally, Stackify serves as the APM tool, collecting logs, telemetry, and metrics from each environment; your developers can then access this information. Obviously, this architecture shows an environment very light on PaaS. Although new applications can easily make heavy use of PaaS, and we encourage it, most readers also have an existing system that would require a great deal of work to free its architecture from VM-based environments. Professional DevOps is not only for greenfield applications. It can be applied to all applications.
Azure DevOps Services
Azure Pipelines: Supports continuous integration builds and automated deployments
Azure Repos: Provides source code hosting for a TFVC repository and any number of Git repositories
Azure Boards: Organizes work and project scope using a combination of backlogs, Kanban boards, and dashboards
Azure Test Plans: Integrates tightly with Azure Boards and Azure Pipelines by providing support for automated and manual full-system testing, along with some very interesting stakeholder feedback tools
Azure Artifacts: Provides the capability to provision your team’s own package feeds using NuGet, Maven, or npm
The independent service that has been receiving lightning-fast adoption since its launch in September 2018 is Azure Pipelines. Especially with the acquisition of GitHub, the experience of setting up a build for a code base stored in GitHub is simple and quick.
Azure Subscription
In order to set up your DevOps environment, you need an Azure subscription. Even if all your servers are in a local data center, Azure DevOps Services runs connected to your Azure subscription, even if only for billing and Azure Active Directory. Using your Visual Studio Enterprise subscription, you also have a monthly budget for trying out Azure features, so you might as well use it.
A subscription that houses the production environment of a system should not also house an environment with lesser security controls. The subscription will only be as secure as its least secure resource group and access control list.
Pre-production environments may be grouped together in a single subscription but placed in separate resource groups.
A single team may own and use many Azure subscriptions, but a single subscription should not be used by multiple teams.
Resource groups should be created and destroyed rather than individual resources within a resource group.
Just because you’re in the cloud doesn’t mean that you can’t accidentally end up with “pet” resource groups; create resources through the Azure portal only in your own personal subscription that you use as a temporary playground. See Randy Bias’s The History of Pets vs. Cattle at http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/ .
Resource groups are good for grouping resources that are created and destroyed together. Resources should not be created through handcrafting. The analogy of pets vs. cattle can be applied to “pet” Azure subscriptions, where things are named and cared for by a person rather than by a process or automated system.
The Azure subscription is a significant boundary. If you are putting your application in Azure, you really want to think about the architecture of your subscriptions and your resource groups. There will never be only one subscription for all your applications.
Visual Studio 2019
You can certainly start with Visual Studio Community, but Visual Studio Enterprise will be what you want to use in a professional DevOps environment. You will need to do more than just write C# code. You’ll need to have a holistic tool set for managing your software. As an example, the industry-leading database automation tool SQL Change Automation from Redgate installs right into Visual Studio Enterprise. This makes it a breeze to create automated database migration scripts. You’ll also want to equip your IDE with some extensions from the Visual Studio marketplace. Of course, ReSharper from JetBrains is a favorite.
A DevOps-Centered Application
Once we have created the environment of tools and practices for our team, we must turn our attention to our application. You likely have many existing applications that will need to be modernized, but we will build up an application throughout this book so that you can see how we apply all the concepts in the real world. We start with architecture and how to structure any .NET application conceptually, regardless of whether it will be the only application in the system or one of many in a microservices-based system.
Using Onion Architecture to Enable DevOps
You’ve seen how the Azure DevOps family of products can enable a professional DevOps environment. You have seen how to use Azure Repos to properly store the source for an application. You’ve made all your work visible using Azure Boards, and you’ve modeled your process for tracking work and building quality into each step by designing quality control checks with every stage. You’ve created a quick cycle of automation using Azure Pipelines so that you have a single build deployed to any number of environments, deploying both application components as well as your database. You’ve packaged your release candidates using Azure Artifacts. And you’ve enabled your stakeholders to test the working software as well as providing exploratory feedback using Azure Test Plans.
Each of these areas has required new versioned artifacts that aren’t necessary if DevOps automation isn’t part of the process. For example, you have a build script. You have Azure ARM templates. You have new PowerShell scripts. Architecturally, you have to determine where these live. What owns these new artifacts?
What is Onion Architecture?
The application is built around an independent object model.
Inner layers define interfaces. Outer layers implement interfaces.
The direction of coupling is toward the center.
All application core code can be compiled and run separately from the infrastructure.
Domain model objects are at the very center. They represent real things in the business world. They should be very rich and sophisticated but should be void of any notions of outside layers.
Commands, queries, and events can be defined around the core domain model. These are often convenient to implement using CQRS patterns.
Domain services and interfaces are often the edge of the core in Onion Architecture. These services and interfaces are aware of all objects and commands that the domain model supports, but they still have no notion of interfacing with humans or storage infrastructure.
The core is the notion that most of the application should exist in a cohesive manner with no knowledge of external interfacing technologies. In Visual Studio, this is represented by a project called “Core.” This project can grow to be quite large, but it remains entirely manageable because no references to infrastructure are allowed. Very strictly, no references to data access or user interface technology are tolerated. The core of the Onion Architecture should be perfectly usable in many different technology stacks and should be mostly portable between technologies such as web applications, Windows applications, and even Xamarin mobile apps. Because the project is free from most dependencies, it can be developed targeting .NET Standard (netstandard2.x).
Human interfaces reside in the layer outside the core. This includes web technology and any UI. It’s a sibling layer to data access, and it can know about the layers toward the center but not code that shares its layer. That is, it can’t reference data access technology. That’s a violation of the Onion Architecture. More specifically, an ASP.NET MVC controller isn’t allowed to directly use a DbContext in a controller action. This would require a direct reference, which is a violation of Onion Architecture.
Data interfaces implement abstract types in the core and are injected via IoC (Inversion of Control) or a provider. Often, code in the data interfacing layer has the capability to handle a query that’s defined in the core. This code depends on SQL Server or ORM types to fulfill the needs of the query and return the appropriate objects.
APIs are yet another interfacing technology that often requires heavy framework dependencies. They call types in the core and expose that functionality to other applications that call them.
Unit tests exercise all the capabilities of the core and do so without allowing the call stack to venture out of the immediate AppDomain. Because of the dependency-free nature of the core, unit tests in Onion Architecture are very fast and cover a surprisingly high percentage of application functionality.
Integration tests and other full-system tests can integrate multiple outer layers for the purpose of exercising the application with its dependencies fully configured. This layer of tests effectively exercises the complete application.
DevOps automation. This code or set of scripts knows about the application as a whole, including its test suites, and orchestrates the application code as well as routines in the test suites that are used to set up configuration or data sets for specific purposes. Code in this layer is responsible for the setup and execution of full-system tests. Full-system tests, on the other hand, know nothing of the environment in which they execute and, therefore, have to be orchestrated in order to run and produce test reports.
The preceding is an update on Onion Architecture and how it has fared over the past decade. The tenets have been proven, and teams all over the world have implemented it. It works well for applications in professional DevOps environments, and the preceding model demonstrates how DevOps assets are organized in relation to the rest of the code.
Implementing Onion Architecture in .NET Core
Pay special attention to the DataAccess assembly. Notice that it depends on the Core assembly rather than the other way around. Too often, transitive dependencies encourage spaghetti code: user interface code references a domain model, and the domain model directly references data access code. With that structure, no abstractions are possible, and only the most disciplined superhuman software engineers have a chance at keeping dependencies from invading the domain model.
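The dependency direction can be expressed as dotnet CLI commands. This is a sketch with illustrative project names; the commands are collected in a variable and echoed rather than executed. Note the direction of every reference: UI and DataAccess point at Core, and nothing points out of Core.

```shell
# Sketch: wiring Onion Architecture project references with the dotnet CLI.
CMDS="dotnet new classlib -o src/Core
dotnet new mvc -o src/UI
dotnet new classlib -o src/DataAccess
dotnet add src/UI reference src/Core
dotnet add src/DataAccess reference src/Core"
echo "$CMDS"
# Deliberately absent: any 'dotnet add src/Core reference ...' line.
# The core must remain free of user interface and data access dependencies.
```

If someone later tries to add a reference from Core outward, the compiler-enforced project graph — not just team discipline — rejects the dependency.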
Integrating DevOps Assets
/build.ps1: Contains your private build script
/src/Database/DatabaseARM.json: Contains the ARM template to create your SQL Server database in Azure
/src/Database/UpdateAzureSQL.ps1: Contains your automated database migrations command
/src/Database/scripts/Update/*.sql: Contains a series of database schema change scripts that run in order to produce the correct database schema on any environment
/src/UI/WebsiteARM.json: Contains the ARM template to create your app service and web site in Azure
For the full source of any of these files, you can find them at the included code link for this article. In a professional DevOps environment, each pre-production and production environment must be created and updated from code. These DevOps assets enable the build and environment automation necessary in a professional DevOps environment.
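The ordered-scripts convention in /src/Database/scripts/Update can be sketched as follows. The file names below are illustrative; because they sort lexically, every environment replays the same scripts in the same order, and a real migration tool additionally records which scripts have already run against a given database so each is applied exactly once.

```shell
# Sketch: apply schema change scripts in their numbered order.
mkdir -p scripts/Update
touch scripts/Update/0001_CreateCustomerTable.sql
touch scripts/Update/0002_AddEmailColumn.sql
touch scripts/Update/0003_CreateOrderTable.sql

# Shell glob expansion is lexical, so the numeric prefixes fix the order.
for script in scripts/Update/*.sql; do
  echo "applying $script"     # e.g. run the script against the target database
done
```

This is why the same migration process can run against the TDD environment’s freshly created database, the manual environment’s preserved data, and ultimately production.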
Need for DevOps
DevOps arose as a response to dysfunction ingrained within the software development life cycle (SDLC), even within teams using agile methodologies. Since the first multiuser mainframes with networked terminals, organizations have struggled with balancing keeping systems running in a stable fashion with continually changing them to meet additional business scenarios. Over the following decades, the industry formalized a division of roles for people who held these responsibilities. The original computer programmers were split into software developers and systems administrators. As an example of this divide, Microsoft flagship technology conferences were (and sometimes still are) split into sessions designed for “developers” and those designed for “IT professionals.” Today, separate job descriptions and even departments exist for each role. Many large companies have consolidated their IT professionals in order to maintain standards, consistency, and cost efficiency as they strive to operate stable, reliable computing systems. They’ve learned along the way that this imperative is inherently in conflict with the goals and objectives of the developers, whose job it is to move fast, change the systems, and provide new capabilities to users. As modern companies use custom software applications to connect directly with their customers, this makes software a part of strategic revenue generation. Accordingly, speed is more important now than ever.
Wrap Up
Now that you are up to speed with the technology that will be leveraged in this book along with the elements of a complete, professional DevOps environment, the next chapter will dive into the beginning of the process in more detail, starting with tracking work in a way that feeds a high-performance DevOps cycle.
Bibliography
Hexter, E. (n.d.). DevOps Diagnostics w/ Eric Hexter (Azure DevOps User Group). Retrieved from www.youtube.com/watch?v=6O-17phQMJo
Humble, J., & Farley, D. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley.
Jones, C. (2016). Exceeding 99% in Defect Removal Efficiency (DRE) for Software. Retrieved from www.ifpug.org/Documents/Toppin99percentDRE2016.pdf
Kim, G., Behr, K., & Spafford, G. (2013). The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. Retrieved February 18, 2019, from https://amazon.com/phoenix-project-devops-helping-business/dp/0988262592
Kim, G., Debois, P., Willis, J., & Humble, J. (2016). The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. Retrieved April 19, 2019, from https://amazon.com/devops-handbook-world-class-reliability-organizations/dp/1942788002
Palermo, J. (n.d.). The Onion Architecture. Retrieved March 21, 2019, from http://jeffreypalermo.com/blog/the-onion-architecture-part-1/