CHAPTER 8

DevOps

In this chapter, you will learn about

•   Application life cycle

•   Ending applications

•   Secure coding

•   DevOps cloud transformation

Many companies have combined the distinct functions of development (programming) and IT (operations) into a single group called DevOps. Instead of one team writing software and another deploying and supporting it, a single team is responsible for the entire life cycle. This has led to improved agility and better responsiveness of DevOps teams to the issues faced by end users because the team works directly with them rather than one department removed. This chapter covers some DevOps functions, including resource monitoring, remote access, and life cycle management.

Organizations need to be able to manage the applications that they put in place, and they need to ensure those applications are developed securely. Life cycle management includes everything from the requirements stage to retirement. Each stage has inputs and outputs that connect the stages to one another. The application life cycle enables an organization to manage each of its service offerings as efficiently and effectively as possible and ensure that each of those services continues to provide value throughout its life cycle.

Application Life Cycle

Just as creatures are born and eventually die, applications are created and eventually retired. This process is called the application life cycle, also known as the software development life cycle (SDLC). The application life cycle consists of five phases: specifications, development, testing, deployment, and maintenance. Figure 8-1 shows the application life cycle.

Images

Figure 8-1  Application life cycle

Images

EXAM TIP   No official version of the SDLC exists. There are several versions, some of which have six, seven, or even ten phases. However, each version still covers the same things, just with more or less granularity. It is important to understand what takes place in the process no matter how many phases are described.

Phase 1: Specifications

The first phase of the application life cycle is the specifications phase. In this phase, the application’s reason for being is documented. Users and stakeholders explain what they would like the application to do and what problem it will solve for them. For applications that have already been developed, this phase is about identifying improvements or bug fixes.

Roadmap

A software development roadmap is a strategic plan for a development project. It describes the goals and objectives of the application. The roadmap should be aligned with the overall business strategy to show how the application will provide business value.

Next, the roadmap lays out the activities that will be performed throughout the application life cycle. Some of the benefits of a roadmap include:

•   Better communication of goals and objectives

•   Clearer understanding among team members of their responsibilities

•   Better communication of the project’s value to stakeholders

Phase 2: Development

In the development phase, project managers turn the specifications identified in the first phase into a model for a working program. They map out the requirements as discrete tasks, with the idea of creating the smallest functional set of code that can be tested and validated. In this way, when bugs or flaws are identified, it is easy to pinpoint where they came from. Those tasks are then assigned to developers or programmers, who write the source code for their segments.

Source code is written in a selected programming language. There are a wide variety of programming languages. Some popular languages include C, Java, Python, Perl, and Ruby. Some languages are only supported on certain platforms, such as Windows systems, web platforms, or mobile platforms. Programming languages also differ in how structured they are. Some have very rigid rules, while others give developers a great degree of freedom. They may also differ in how much error checking they perform.

Source code is made up of modules that contain functions and procedures. Within these functions and procedures, the elements of code are written. These include variables that store a value; classes that define program concepts containing properties and attributes; Boolean operations such as AND, OR, and NOT; looping functions that repeat sections of code; and sorting and searching functions.

Developers write source code in an integrated development environment (IDE) that can check the code’s syntax, similar to how spell checking works in a word processing application. Each developer on a project uses the same language when writing a program together, and cloud development tools allow development teams to easily collaborate on code. AWS has a whole suite of development tools, such as its CodeCommit repository, CodeDeploy deployment suite, and CodePipeline CI/CD system. Similarly, Microsoft has its long-standing Visual Studio IDE, which can be tied to many of its cloud tools, such as Azure DevOps Server, a version control, project management, and CI/CD system.

Builds

The source code developed to accomplish the specifications is known as a build. This includes the set of bug fixes or features identified in the first phase. Only small parts of the code are written at a time to keep everything as simple to manage as possible—the more complex the code, the greater the chance that bugs will pop up. Builds go through a process of compilation and linking to allow the source code to run on a computer.

•   Compilation  A process that transforms the source code (programming statements written in a programming language) into instructions that a computer can understand. These instructions are known as object code. The compilation process is typically platform-specific, meaning that the source code would be transformed differently for different operating systems or versions. The compilation process also performs some basic error checking.

•   Linking  A process that combines the object code with related libraries into a program that can be executed on a computer.
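As a simple illustration of these two steps, the following commands show how a C source file might be compiled into object code and then linked into an executable using the GNU toolchain. The file and program names are hypothetical, and this is only a sketch of the general process:

# Compile the source file into platform-specific object code
gcc -c main.c -o main.o

# Link the object code with the math library to produce an executable program
gcc main.o -o myapp -lm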

Figure 8-2 shows the build process for an ASP.NET Core application in Microsoft Azure. The build process checks for basic errors and, in this case, identified an issue with the namespace name.

Images

Figure 8-2  Building an Azure application

The development phase continues until all the elements required in the specifications have been created in the program. However, it is iterative in that small portions of the feature set may be sent to the testing phase and then return to the development phase until the feature set is completed. Developers do some initial testing on the functions they create to ensure that they perform as they were designed to do. Once enough programming elements are gathered together to create a build, they are passed on to the testing phase.

Builds are organized as either trunks or branches. A trunk is the current stable build, while branches contain new features that have yet to be subjected to a full range of testing or integration with other components. Branches are created from a copy of the trunk code and are merged back into the trunk when they have been sufficiently tested.
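For example, in Git, one widely used version control tool (shown here only as an illustration, with a hypothetical branch name), a branch is created from a copy of the trunk, worked on, and later merged back:

# Create a feature branch from the current trunk (main)
git checkout main
git checkout -b feature/login-page

# Commit development work on the branch
git commit -am "Add login page"

# Merge the tested branch back into the trunk
git checkout main
git merge feature/login-page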

Version Control

In the past, developers kept multiple copies of application code as they worked on a project and might roll back to earlier versions if issues were encountered in testing later on. They might have taken copious notes within the source code to help inform them of its purpose. Still others just relied upon their memory to maintain their code. This may have worked for small development projects with a single developer, but it does not scale to the type of projects worked on today. Furthermore, it made it much harder for developers to maintain software developed by others.

Development teams today often involve many people working in tandem to write pieces of the application. These people may be geographically distributed and need to be able to track and manage source code changes. Software version control (SVC) systems are used to manage code throughout the application life cycle. SVC systems perform the following core tasks:

•   Store software trunks and branches in their various revision states

•   Receive code from multiple developers

•   Track changes made to source code, the developer making the change, and when the change occurred

•   Store related documentation on software versions

Images

NOTE   SVC systems are also known as revision control systems, source code management systems, or code repositories.

Developers are provisioned with accounts on the SVC and then a project for the program source code is created within the SVC. The developers are granted access to that project so that they can publish, or commit, source code to it. There are usually multiple different roles. Some users may be able to post code but not merge branches into trunks, while others may be able to create projects. Each time a change is made to the code, the developers publish the new code to the SVC. Some SVC systems can be used to establish workflow so that newly published code is made available for testing automatically.

Cloud SVC systems may be centralized where the project files and source code remain on a central system or set of systems that each person must access. One example would be the Apache Subversion SVC. In a centralized SVC, source code is published directly to the central repository. On the other hand, distributed SVC systems still have a central repository, but they also keep a local repository on developer machines that developers can use to work on their current branch. This can speed up commit time, and it can allow developers to continue working on tasks even if they do not have a connection to the SVC. However, this can pose a security risk for sensitive development projects, since the code is mirrored to multiple machines that may have different sets of security controls and be subject to potential physical loss or theft.
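A minimal sketch of this distributed workflow, again using Git with a hypothetical repository URL and file name, looks like the following. Work is committed to the local repository first and then pushed to the central repository when a connection is available:

# Clone the central repository to a local repository on the developer's machine
git clone https://example.com/repos/myapp.git
cd myapp

# Commit work locally, even without a connection to the central SVC
git add billing.py
git commit -m "Fix rounding error in invoice totals"

# Publish the local commits to the central repository when connected
git push origin main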

A company may have multiple versions of the same application that will each need to be updated. There may be versions specific to certain industry segments or user groups that differ from other versions but share much of the core functionality. It is important to be able to track the differences in the source code for these application versions so that new changes can be applied to each of the versions that are affected or would benefit from the change.

Phase 3: Testing

In this phase, the build developed in the development phase is subjected to testing to ensure that it performs according to specifications, without errors, bugs, or security issues. Development code should never be ported directly to the production environment because developers may not be aware of bugs or other problems with the build that the testing phase can reveal. Testers may be other developers, or their full job may be in testing builds. Either way, they are members of the DevOps team and an overall part of delivering the program.

Builds are typically deployed to a development environment, where the build is subjected to automated testing and individual testing by users and groups. This occurs for each distinct element that is coded so that errors can be identified and isolated. Most IDEs have built-in syntax checking, variable validation, and autocomplete to help keep the majority of typos from causing problems.

Some development environments exist locally on the developer’s machine. This streamlines the process for deploying source code changes. Local development environments allow for the code to be executed in the development environment with breakpoints and other objects to find errors and monitor the details of program execution.

Once a build is stable in the development environment, it is moved to a staging or quality assurance (QA) environment. This is when quality assurance kicks in: testers go to staging servers and verify that the code works as intended.

The staging environment needs to be provisioned, or the existing environment needs to be made current. This includes making the patch level consistent with the production environment and installing add-ons or configuration changes that have been made in production since the last release. Testers use information from the change management database on recent production changes to obtain a list of changes to be made in the QA environment.

The testing phase should include people involved in the specifications phase to ensure that the specifications were correctly translated into an application. When bugs or issues are identified with the application, developers fix those issues until the application is ready to be deployed into production.

Lastly, procedures should be documented for how the system will be deployed and used. Deployment documentation will be used in phase 4, and user documentation will be used to train users on how to use the program and its new features appropriately. Documentation should also include a brief list of updates that can be distributed to users for reference.

Phase 4: Deployment

In the deployment phase, the application developed and tested in previous phases is installed and configured in production for stakeholders to use. The first step is to ensure that the resources required for the application deployment are available, including compute, memory, and storage resources, as well as personnel. Teams may need to provision or deprovision cloud resources. Only then can the remainder of the deployment be scheduled.

It can be tempting to automate this process. However, deployment needs to be performed by a human and at a time when others are available to troubleshoot if necessary. Automatic deployments can lead to issues, particularly if someone accidentally triggers an automatic deployment to production.

Developers should do a final check of their code in this phase to remove testing elements such as debugger breakpoints and performance hogs such as verbose logging. Then, project managers work with stakeholders and customers to identify an ideal time to do the deployment. They need to select a time that is convenient for the DevOps team and one that does not fall in peak times when customers have urgent need of the application. DevOps teams may be deploying application releases every other week, so they may need to have a regularly scheduled downtime for this. Other options include blue-green deployments, where the new version is deployed to a parallel environment and tested, and then a switch is quickly made to make that environment live with minimal disruption to the user base. See Chapter 9 for more information on deployment methodologies.

It is critical to take a backup or snapshot of the application servers before rolling out the new version. Despite all the testing in the previous phase, problems can still arise when deploying to production, and you will want a known good version to go back to if they do. In such cases, DevOps teams may need to roll back to the snapshot if they cannot fix the deployment issues in a timely manner.
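For example, on AWS, a snapshot of an application server’s data volume could be taken before the rollout with a command like the following. The volume ID and description are hypothetical placeholders:

# Snapshot the application server's data volume before deploying the new release
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Pre-release snapshot before deployment"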

Once the application is deployed, users need to be trained on the new features or changes of the release. This could be as simple as sending out release notes or including links to video tutorials and bundling tooltips into the application, or it could be more involved such as requiring one-on-one training.

Software Release Management Systems

Software release management systems (SRMS) are tools used to manage the deployment process and streamline the workflow of moving releases through environments such as QA, beta, or production. The SRMS serves as the orchestration hub for deployments and can efficiently deploy systems using deployment scripts. Deployments can be easily customized so that an update to code affecting multiple sites with differing configurations can be deployed with a few clicks. Some SRMS tools include CloudBees, the open-source Jenkins (including Jenkins for Google Cloud), and Electric Cloud.

You may also choose to fully or partially automate deployment steps using an SRMS. Deployment can occur based on triggering events in a process known as software delivery automation (SDA). Detailed workflows for managing development life cycle tasks can greatly reduce the time it takes to get new releases out the door. SRMS are sometimes combined with SVC systems so that builds and releases can be tracked and managed in one central tool.
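The deployment scripts that such tools invoke are often simple shell scripts. The following is a minimal sketch of the kind of script an SRMS might run against each target environment; the paths, service name, and artifact naming are hypothetical and would differ in any real deployment:

#!/bin/bash
# deploy.sh -- minimal deployment script an SRMS might invoke per environment
set -e

RELEASE="$1"                       # Release number passed in by the release tool
APP_DIR="/opt/myapp"
ARTIFACT="myapp-${RELEASE}.tar.gz"

# Stop the service, back up the current version, and unpack the new release
systemctl stop myapp
tar -czf "/var/backups/myapp-$(date +%F).tar.gz" -C "$APP_DIR" .
tar -xzf "/tmp/${ARTIFACT}" -C "$APP_DIR"

# Restart the service and verify that it came back up
systemctl start myapp
systemctl is-active --quiet myapp && echo "Deployment of ${RELEASE} succeeded"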

SRMS can deploy releases to a variety of different test or production systems. Some standard categories of software releases have arisen based on the level of testing the release has undergone. These categories include canary, beta, stable, and long-term support (LTS).

Canary

Canary builds get their name from the canaries coal miners used to take down with them into mines. Canaries are more susceptible to harmful gases than people are. Coal miners would observe the canary and listen for its singing. If the canary stopped singing or showed signs of illness, the coal miners would see that as a warning that dangerous gases might be present. This often saved the lives of coal miners. In a similar way, developers release a canary build to customers to see what issues they find with the software so that they can fix them for the next release. The customers of the canary build are used as an extension of the company’s own testing team. There is usually very little gap between development and release of the canary build.

Canary builds usually have quite a few bugs in them, and they are not recommended for production use. Canary builds are quickly replaced with new versions, sometimes daily. Cloud customers do not need to reinstall canary builds because the developers do all the work for them. They just connect to the site and use the application. One example of an SRMS tool that can be used for canary releases is LaunchDarkly. It aids DevOps teams in deploying canary releases and gathering bug information so that issues can be quickly identified and corrected in updated releases for the canary testers.

Customers will often adopt a canary release to try out new features or to generate interest in the latest version. Developers will build in functions to send error information back to the DevOps team so that they can identify, categorize, and prioritize bug fixes for later releases.

Beta

Beta builds, similar to canary builds, are still likely to have some bugs in them, but they are much more stable than the canary release. Beta builds are the last release before the stable build is released. Beta builds are often desired by consumers who want the latest features and are willing to tolerate a few bugs in the process.

Stable

Stable builds are those that have gone through the canary and beta testing process and emerged with their bugs fixed. The company labels a build stable because it has a reasonable level of confidence in the build’s stability, meaning its ability to operate without encountering bugs. However, the true test of stability is time, and that is why there is another type of build called long-term support. The support offered for stable releases is often limited. It is usually less than one year, so companies will need to keep updating to newer stable versions if they want to continue receiving support.

Long-Term Support

LTS builds are those that have been exposed to normal usage as part of a stable build for some time. For this reason, enterprises can have a much higher degree of confidence in the build’s stability, and the software manufacturer offers much longer support agreements for LTS software. These agreements can range from three to five years. Companies that favor stability over features will opt for the LTS software build.

Phase 5: Maintenance

Along the way, after deployment, various patches will be required to keep the application, or the resources it relies upon, functioning correctly. This phase is called the maintenance phase. Here the DevOps team will fix small user-reported bugs or configuration issues, field tickets from users, and measure and tweak performance to keep the application running smoothly.

There is an element of maintenance that involves a micro version of the entire application life cycle because small issues will result in specifications, which will lead to development and testing, and ultimately deployment into the production environment. Some of the tasks listed previously fall into this category. Suffice it to say, this element is somewhat recursive.

Before adding new code, changing code, or performing other maintenance tasks that might affect application availability, the DevOps team works with stakeholders and customers to identify an ideal time to perform the activity. Some deployments may be urgent, such as fixes to address a critical vulnerability that was just discovered or a serious issue that users or customers are experiencing. In these cases, these limitations may be waived due to the criticality of the update.

It is important to note the difference between patches and releases here. Releases offer new features, while patches fix bugs or security problems.

Life Cycle Management

Life cycle management is the process or processes put in place by an organization to assist in the management, coordination, control, delivery, and support of its configuration items from the requirements stage to retirement. ITIL has established a framework for implementing life cycle management, and Microsoft has a framework that is based on ITIL.

ITIL

Information Technology Infrastructure Library (ITIL) provides a framework for implementing life cycle management. ITIL’s model is a continuum consisting of the following five phases:

1.   Service strategy

2.   Service design

3.   Service transition

4.   Service operation

5.   Continual service improvement

Each phase has inputs and outputs that connect the stages to one another, and continual improvement is recognized via multiple trips through the life cycle. Each time through, improvements are documented and then implemented based on feedback from each of the life cycle phases. These improvements enable the organization to execute each of its service offerings as efficiently and effectively as possible and ensure that each of those services provides as much value to its users as possible. Figure 8-3 shows the ITIL model. The three inner processes are cyclical, while the strategy and continuous improvement processes encapsulate them.

Images

Figure 8-3  ITIL life cycle management continuum

Microsoft Operations Framework

Microsoft Operations Framework (MOF) is based on ITIL. MOF has shortened the life cycle to four phases:

1.   Plan

2.   Deliver

3.   Operate

4.   Manage

These phases are usually depicted graphically in a continuum, as we see in Figure 8-4. This continuum represents the cyclical nature of process improvement, with a structured system of inputs and outputs that leads to continual improvement.

Images

Figure 8-4  A representation of the MOF life cycle continuum

Ending Applications

Applications do not last forever. In addition to the software development frameworks, DevOps professionals should understand the functions involved in application replacement, retirement, migration, and changes in feature use.

Application Replacement

Eventually, the application reaches a point where significant changes are required to make it useful. This could be due to shifts in the environment, other business processes, or the underlying technology that the application was built upon. At this point, specifications are drafted for a replacement application or a newer version of the application, and the cycle begins anew.

Old/Current/New Versions

During the application life cycle, you may have multiple versions of an application running simultaneously. These versions include old systems that are no longer used for day-to-day activities, current systems that are in active use, and new versions toward which the company is migrating.

It is not always possible, nor advisable, to simply cut over from one system to another. Users need to be trained on the new system, standard operating procedures may need to be documented, or dependencies may need to be resolved on the client side before the new system can fully take the place of the one it replaces.

Companies often run the current and new versions side by side when replacing the application. Keeping data consistent between these versions can be challenging. In some of the worst cases, users are forced to enter data into two systems during this process. As you can imagine, this is burdensome for users and should be avoided by automating the process of keeping the data consistent between the current and new versions.

The easiest and oftentimes most effective method of keeping data consistent is to use a synchronization tool from the application vendor. However, vendors do not always provide such a tool, so companies are forced to seek a solution elsewhere. This requires configuring scripts or other tools to keep the data synchronized.

Most application data will be contained in a database. However, if your data consists of only files, not transactional database data, you can use file-based or block-based storage replication tools. For more information on these methods, see the sections on storage replication in Chapter 3.

There are various database-specific synchronization tools, but they may or may not be feasible, depending on how the database architecture has changed between the current and new versions. Some of these include transactional replication using a publisher-subscriber model, snapshot replication, log shipping, or merge replication. Another option might be to configure triggers on database fields that will initiate updates on their corresponding fields in the other system. However, this method is dependent upon the database architecture, such as how you are creating primary keys.

Images

CAUTION   Be careful not to create circular triggers when synchronizing database updates. This can occur if you have triggers configured on both sides that automatically initiate upon updates to the table. An update on one side would trigger an update on the other side, which would then trigger another update. This would continue indefinitely, creating a large amount of junk data.

A third option might be to configure a daily job that queries tables from both databases to identify new unique records and then creates an update script for each database to bring it up to date with the changes made to the corresponding database. As you can see, there are quite a variety of ways to keep the data synchronized and many more besides the ones mentioned here. The method you select depends largely on how the current and new systems are designed.

You will also encounter scenarios where old and current systems are running side by side. Companies may keep an old system around when they have not migrated data from that system to a new system. They may choose not to perform this migration due to the costs to migrate or because their need for regular or recurring access to that data is very low. For example, a company may move to a new order tracking system but only move the last 12 months of data from their old solution to the new one due to the cloud provider’s costs for maintaining that data. They may determine that they only need to retrieve data from the old system if there is a specific request for it or for financial auditing purposes, so the old system is kept around just to run reports against it, and the new system is used for day-to-day activities. The primary challenge with this situation is that maintenance must still be performed on the old system so that it is available when needed and kept secure from new security threats.

Application Retirement

All applications eventually reach a point where they no longer provide value to the organization. At this point, the application should be retired. Program procedures and processes are documented, and the resources the application uses are released for some other purpose.

Deprecation

Application components are often retired due to deprecation. Deprecation is a process where functions or APIs are declared unsupported or no longer safe for use. These components are then no longer included in new versions of the software.

Deprecation typically occurs when significant security vulnerabilities are discovered and the company behind the software determines it is better to replace the component with a re-engineered one rather than trying to fix the vulnerabilities. Another reason for deprecation is when new methods of operation are discovered or largely adopted, making the old methods obsolete.

New software versions will replace components that have been deprecated unless those functions are part of some dependency. You will often need to update source code or scripts that reference deprecated functions for your code to work once the new software has been deployed.

Images

NOTE   Be sure to read the release notes for new software versions to identify if components have been deprecated. Many administrators have learned the hard way when they did not read the release notes and then found parts of their systems did not work following the upgrade. It is far better to identify these issues ahead of time and then rewrite the code that relies upon the deprecated functions before installing the new software.

End of Life

All software eventually reaches a point where it is no longer supported or supported in only a limited fashion, often at a much higher cost to consumers. This is known as end of life (EOL). Software that is EOL will no longer have updates and patches released for it, so consumers using that software may be at risk of exploitation of new vulnerabilities identified for that EOL software. For this reason, it is not advisable to continue using such software.

Software manufacturers will issue many notices when an application is nearing the end of life to ensure that customers are aware and to give them the opportunity to purchase newer versions of the software.

Many companies have a formal process for classifying their software as EOL. They communicate this policy to their customers so that EOL announcements do not come as a surprise. For example, Microsoft has some products that are under their fixed life cycle policy. For these products, Microsoft offers mainstream support for five years and then extended support for the next five years. Such products are declared EOL after ten years.

Application Migration

Applications may need to be migrated to different hardware as their usage increases or decreases. The application may need to be migrated as the underlying technology environment changes. For example, the organization may move to more powerful hardware or from one cloud provider to another or from one virtualization platform to a competing platform.

Application Feature Use (Increase/Decrease)

The use of an application will change over time. Application use will be relatively limited as it is introduced because it takes a while for users to adopt a new solution. However, as the value of the program is demonstrated and more people tell others about it, the use will gradually increase. Eventually, application use will plateau and stay at a relatively consistent level for some time before it finally diminishes when other applications or newer processes supplant the application or as business conditions change. New features are a major selling point touted in each new release.

Secure Coding

Whether you are developing a new application or maintaining an existing one, secure coding is one of the most important things to know. Companies spend vast sums of money each year to resolve vulnerabilities discovered in applications. Some of these vulnerabilities could have been avoided if secure coding techniques had been utilized earlier in the project life cycle.

Security is important throughout the SDLC. It should be built in during the specifications and development phases and then verified during testing. Vulnerabilities will be identified in the operational phase, and these should feed back into the development phase to correct the issues.

A good place to start in secure coding is to review the OWASP top ten application security risks. You can find them here: https://owasp.org/www-project-top-ten/. Some of the most common security mistakes take place around authentication and authorization. Authentication is the process of verifying the identity of a user or system, while authorization is the process of giving accounts access to the resources they have a right to access. This section will introduce some of the coding basics of secure authentication and authorization, including

•   Avoid hardcoded passwords

•   Service account best practices

•   Password vaults

•   Key-based authentication

Avoid Hardcoded Passwords

Hardcoded passwords are those that are embedded into the application code. For example, a developer might code a database connection string into the application so that the software can issue queries to the database without having to set up the connection and account during installation or deployment of the application. This simplifies the deployment but comes with a serious security risk.

The issue with this is that the password cannot be customized for a specific deployment or customer installation. Attackers who discover the password in the application through methods such as disassembling the application could then use that password on any other installation, since it is built into the software. The only way to change this password would be for the developers to release a patch.

Sometimes hardcoded passwords are placed into source code by developers for testing, with the intention of removing them later. This is still a bad practice because developers may forget to remove these credentials, leaving the application vulnerable until someone discovers the issue and informs them of it.
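As a simple contrast, the following sketch shows a database query issued without embedding the password in the script. The host, account, and database names are hypothetical; the point is that the password is supplied at deployment time through an environment variable (or retrieved from a vault) rather than hardcoded:

# Bad practice: password embedded directly in the script
# psql "host=db.example.com user=appuser password=SuperSecret123 dbname=orders"

# Better: read the password from an environment variable set at deployment time
export PGPASSWORD="${APP_DB_PASSWORD:?APP_DB_PASSWORD is not set}"
psql "host=db.example.com user=appuser dbname=orders" -c "SELECT count(*) FROM orders;"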

Service Account Best Practices

Service accounts are used by application components to access system or network resources such as files on the local machine or network, databases, or cloud systems. Each function should have clearly defined access requirements, and the service account should be configured with the rights to perform those functions only and nothing else. This is known as least privilege, and it makes it more difficult for a compromised service account to be used to perform malicious actions.

As an example, suppose a service account is used to allow access to a server data drive so that files can be uploaded or downloaded through the application. Permissions on this account should grant access only to the specific folder where these files exist. This prevents an exploit within the application from reaching other directories on the system and sending attackers potentially sensitive information.
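On a Linux server, that kind of scoping might look like the following sketch. The account name and folder path are hypothetical:

# Create a non-interactive service account for the file upload function only
sudo useradd --system --shell /usr/sbin/nologin svc-uploads

# Grant the account access to the upload folder and nothing else
sudo chown -R svc-uploads:svc-uploads /srv/myapp/uploads
sudo chmod -R 750 /srv/myapp/uploads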

Specific access rights for each service account should be well documented. Documenting these rights also makes it easier for security operations to monitor for inappropriate access because they have clearly defined access expectations. It is possible in some cases for attackers to escalate privileges for an account to perform other actions, and that is why it is best to both limit the privileges and monitor for noncompliance.

The best practice for service accounts is to establish an individual service account for discrete access functions rather than creating one service account per application. To demonstrate this practice, Microsoft’s database application, SQL Server, has up to 13 service accounts that must be created, based on the database components the company elects to install on the server. Some of these include the account for running the database service, an account to run the agent service for monitoring jobs and sending alerts, a service for full-text searching, a service to provide database connection name resolution, and a service for backup operations. Consider the functions your application will need to perform and then allow for the specification of service accounts for those functions. If you are deploying the application, create these service accounts following the vendor’s guidelines.

Password Vaults

As you can see, a single application can potentially have many accounts associated with it. These passwords are created when the service accounts are established, and they are needed when the application is configured on the server. Therefore, they must be documented, but the documentation should be secure.

It is far too commonplace for companies to store passwords in an Excel file, Word document, or another nonencrypted file. The best practice is to store passwords in a password vault. A password vault is an encrypted repository where usernames and passwords can be entered. Multiple administrators may need access to the password vault, and they will use a very complex master password to access it and retrieve credentials.

Many password vaults have features that keep copied passwords in the clipboard for only a short period before clearing it so that they are not accidentally pasted into a form or document later or scraped by another application or remote session.

Password vaults can also be used by services to obtain their credentials. Password vaults associated with a cloud service can store keys in their vault and allow applications to invoke the keys by referencing their Uniform Resource Identifier (URI), a unique character string used to represent the key. Keeping the key vault in the same cloud as the services that use it ensures that keys do not leave that cloud, which reduces the threat of key interception over the Internet. Key vaults can generate new keys, or users can import their own existing keys into them.

Let’s look at an example using Azure Key Vault. We can create a secret within Key Vault that will be used by an app. After logging into the Azure Cloud Shell CLI, we create the secret with the command shown next. Please note that this requires a key vault to have already been created. In this example, our key vault is named CloudTest. We will create a secret to be used by an app; its name will be CloudTestSecret, and its value will be Testing4FunKeys. The output from this command is shown in Figure 8-5.
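A likely form of that Azure CLI command, reconstructed from the names used above (verify the exact syntax against the figure and the current Azure documentation), is:

# Create (or update) a secret named CloudTestSecret in the CloudTest key vault
az keyvault secret set --vault-name "CloudTest" --name "CloudTestSecret" --value "Testing4FunKeys"

# An application or administrator can later retrieve the secret by name
az keyvault secret show --vault-name "CloudTest" --name "CloudTestSecret"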

Images

Figure 8-5  Creating a secret in the Azure Key Vault

Images

Cloud key vaults can automatically rotate secrets stored there for associated identities. This reduces the maintenance required for maintaining secure rotating secrets.

Key-Based Authentication

Key-based authentication can be used as an alternative to authenticating using a username and password. This form of authentication uses public key cryptography, where the account has a private key that is used to authenticate it to the system. The private key is paired to a public key that can decrypt data encrypted with the private key and vice versa. Common authentication systems will encrypt a number or a string with the public key and then have the account decrypt the value to confirm its identity. Only the user or service with the private key would be able to accomplish this, so it is an effective way of establishing authentication.

Exercise 8-1: Setting Up SSH Keys for Key-Based Authentication

In this exercise, we will create an SSH key pair for authentication to an Ubuntu Linux server. You will be able to use this to remotely connect to the server.

1.   Start by creating a key pair on your machine. Open the Terminal application and type the following command to create a 2048-bit RSA key:

ssh-keygen -t rsa -b 2048

2.   You will receive the output shown next. Press ENTER to save the SSH key in the default directory, the .ssh subdirectory in your home directory.

Images

3.   Now enter a passphrase. The passphrase will be used to encrypt the key on the local machine. Choose something secure. Type the passphrase and then press ENTER. You will then need to enter the passphrase again and press ENTER a second time to confirm the passphrase. You will then be shown a screen showing you the fingerprint of the key and its randomart image, as shown here:

Images

4.   Now that you have the key, you will need to copy it to the server so that it can be used. Type the following command to display the public key, as shown in the illustration:

cat ~/.ssh/id_rsa.pub

Images

5.   Establish a shell connection with the remote computer.

6.   Place the public key in the authorized_keys file with the following command, substituting the key shown here with the one you obtained from step 4:

Images
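If the illustration is not available, the command typically takes a form like the following, where the public key string is a placeholder for the one you copied in step 4:

# Append the public key from step 4 to the authorized_keys file on the server
mkdir -p ~/.ssh
echo "ssh-rsa AAAAB3...yourPublicKeyHere... user@workstation" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys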

7.   Lastly, assign ownership to the file for the user account that will authenticate with it. In this example, the user is eric, but you will need to substitute your username for it.

chown -R eric:eric ~/.ssh

DevOps Cloud Transformation

DevOps continues to accelerate its pace of development as expectations increase from consumers and internal teams to release new versions more quickly, dynamically scale for increased demand, and efficiently track and resolve bugs or identified security vulnerabilities. DevOps teams must be familiar with the processes and technologies to enable them to work in this rapidly changing environment.

These technologies and processes are remarkably suited for the cloud. The cloud is transforming DevOps into an increasingly agile process that can reduce development timelines, gain pipeline efficiencies, and save money. A key element of this transformation is the concept of continuous integration/continuous delivery (CI/CD). This is supported by technological refinements such as Infrastructure as Code (IaC).

Business Needs Change

Business needs change, sometimes quite rapidly. Business change can be frustrating for application developers who have spent much time putting an application together, hoping that it will have a long shelf life. However, businesses must adapt to their environment and meet continually changing customer needs and wants. Adaptation necessitates changes to applications, tools, systems, and personnel to adjust the technology environment to these new requirements.

Cloud technologies make such changes much easier. Cloud services can be expanded, contracted, or terminated simply by contacting the cloud vendor and requesting the change. Some services may have a contract period, with a penalty for breaking the contract, but many services are flexible.

It is also important to note that personnel changes are not as dramatic when using cloud services. If a company decides to stop using one cloud service and adopt two others, this might result in no changes to personnel. By contrast, if these were on-premises applications, one person might need to go through training to become familiar with one of the new applications, and another person might need to be hired to support the other.

Some business changes that affect cloud professionals include

•   Mergers, acquisitions, and divestitures

•   Cloud service requirement changes

•   Regulatory and legal changes

Mergers, Acquisitions, and Divestitures

When two companies decide to combine their businesses, this is known as a merger. One company can also choose to purchase another in an acquisition. Some companies decide to change direction and sell off a portion of their business through divestiture.

Every company has a unique technology infrastructure that supports its business, and any company that goes through a merger, acquisition, or divestiture has to adapt its infrastructure to meet the requirements of the new corporate form. Physical hardware must be moved from one site to another; possibly rebranded; and adapted to meet the policies, standards, and compliance requirements of the new company.

Cloud services streamline this operation. Cloud accounts can be moved from one company to another and still reside with the same provider. Cloud integrations can be migrated from one system to another if necessary. Cloud migrations are easiest if cloud service integrations are performed using standard APIs.

Cloud Service Requirement Changes

Cloud service providers are continually upgrading their systems to take advantage of new technologies and improvements in processes. Unlike on-premises solutions where new software versions must be deployed by IT staff, cloud updates are deployed by the cloud provider on the organization’s behalf. It is important to stay on top of what changes are made by the cloud provider in case changes are needed on the organizational side as well.

Companies may also request changes to their cloud services. In some cases, a cloud provider might offer a buffet of services, and customers can add or remove these services at will. Other solutions may be highly customized for the customer, in which case negotiation with the provider and adequate lead time are necessary for changes to be made to the environment to meet the customer requirements.

Regulatory and Legal Changes

The regulatory and legal environment is also an area that sees frequent change. When a country promulgates a new regulation or law (or changes an existing one) that affects business practices, companies that do business in that country must adhere to the new or changed law or regulation. Global organizations often have entire teams of people who monitor changes in the global regulatory environment and then work with organizational departments to implement appropriate changes.

Corporate legal teams often work with vendor management or IT to coordinate with cloud providers to ensure that legal and regulatory requirements are met. One way to validate that a cloud provider supports new regulatory requirements is through a vendor risk assessment. This involves sending a document listing each of the requirements to the cloud provider and asking the cloud provider to verify that it has the appropriate controls and measures in place to meet the compliance requirements.

Vendor risk assessments take time to administer and time to complete. It is important to give vendors plenty of time to work through the details. Some vendors will need to implement additional procedures or controls to meet the requirements, and others may decide that they do not want to make changes, forcing your organization to move its business from one cloud provider to another to remain compliant.

Continuous Integration/Continuous Delivery

CI/CD are DevOps processes that aim to improve efficiencies and reduce costs through greater automation of the SDLC. The CI portion is concerned with planning, regularly building, testing, and merging new branches of code into the trunk. The CD portion is concerned with deploying the application to stakeholders or customers, managing the application, and monitoring for issues. When issues are identified, those are fed back into the planning step of the CI process. Figure 8-6 shows the CI/CD processes. The CI tasks begin with planning, then building, testing, and merging. This leads into the CD processes of deployment, maintaining, and monitoring.

Images

Figure 8-6  Continuous integration/continuous delivery processes

Cloud tools such as AWS CodePipeline or Azure Pipelines can automate development tasks such as building, testing, and releasing code. Workflows for the entire set of CI/CD processes are reflected in such tools. Planning is aided with visualization and modeling tools. These tools allow DevOps teams to outline the workflow that will be accomplished at each point in the pipeline. The cloud development platform then guides developers through the workflow and tracks progress so that team leads can manage the project tasks effectively.

Standardization is key to ensuring that each build is tested using the right criteria. Establish your testing workflow, and cloud tools can execute that workflow for each build automatically when new changes are committed to the repository. This may involve human tasks as well, but the tools notify testers or approvers to perform their role as soon as their step in the workflow is reached. This helps reduce delays in the process. Errors and other issues are quickly identified and sent back to developers so that they can fix them while the code is still top of mind.

Infrastructure as Code

Virtualization and the cloud have gone a long way toward abstracting the underlying hardware components of a solution, paving the way for new innovations in software-defined networks, storage, and data centers. These concepts were introduced in Chapter 6, and DevOps utilizes these software-defined processes to take the task of provisioning networking, storage, server, and application resources and automate it through software. IaC automates the provisioning of the resources required for an application using definition files that outline how the systems will be provisioned and configured. IaC is an essential part of continuous deployment.

IaC automation aids in configuring test and production environments similarly to ensure that testing accurately reflects the situations encountered in the production environment. It also aids in deploying many systems quickly or simultaneously. Consider a situation where your team has developed a patch for your application and you need to deploy it for 3,500 customer sites. IaC automation could update each site with minimal effort.

IaC configurations can be stored beside the code. Changes to the configuration are tracked within the code repository right along with the associated code for the build.

Microsoft Azure implements IaC with Azure Resource Manager (ARM) templates, and Amazon has AWS CloudFormation. Both systems use declarative templates, typically written in JavaScript Object Notation (JSON), that describe the configuration logic behind what you want to set up. Configurations can be broken out into modular parts that are shared among similar projects. This makes updating elements simpler because you only need to update them in one place. Templates are provided for commonly configured items so that you do not need to start from scratch.
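As a sketch of how such templates might be deployed from the command line, the following commands use hypothetical resource group, stack, and file names:

# Deploy an ARM template to an Azure resource group
az deployment group create --resource-group myapp-rg --template-file azuredeploy.json

# Deploy a CloudFormation template as an AWS stack
aws cloudformation deploy --template-file template.json --stack-name myapp-stack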

Infrastructure Components and Their Integration

Prior to IaC, infrastructure components ended up with very different custom configurations. The industry calls these snowflakes because each snowflake is unique. Snowflakes are hard to support because each configuration might require different steps or additional coding to address future issues. IaC aids in establishing a standard and applying it each time new systems are deployed. Some of the infrastructure components you should configure with IaC include system storage, virtual machines, virtual networks, security segments and ACLs, and application configuration settings. These components work together to provide the solution. If one of these is misconfigured, the solution is unlikely to work.

Chapter Review

DevOps teams, the combination of software development and IT operations, are responsible for developing, deploying, and maintaining applications. As such, DevOps is responsible for the entire application life cycle. This includes creating the program specifications, development, testing, deployment, and maintenance.

There is a point where the application no longer serves a purpose and will be retired or replaced. At this point, DevOps will remove those applications or create new replacement applications and potentially migrate data to them.

DevOps teams need to be cognizant of security throughout the SDLC so that it is built into the applications they develop, evaluated in testing phases, and monitored in operational phases. Many applications suffer from insecure authentication or authorization functions. Authentication is the process of verifying the identity of a user or system, while authorization is the process of giving accounts access to the resources they have a right to access. The best practice is to never use hardcoded passwords, define service accounts for specific application functions, limit service account permissions to the minimum necessary, document permissions, store passwords in password vaults, and implement key-based authentication when possible.

The cloud has transformed DevOps to make it much more agile. This aids DevOps teams in adjusting to changes quickly. CI/CD processes improve DevOps efficiency through automation of the SDLC, while IaC automates the deployment of applications and application updates. IaC is powerful and can significantly enhance DevOps agility and avoid snowflakes (unique system configurations), but it must be implemented correctly. There are many components to an application that will need to be correctly configured. Some of these components include storage, virtual machines, virtual networks, security segments and ACLs, and application configuration settings.

Questions

The following questions will help you gauge your understanding of the material in this chapter. Read all the answers carefully because there might be more than one correct answer. Choose the best response(s) for each question.

1.   You have just finished development on a new version of your company’s primary application, and you would like to have a small group test the features to identify potential bugs. Which type of release would this be?

A.   Stable

B.   Canary

C.   LTS

D.   Beta

2.   Choose the answer that lists the SDLC phases in the correct order.

A.   Specifications, Testing, Development, Maintenance, Deployment

B.   Development, Specifications, Deployment, Testing, Maintenance

C.   Specifications, Development, Testing, Deployment, Maintenance

D.   Specifications, Development, Deployment, Maintenance, Testing

3.   What is the desired end result of ITIL?

A.   CAB

B.   Continual service improvement

C.   Service strategy

D.   Service operation

4.   Which of the following terms best describes life cycle management?

A.   Baseline

B.   Finite

C.   Linear

D.   Continuum

5.   You are planning to migrate to a new ERP system. You will be running the old and new systems simultaneously for six to nine months and want to ensure that the data is kept consistent between the versions. Which method would not work to accomplish this?

A.   Have users enter the data into both systems

B.   Use a tool from the software vendor to keep the data synchronized

C.   Configure database synchronization jobs

D.   Point both old and new applications to the same database

6.   Which scenario would be the best case in which to hardcode the password into the application?

A.   The username and password will not change

B.   Customers should not know the password

C.   You want to decrease deployment time

D.   None of the above

7.   Which of the following is not a service account best practice?

A.   Configure a service account for each key application.

B.   Restrict services by creating only local accounts on the systems instead of domain accounts.

C.   Limit access rights to only those absolutely necessary for the service’s role.

D.   Monitor service account access based on documented privileges.

8.   Which one of these tasks is not a continuous integration task?

A.   Plan

B.   Build

C.   Merge

D.   Deploy

9.   Which one of these tasks is not a continuous delivery task?

A.   Monitor

B.   Test

C.   Maintain

D.   Deploy

10.   Which technology aids in provisioning the systems and resources required for deploying an application?

A.   Software version control

B.   Infrastructure as code

C.   Compiler

D.   Continuous integration

Answers

1.   B. The canary build is used to test features immediately after they have been developed.

2.   C. The order is Specifications, Development, Testing, Deployment, Maintenance.

3.   B. The end result of each cycle within ITIL is to identify opportunities for improvement that can be incorporated into the service to make it more efficient, effective, and profitable.

4.   D. Life cycle management is a continuum with feedback loops going back into itself to enable better management and continual improvement.

5.   D. You cannot keep the data in sync by pointing both the new and old applications to the same database.

6.   D. You should never hardcode credentials into an application.

7.   B. Creating local accounts instead of domain accounts is not a best practice. This may seem like it is limiting the scope of the account, but local accounts cannot be managed centrally and are more susceptible to tampering.

8.   D. Deploy is a continuous delivery task, not a continuous integration task.

9.   B. Test is a continuous integration task, not a continuous delivery task.

10.   B. Infrastructure as code automates the provisioning of the resources required for an application using definition files that outline how the systems will be provisioned and configured.
