© Andrew Davis 2019
A. DavisMastering Salesforce DevOps https://doi.org/10.1007/978-1-4842-5473-8_9

9. Deploying

Andrew Davis1 
(1)
San Diego, CA, USA
 

Deploying means moving software and configuration between environments. Deployment allows software to be built in a development environment, tested in one or more test environments, and then released to one or more production environments.

In the case of traditional software, deployments send that software to a server such as an EC2 host on AWS. In the case of Salesforce, deployments are changes to the configuration of a Salesforce instance. In both cases, deployments have a reputation for being painful and challenging, and have been one of the driving reasons behind the development of DevOps approaches.

Why are deployments so challenging? First of all, any nontrivial piece of software is complex, includes extensive logic, and varies its behavior based on user input, changing data, and other conditions. Second, it’s very hard to fully encapsulate software, since it depends on the server and network infrastructure on which it runs, and typically interacts with other applications.

The complexity of the application itself is understandable and somewhat unavoidable. But deployment problems related to variations in server and network infrastructure are enormously frustrating for developers. A difference in a proxy setting, database configuration, environment variable, or version of a piece of server software can make the difference between an application running fine and an application that fails to run. Even worse, the application may run but experience strange behavior for some users or occasional performance issues. Replicating such problems can require hours or days of developer time, and resolving the behavior may depend on server-wide changes that cause impacts for other applications as well. If there’s a deep divide between development and operations teams, developers might even adopt a glib attitude that “it worked in dev,” even when the operations team is struggling.

The immense frustration of attempting to debug an application that “worked fine in dev” but doesn’t work in a test or production environment has been a driving force behind the rising use of containers (principally Docker containers) as the execution environment for applications. Containers are lightweight execution environments that can be created quickly, are identical every time they are created, and are isolated from their surrounding environment. Containers guarantee a consistent execution environment, which makes them extremely attractive platforms for deploying and running applications. The ability to define (and update) containers using simple configuration files that can be stored and modified as source code makes them even more valuable.

Because Salesforce abstracts away the underlying server, database, and networking infrastructure, Salesforce developers don’t experience the same infrastructure problems that plague developers creating traditional applications. (The Salesforce employees who actually build Salesforce may experience these pains, however!) Nevertheless, there is an analogous problem faced by Salesforce developers in that they are deploying their application into an Org which has its own customizations and competing applications. It is equally possible for Salesforce developers to experience mysterious problems with their production application that simply never appear in the development environment. It is for these reasons that DevOps, and in particular continuous delivery, is so important for the Salesforce world as well—and why it’s important to gradually bring your entire org and all its applications into a well-defined delivery pipeline that provides you precise visibility into what’s in each org and the differences between them.

At the heart of continuous delivery is automating deployments. People have been doing deployments between Salesforce environments since the platform was created, but there has been a tendency to do ad hoc deployments or manually modify target environments. This chapter introduces the deployment technologies available, including commercial tools, the process of resolving deployment errors, how to set up continuous delivery, and how to manage org differences and multi-org scenarios. We conclude with a discussion on how to analyze dependencies and the risks associated with deployments.

Deployment Technologies

What are the different techniques or technologies available to deploy to a Salesforce instance?

The Underlying Options

There are only four underlying ways of deploying Salesforce metadata: using change sets, using the Metadata API, using packages, and manually recreating the configuration of one environment in another environment.

The change set UI is built into Salesforce and provides a simple graphical interface that can be used to deploy metadata between a production org and its related sandboxes.

The Metadata API provides API-based access to read and update most Salesforce configuration. This API is the tool that is most relevant to the task of continuous delivery. This is also the foundation for all of the third-party release management tools. The Metadata API also includes some limited capabilities for working with change sets.

Using packages as a mechanism for deployment has long been the approach for ISVs to make applications available to customers. There are now several varieties of packaging available on the Salesforce platform, and the use of unlocked packages is a core part of the Salesforce DX workflow, described in detail later.

Manually recreating changes is a fallback that is still surprisingly common. As of this writing, there are still many types of metadata that can’t be deployed in any automated way. Fortunately, most of this “undeployable” metadata relates to minor aspects of configuration that don’t need to change often. Almost all aspects of an org’s configuration can be deployed automatically, but the gaps requiring manual configuration are persistent and frustrating.

Another reason that manually recreating changes across environments is common is lack of developer education on how to automate deployments. Fortunately, this gap is easier for companies to address by themselves, and hopefully this book can help.

A surprising number of Salesforce developers are uneducated about the capabilities of change sets and the Metadata API and may rely on manually recreating configuration to “deploy” functionality that could easily be automated. Even very senior Salesforce developers may be hanging on to the outdated view that “much” or “most” configuration can’t be automatically deployed. The growing number of customers successfully implementing continuous delivery is proof that automated deployments are achievable.

Manual Changes

The Metadata Coverage Report1 and the Unsupported Metadata Types page2 describe the limitations of what Salesforce metadata can be deployed. Salesforce developers should bookmark these pages and use these as the definitive reference for what can and cannot be automatically deployed.

Salesforce has championed an “API first” approach for many years. For example, the Salesforce Lightning Experience that began rolling out in 2015 was built on updates to the Tooling API that allowed Salesforce to query its own APIs from the web browser to retrieve information like lists of picklist values. Those responsible for doing Salesforce deployments, however, have often felt that the promise of “complete metadata coverage” was a mirage that never got any closer.

Despite annual improvements to the Metadata API, the pace of Salesforce development meant that new features were regularly rolling out that could not be automatically deployed. With each release, some of the Metadata API’s backlog would be retired, but new Salesforce capabilities kept being released, and so the backlog grew almost as fast as it was being retired.

As mentioned in the introduction, this entire book deals with the “Salesforce Core” product, and those parts of Salesforce such as Marketing Cloud and Commerce Cloud that were the result of acquisitions require entirely separate processes to manage deployments. Even on the core platform, there have been some notable and massive gaps in the Metadata API. Community Cloud is built on the Core platform and has been a major focus for Salesforce in recent years. Metadata API support for Community customizations is still limited, as of this writing, but that is scheduled to be addressed with the ExperienceBundle metadata type in the Winter ’20 release.

Fortunately, there are now processes in place to ensure that any new capabilities on the Salesforce core platform must be supported by the Metadata API. The Metadata Coverage Report mentioned earlier is generated automatically by the build process that builds Salesforce. And a quality check now ensures that any new capabilities created by product teams at Salesforce must be accessible through this API.

The moral is that teams should continually strive to reduce their reliance on manual “deployments,” but certain edge cases will need to be handled manually for the foreseeable future. For this reason, teams should maintain notes on necessary manual steps in whatever system they use to track work items.

One workaround for automating these manual steps is the use of UI automation to change configuration. Both AutoRABIT and Copado enable users to configure pre- and postdeployment steps using Selenium. In fact, any UI test automation tools that work on Salesforce can be scripted to perform this process. For example, to automate the configuration of Account Teams, your script can navigate to the appropriate place in the Setup UI and confirm that all of the appropriate Account Team Roles are configured, and if any are missing, the script can add them. This kind of scripting requires significantly more work to set up and maintain than automating this behavior declaratively using the Metadata API. In particular, you need to ensure those scripts are idempotent (don’t unintentionally create duplicate functionality if they are run more than once) and guard against unintended behavior. Your scripts are also vulnerable to breaking if Salesforce updates parts of the UI.

Change Sets

Change sets are the only “clicks-not-code” deployment option built into Salesforce. This is the default approach to deployment for most Salesforce admins and many Salesforce developers. Nevertheless, change sets suffer from many limitations and have not been improved much since they were initially introduced.

Change sets are specifically for managing deployments between a production org and its related sandboxes. Change sets require that you first create inbound and outbound deployment connections between the orgs that will be the sources and destinations for each change. For security, these connections need to be configured in each org, so that a particular “Dev” org might have an outbound connection to a “QA” org and that “QA” org might have an inbound connection from “Dev” and outbound connections to “UAT” and “Prod” orgs.

Once these deployment connections have been made, you can build a change set in a source org by selecting the metadata items that you want to be included in that deployment. Change sets provide a very helpful capability to “view dependent metadata.” This means that you can, for example, select a single Lightning Application and then view its dependencies to pull in the related Apex controller and any custom fields that controller might reference. Once built, the change set can be uploaded to its target org.

Once a change set has been uploaded to the target org, you need to log in to that target org to perform a validation of that change set. The validation ensures that there is no missing metadata or other conflicts that would prevent a successful deployment. Once validated, the change set can be deployed into the target org.

It is significant that change sets are not directly deployed to the target org; rather, they are simply uploaded and made available in the target org. The actual deployment needs to be performed by an administrator from inside that target org. This helps to fulfill a compliance requirement of laws such as Sarbanes-Oxley (SOX) that the people responsible for developing applications should not directly have the power to deploy those applications to the target org. This separation of duties is important in theory, but problematic with change sets in that the person deploying them can only see the names of the metadata items contained, and not their details. With both Salesforce and traditional IT applications, approving admins generally lack the time and knowledge necessary for detailed review of what they are installing in the target system. A change set is more or less a black box, and admin approval is more or less a rubber stamp. Compliance requirements are better met by using version control and continuous delivery.

One benefit of requiring admins to trigger the final installation, however, is that the target org can receive many change sets from different developers and install them all at an allotted time after first notifying affected users. This still creates a bottleneck where the developers need to hand off installation responsibilities to a busy admin. If that admin is maintaining multiple sandboxes, this can cause delays when the development teams and end users (or testers) need something installed but that admin is not available.

The main limitation of change sets is that they are tedious to build if you are managing large volumes of changes. Tools like Gearset and Copado provide very nice metadata pickers that allow users to sort and filter metadata by type, name, last modified date, and last modified by. But the change set UI requires you to navigate to each metadata type one by one and select the metadata items to be deployed. If you happen to navigate to the next page without clicking “Add,” your selections are lost. There is no indication in that UI of who last modified an item or when it was last modified, which makes selecting changes a painstaking and error-prone process.

Some companies do not allow change sets to be uploaded to production directly from development, which means the change set must first be uploaded to a testing environment and then manually recreated in that testing environment and uploaded to the production org.

Another limitation of change sets is that they don’t cover many types of metadata. Of the 240 types of Salesforce metadata, change sets support only 53% of them, whereas the Metadata API supports 93% of them. Change sets also don’t support removing metadata from the target org, only adding or updating it.

Finally, change sets can only facilitate deployments between a single production org and its related sandboxes. You cannot use change sets to deploy to multiple production orgs.

These limitations of change sets have been a boon for the creators of commercial deployment tools. The various commercial tools listed in the following sections provide vastly more functionality than change sets, and most of them have far better user interfaces. ClickDeploy’s marketing pitch emphasizes the superiority of their tool to change sets: “Deploy Salesforce 10x faster than change sets. … Deploy metadata types beyond the ones supported by change set. … Know exactly what you are deploying via instant line-by-line diff viewer. … No more tedious, manual rebuild of inbound change sets. Clone & reuse inbound change list in a single click.”3

ClickDeploy has built an easy-to-use alternative to change sets. But at least some of the benefits that they offer—deployment speed, supported metadata types, and line-by-line visibility—are equally true of any tool that is based on the Metadata API.

The Metadata API

The Metadata API performs deployments many times faster than change sets do and also supports a far larger set of metadata. Every tool that supports Salesforce release management is built on the Metadata API, so in theory all of these tools can claim to be faster than change sets and to support more types of metadata. The speed of the Metadata API (how fast metadata can be retrieved and deployed) is the upper limit for all Salesforce release management tools; no tool can operate faster than the Metadata API allows, although some of them are definitely far slower.

The Metadata API also defines the upper limit of which types of metadata a tool can support. If something is not supported by the Metadata API, it is not deployable on Salesforce. But not all third-party tools support all of the metadata types supported by the Metadata API. For example, the now deprecated Force.com IDE based on Eclipse supported only a limited subset of metadata. The most flexible tools use the Metadata API’s “describe” calls to dynamically query a Salesforce org to determine which types of metadata are supported, and then permit all of those types. Tools that have not built in such dynamic logic are likely to always lag behind the Metadata API and to support only a limited subset of metadata.
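
As a rough illustration, these describe and list calls are also exposed through the Salesforce CLI, so you can see for yourself what a given org supports. This is only a sketch: the org alias is a placeholder, and the exact command names and flags (assumed here to be force:mdapi:describemetadata and force:mdapi:listmetadata) vary between CLI versions.
  # List every metadata type the org aliased "MyOrg" supports (a "describe" call)
  sfdx force:mdapi:describemetadata --targetusername MyOrg --json

  # List the individual items of one type, for example all Apex classes
  sfdx force:mdapi:listmetadata --metadatatype ApexClass --targetusername MyOrg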

Org configuration that is not deployable using the Metadata API can only be set manually. However, some tools such as Copado and AutoRABIT have a clever capability whereby they use Selenium automation to dynamically log in and check or change org configuration. Selenium is normally only used for UI testing, but this kind of automation allows org setting changes to be propagated in an automated way.

With Salesforce DX, several new capabilities have been released or are in Pilot that allow changes that otherwise wouldn’t be possible through the Metadata API. Sandbox cloning is a new capability that allows all of the configuration (and data) in a sandbox to be replicated to another sandbox. Scratch org definition files allow developers to define scratch org features that are beyond the scope of the Metadata API. And the forthcoming Org Shape and Scratch Org Snapshots provide capabilities similar to sandbox cloning whereby characteristics of scratch orgs can be defined that are beyond the scope of the Metadata API. All of these capabilities are in the context of provisioning new orgs, however, so they are not actually deployments.

The Metadata API remains the defining mechanism that both provides and limits the capabilities of all other tools. All of the following tools simply provide different user interfaces and different types of metadata storage and processing on top of the Metadata API. Importantly, the Metadata API also allows retrieving and deploying metadata from and to any Salesforce org, as long as you have authorization on that org. This makes it a far more versatile tool than change sets, especially for companies with multiple production orgs.
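
As a minimal sketch of that versatility, the following commands retrieve metadata from one authorized org and deploy it to another using the Salesforce CLI’s Metadata API commands. The org aliases, directory names, and package.xml manifest are placeholders, and flags may differ between CLI versions.
  # Retrieve the metadata listed in package.xml from the source org (alias "DevOrg")
  sfdx force:mdapi:retrieve --retrievetargetdir ./mdapi --unpackaged ./package.xml \
    --targetusername DevOrg --wait 30
  unzip -o ./mdapi/unpackaged.zip -d ./mdapi

  # Deploy that metadata to the target org (alias "QAOrg"), running local tests
  sfdx force:mdapi:deploy --deploydir ./mdapi/unpackaged \
    --targetusername QAOrg --testlevel RunLocalTests --wait 30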

Deploying Using Packages

Packaging means combining discrete collections of code and configuration into a single bundle or package.

Salesforce enables several different types of packages, but they all function in a similar way. Packages allow a developer to specify various metadata items and take a snapshot of them, which is uploaded to Salesforce and made available for installation in other Salesforce orgs using a package installation URL such as https://login.salesforce.com/packaging/installPackage.apexp?p0=04tB0000000O0Ad.

There are currently four types of Salesforce packages: classic unmanaged packages, classic managed packages, second-generation managed packages, and unlocked packages. Of these, this book deals mostly with unlocked packages. Although there are differences in how these package types are created and their characteristics, they all allow you to create package versions, identified by an ID beginning with 04t, that can be installed in a target org using a package installation URL like the one shown earlier.
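
Package versions can also be installed from the command line rather than through the installation URL. Here is a sketch using the Salesforce CLI; the 04t ID is the example ID shown earlier, and the org alias is a placeholder.
  # Install a specific package version into the org aliased "TargetOrg",
  # waiting up to 10 minutes for the install to complete
  sfdx force:package:install --package 04tB0000000O0Ad \
    --targetusername TargetOrg --wait 10 --publishwait 10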

Deploying Using an IDE

All the various Salesforce IDEs use the Metadata API in the background to make metadata available to be created, read, updated, and deleted. Salesforce was originally a clicks-not-code system, with no IDEs available or required for development. With the introduction of Salesforce domain–specific languages such as Apex, Visualforce, Aura, and Lightning Web Components, it became necessary to have a rich development environment.

The Dev Console created by Salesforce only allows for editing code files. But most other IDEs give access to every type of XML-based metadata available in the Metadata API. Because these tools already enable retrieving and deploying this metadata to the development environment, many of them also allow this metadata to be deployed to other environments as well.

One challenge with deployments is that they typically involve many interrelated pieces of metadata. For example, an Apex class may depend on a particular field, and that field may depend in turn on another field or object. IDEs typically work on one code file at a time, so for an IDE to truly support deployments, it needs to provide a metadata picker with which you can select multiple related pieces of metadata and then specify an environment to deploy them to. For this reason, not all IDEs have built useful tools for doing deployments.

Even when IDEs do allow developers to perform deployments directly, there are risks in allowing them to do so. First and foremost, in the absence of version control, it is difficult for an individual developer to ensure that they have the latest version of all affected metadata, and that their deployment won’t overwrite customizations in the target org and cause unintended consequences. Moreover, this behavior does not provide any traceability about what a developer deployed, when, or why. This may violate compliance requirements and certainly makes changes difficult to trace or roll back.

While the mechanism used to deploy from a developer’s IDE is essentially the same as that used by the following scripted methods, this approach is increasingly problematic as teams scale and as the sensitivity of the production environment increases. A last minute deployment from a developer leaving for a camping trip far out of cellphone coverage can lead to an enormously stressful and expensive production outage. The team struggling to understand and recover from this disaster will have no visibility into what exactly changed, making their cleanup process far more painful than it has to be.

Command-Line Scripts

Every operating system makes available a command line—a text-based interface for interacting with the machine and its applications. Generally, everyone finds graphical user interfaces (GUIs) to be easier to understand and interact with than command-line interfaces (CLIs), but nevertheless the use and importance of CLIs persist and have continued to grow. Why is this?

While GUIs provide simplicity and clarity, and allow complex phenomena to be visualized and acted upon, there are tradeoffs in that approach. A CLI allows infinite flexibility in what commands are used, with which parameters, and in which order. Importantly, CLIs also allow commands to be tied together in sophisticated ways using programming languages to define loops, modules, variables, and more. And once written, these commands can be shared and run in an automated fashion. Creating GUIs requires a development team to make decisions about what options to give users and what information to show or hide. CLIs allow individuals and teams to choose what information to access, what actions to take, and what logic should tie these interactions together. In short, they allow the same flexibility as natural languages or programming languages, but can be used to orchestrate processes across multiple applications and multiple systems, making them immensely powerful.

Even if you’re not accustomed to working on the command line, it’s important to recognize that command-line scripts are the go-to tool for classic system administrators and for developers looking to build automation into their workflow. Importantly, CLIs are also the foundation used by DevOps tools for automating testing, analysis, builds, deployments, and reporting.

Introduction to Scripting

If you have rarely used the command line and never written command-line scripts, this process can seem daunting at first. As with anything, the initial steps you take are the hardest. But if you follow clear guides, it’s easy to create a “Hello World” script. From there you can gradually improve that script and build confidence. Before long you can have a robust set of tools to help in all aspects of the software development and delivery process.
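
For example, a “Hello World” Bash script is just two lines. Save it as (say) hello.sh, make it executable with chmod +x hello.sh, and run it with ./hello.sh.
  #!/bin/bash
  echo "Hello World"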

What’s a script? The term script generally refers to small pieces of code where the source code is readily visible and modifiable by its users. It’s a common simplification to divide high-level programming languages into three categories: compiled languages like C++ are compiled into machine code that can be executed directly; interpreted languages like JavaScript, Ruby, Python, and Perl are executed using an interpreter; and scripting languages like Bash and PowerShell are executed using a shell. Most scripts are based on the latter two types of language so that they can be modified and executed on the fly, without having to be compiled.

Windows has two built-in scripting languages: Batch files and PowerShell. Unix and Mac environments typically have many available shells, but the most common is Bash. Even if you and your team are Windows users, it’s important to be aware of what kinds of scripts can run on Linux environments. This is because the world’s servers are predominantly Unix-based,4 and it is increasingly common for CI/CD systems to use Docker containers based on Linux. One reason that Macs have historically been attractive to developers is that they allow the use of Unix commands and scripts. Fortunately, Windows 10 and Server 2019 recently introduced the Windows Subsystem for Linux which finally allows Linux shell scripts to run on Windows machines without installing third-party tools.5

For these reasons, if you’re just starting out on the command line, I would recommend learning to use the Unix-style commands and shells. If you’re on a Windows machine, you’ll first need to install the Windows Subsystem for Linux or use tools like Cygwin or GitBash. But once you’ve done that, you can navigate any other computer in the universe: Windows, Mac, or *nix. Unix command-line syntax and shells provide an enormous array of tools for working with your filesystem and automating your workflow. PowerShell will only ever be useful when working on Windows machines.

To avoid these kinds of platform incompatibilities altogether, and to get the benefit of more sophisticated programming languages, many teams use JavaScript, Python, Perl, or Ruby to write their scripts. These languages each provide interpreters to ensure consistent cross-platform execution.

One beauty of Unix-compatible systems is that you can quickly and easily combine and use scripts written in a variety of different languages. By convention, the first line in Unix scripts specifies which interpreter should be used for that script. For example, I wrote this book in Scrivener and used a set of scripts called Scrivomatic to automatically process the raw files and convert those to DOCX and PDF. The scripts were contributed by different users over time and are written in a mix of Python, Bash, and Ruby. I can run and modify any of those scripts with equal ease, without having to recompile them.

Listing 9-1 shows a sample Ruby script, while Listing 9-2 shows a simple Bash script. Assuming Ruby is installed, both of these scripts are executable as is on any Unix or Mac environment.
  #!/usr/bin/env ruby
  # Read everything from standard input and print it with every "Alpha" replaced by "Beta"
  input = $stdin.read
  puts input.gsub(/Alpha/, 'Beta')
Listing 9-1

A simple Ruby script

  #!/bin/bash
  # Change the working directory to the directory containing this script
  cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
Listing 9-2

A simple Bash script

The first line in these files starts with a “shebang” (the “sharp” character # and the “bang” character !) followed by the shell or interpreter that should be used to interpret the remaining lines. These scripts can both be executed from the command line in the same way, but they will use the appropriate shell or interpreter to run.

Shell scripts are typically just lists of commands, just as you might type on a command line, with the possible addition of some simple variables, loops, conditions, and functions. They are most useful when you are simply combining multiple command-line instructions, with a bit of added logic.
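
As a minimal sketch of those building blocks, the following Bash script defines a function, a variable, a loop, and a condition. The environment names are placeholders.
  #!/bin/bash
  # A function that announces which environment is being processed
  deploy_to() {
    echo "Deploying to $1..."
  }
  ENVIRONMENTS="dev qa uat"
  for ENV in $ENVIRONMENTS; do
    # Skip the uat environment for now
    if [ "$ENV" = "uat" ]; then
      echo "Skipping $ENV"
      continue
    fi
    deploy_to "$ENV"
  done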

Interpreted languages allow for more sophisticated logic, such as importing modules, using data structures like objects and arrays, and using object-oriented principles.

Old School Salesforce Scripting

Salesforce itself is written in Java, which was the most promising up-and-coming programming language when Salesforce began in 1999. These Java roots explain why Salesforce metadata is expressed in XML, and the tools to support the development lifecycle have traditionally been written in Java.

There are two command-line tools that have traditionally been key for the Salesforce development lifecycle: the Ant Migration Tool (aka “Force.com Migration Tool”) and the Salesforce Data Loader.

The Ant Migration Tool allows users to interact with the Metadata API using Ant. Ant is the original Java build tool, released in 2001. At the time, Ant was state of the art and used XML to define “targets” or actions that could be run in a particular order. The Ant Migration Tool is written in Java and allows users to define Ant targets to retrieve or deploy metadata, run tests, and so on. To use this, you first need to install Java, Ant, and the Ant Migration Tool on your local machine and then define a build.xml file that defines the commands you want to run.

Listing 9-3 shows a simple Ant build configuration that defines an Ant target that you can run by executing ant retrieveDev. It depends on the Ant Migration Tool being present in the local directory as lib/ant-salesforce_46.jar and the credentials for the org being stored as a file called build.properties. Storing the Migration Tool (and any other scripts you depend on) in version control is an important way to ensure that those tools are available to everyone on your team and can be upgraded for everyone simultaneously. By contrast, storing credentials in the separate build.properties file allows these to be excluded from version control and instead live only on developers’ machines or be injected by a CI/CD tool.
  <project name="AntClassProject" basedir="." xmlns:sf="antlib:com.salesforce">
     <!-- this taskdef helps locate the ant-salesforce jar in the project -->
     <taskdef
       resource="com/salesforce/antlib.xml"
       classPath="lib/ant-salesforce_46.jar"
       uri="antlib:com.salesforce"/>
     <property file="build.properties" />
     <tstamp>
       <format property="date" pattern="yyyy-MM-dd" />
       <format property="dateTime" pattern="yyyy-MM-dd_kk-mm-ss" />
     </tstamp>
     <property name="projectSource" value='../src' />
     <property name="entireProject" value="${projectSource}/package.xml" />
     <property name="sourceDev" value='${basedir}/source/dev' />
     <property name="logFile" value="${basedir}/log/${dateTime}.txt" />
     <target name="retrieveDev">
       <mkdir dir="log" />
       <record name="${logFile}" action="start"/>
       <echo>Retrieving from Dev...</echo>
       <delete dir="${sourceDev}" />
       <mkdir dir="${sourceDev}" />
       <sf:retrieve username="${dev.username}"
                    password="${dev.password}"
                    serverurl="${dev.serverurl}"
                    retrieveTarget="${sourceDev}"
                    unpackaged="${entireProject}"
                    pollWaitMillis="10000"
                    maxPoll="5000" />
        <record name="${logFile}" action="stop"/>
     </target>
  </project>
Listing 9-3

A simple Ant build.xml configuration file using the Ant Migration Tool
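
As a sketch of the other pieces this listing assumes, the build.properties file holds the credentials referenced by ${dev.username}, ${dev.password}, and ${dev.serverurl}, and the target is run from the directory containing build.xml. The property values shown here are placeholders.
  # Create a build.properties file alongside build.xml (keep this file out of version control)
  printf '%s\n' \
    'dev.username = [email protected]' \
    'dev.password = passwordPlusSecurityToken' \
    'dev.serverurl = https://test.salesforce.com' \
    > build.properties

  # Run the Ant target defined in Listing 9-3
  ant retrieveDev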

Ant scripts constitute “Old School Salesforce Scripting.” If you aren’t using these already, don’t start. First of all, Ant is not the build tool of choice for modern Java developers. Ant was replaced by Maven and now by Gradle as the Java build tool of choice. Maven made it easy to include external modules to help with common tasks, and Gradle made it easy to write very readable build scripts. If you want to do anything outside of executing basic commands, XML is an absolutely terrible language to write in. And it’s generally not very readable.

If you are inheriting existing Ant scripts, you can easily import them into Gradle and then benefit from Gradle’s rich and readable syntax. For example, Listing 9-4 shows a brief Gradle snippet that imports an existing Ant script but then defines dependent tasks in a very readable way. Executing gradle deploy2QA will trigger the Ant targets deployAndDestroyQA and then deployProjectToQA. Before the release of the Salesforce CLI, Gradle was the main language I used for build scripts.
  logging.level = LogLevel.INFO
  ant.importBuild 'ant/build.xml'
  task deploy2QA (dependsOn: ['deployAndDestroyQA', 'deployProjectToQA'])
  task deploy2Full (dependsOn: ['deployAndDestroyFull', 'deployProjectToFull'])
  task deploy2Training (dependsOn: ['deployAndDestroyTraining', 'deployProjectToTraining'])
  task deploy2Prod (dependsOn: ['deployAndDestroyProd', 'deployProjectToProd'])
Listing 9-4

A simple Gradle script that imports existing Ant targets

The Salesforce Data Loader is a frontend for the Bulk API, used to retrieve and load large volumes of Salesforce records. There is a GUI for the Data Loader, but it can also be executed from the command line if you’re on Windows, making it an excellent companion to the Ant Migration Tool.

There are some other tools that have been written to support the Salesforce development lifecycle such as Solenopsis6 and Force-Dev-Tool,7 but they are not as commonly used as the tools mentioned earlier.

If you’re inheriting existing scripts, expect to see the tools I’ve mentioned here. If you’re getting started from scratch, focus on the following tools.

Salesforce CLI

The Salesforce CLI is one of the flagship innovations of Salesforce DX. The Salesforce CLI is a unified wrapper around the Salesforce APIs that adds sophisticated capabilities for managing the Salesforce software development lifecycle. Its capabilities are extensive and growing, but here are some of the most notable (a few of them are illustrated in the sketch after this list):
  • All the new capabilities of Salesforce DX are available through this tool; no new capabilities are being added to the Ant Migration Tool, which is scheduled for deprecation this year.

  • It securely manages credentials for all the orgs you need to access.

  • It provides concise commands for creating and managing scratch orgs, packages, and projects.

  • It automatically converts metadata from the native Metadata API format to the more usable “Source format.”

  • It tracks the metadata in target orgs to allow quick synchronization of changes between source and the org.

  • It provides convenient commands to execute queries, anonymous Apex, data loads, and more.

  • It provides access to the Bulk API for data retrieval and loading.

  • It supports the development of plugins.

  • It allows for command output to be formatted as JSON which makes it easier to parse and chain commands.
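
A minimal sketch of a few of these capabilities in action follows. The aliases, file paths, and query are placeholders, and flags vary somewhat between CLI versions.
  # Authorize a Dev Hub and create a scratch org from a definition file
  sfdx force:auth:web:login --setdefaultdevhubusername --setalias MyDevHub
  sfdx force:org:create --definitionfile config/project-scratch-def.json \
    --setalias MyScratchOrg --setdefaultusername

  # Push local source to the scratch org, then run a query with JSON output
  sfdx force:source:push
  sfdx force:data:soql:query --query "SELECT Id, Name FROM Account LIMIT 5" --json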

The Salesforce CLI is now based on a generic CLI engine called OCLIF, the Open CLI Framework, which itself is based on the Heroku CLI. OCLIF is still relatively new, but it provides a mechanism to build custom CLI tools in Node.js that can support plugins and auto-updating, among other capabilities.

There are many reasons why Node.js makes a compelling foundation for writing the Salesforce CLI. First, JavaScript is now the dominant language used by both professional and amateur developers8; Node.js allows you to write backend code such as web servers and CLIs using JavaScript. Second, the Node package manager (NPM) provides the world’s largest collection of reusable software modules. Finally, JavaScript is already familiar to Salesforce developers who build Lightning Components or client-side JavaScript. VS Code and its extensions are also written in JavaScript (technically, TypeScript), which allows developers to use the same tools and libraries for both.

Creating Salesforce CLI Plugins

The Salesforce CLI allows you to build or install plugins that contribute new functionalities and take advantage of the many capabilities that the CLI offers. As Salesforce did in so many other areas, they have made the CLI into a platform that allows teams to build custom tools and lets ISVs and open source contributors build and share powerful add-on capabilities.

From the beginning, the Salesforce CLI was designed with plugins in mind. The standard Salesforce commands all exist in the force:... namespace to ensure that plugins could offer commands like sfdx acme:org:list without interfering with standard commands like sfdx force:org:list.

Salesforce now offers an official Salesforce CLI Plug-In Developer Guide9 that provides instructions on how to build plugins. OCLIF, mentioned earlier, provides a generic foundation for building CLI tools that handles much of the complexity associated with building a command-line toolbelt. OCLIF enables capabilities like accepting parameters, auto-updating, and more. Salesforce CLI plugins go further by giving developers access to many of the same libraries, parameters, and data used in the Salesforce CLI itself.

Plugins are developed in JavaScript or TypeScript and can make use of NPM libraries like @salesforce/core and @salesforce/command to handle org authentication and other actions. The Salesforce CLI handles parameters, logging, JSON output, and most of the other “boilerplate” activities, so you can focus on building the commands you need.
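
While plugin development itself happens in JavaScript or TypeScript, installing and managing plugins is done from the command line. A quick sketch follows; the plugin name here is hypothetical.
  # List the plugins currently installed in the Salesforce CLI
  sfdx plugins

  # Install a plugin that has been published to npm (hypothetical plugin name)
  sfdx plugins:install @acme/sfdx-acme-plugin

  # While developing a plugin locally, link it into the CLI from its project folder
  sfdx plugins:link .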

This is a growing area of development. One of the most promising capabilities is the possibility of creating “hooks”10 into standard Salesforce commands. While not possible as of this writing, hooks would allow a plugin to execute code before or after standard Salesforce CLI commands are run. Imagine running a command sfdx force:org:create to create a scratch org and having a plugin automatically notify your project management tool that you now have a new environment. The possibilities are vast, and Salesforce is working on the foundation to enable secure, signed plugins that can be distributed and executed in a trusted fashion.

Free Salesforce Tools

The Salesforce CLI is an officially supported command-line tool managed by the Salesforce DX team. There are also many free scripts or CLI tools that you might find helpful. Some, like force-dev-tool, have been around for many years. Some, like SFDX-Falcon, are much newer. And some, like CumulusCI, are actually supported by teams within Salesforce.

This is an ever-changing field, and I don’t have deep familiarity with most of these tools, but some of the best known are listed here for your benefit.

CumulusCI 11 is probably the best developed of these tools, but is unfortunately not very well known. This project is managed by the Salesforce.org team, which produces the Nonprofit Success Pack and other nonprofit resources. Over the course of several years, they have built a highly sophisticated set of tools in Python to automate many aspects of release management. They’ve even built tools based on the Robot Framework to make it easier to perform Selenium UI testing on Salesforce.

SFDX-Falcon 12 has become well known in the Salesforce DX Trailblazer Community as one of the first full project templates for Salesforce DX. The tool is optimized to help ISVs build managed packages and has evolved from simple Bash scripts to being a full Salesforce CLI plugin.

Force-dev-tool 13 was one of the earlier CLI tools to help with Salesforce development. It has been in “reduced maintenance” mode since the Salesforce CLI was launched, but still receives occasional updates. Appirio DX makes use of this project behind the scenes to aid with parsing and managing Salesforce’s XML metadata.

The Salesforce Toolkit 14 created by Ben Edwards is a nicely designed group of tools to help with common Salesforce challenges such as comparing org permissions. The project is no longer maintained, but the apps are still fully functional and run on Heroku. The source code is available, so the tools can also be forked to create a private, trusted instance within your own company.

These community-contributed tools are all labors of love from their developers and maintainers. Some now suffer from neglect, and I’m sure I’ve overlooked many others, but many of these free tools provide powerful and effective solutions to development and release management challenges.

Using package.json to Store Command Snippets

Despite my droning on about the benefits of command-line tools, no one actually likes to remember complex sequences of commands and parameters. Command-line instructions allow for infinite flexibility and combinations and are a lifesaver in solving complex challenges. But once you’ve invested 5 minutes (or 5 hours) getting a sequence of commands just right, you should save that somewhere so you and others can reuse it.

If you don’t already have strong opinions about where to save such commands, do the following:
  1. Install Node.js (which comes with npm).

  2. In your project folder, run npm init to initialize a new project.

  3. Unless you have ambitious plans to actually write code in Node.js, just accept the defaults. This will create a file called package.json in your project directory.

  4. Edit that file, ignoring everything except for the scripts section. Begin to curate the scripts section so that it contains a helpful collection of common commands.

For example, the Trailhead Sample App lwc-recipes15 contains a package.json file with the scripts block shown in Listing 9-5. The five “scripts” shown here are actually just command-line sequences. To run any of them, just execute npm run scriptname (e.g., npm run lint) from a terminal prompt anywhere inside that project folder. There are numerous benefits of defining scripts in a package.json in this way:
  1. They are stored in version control and shared with the team.

  2. They can be run easily using npm run ...

  3. They always run from within the project root folder, no matter which folder you have navigated to in the terminal.

  4. You can chain these commands; for example, the lint script in turn calls lint:lwc and lint:aura.

  5. You can pass parameters to these commands. After the name of the script, append -- followed by any parameters you want to pass through. For example, running npm run test:unit -- --watch will pass --watch as a parameter, which is equivalent to lwc-jest --watch.

  "scripts": {
      "lint": "npm run lint:lwc && npm run lint:aura",
      "lint:lwc": "eslint */lwc/**",
      "lint:aura": "sfdx force:lightning:lint force-app/main/default/aura --exit",
      "test": "npm run lint && npm run test:unit",
      "test:unit": "lwc-jest",
      ...
  },
Listing 9-5

The scripts section from a package.json file, showing some common script commands
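
Once that scripts block is in place, running the commands is just a matter of npm run from anywhere inside the project. A quick sketch, based on the scripts shown in Listing 9-5:
  # Run the combined lint script, which in turn calls lint:lwc and lint:aura
  npm run lint

  # Run the unit tests, passing the --watch flag through to lwc-jest
  npm run test:unit -- --watch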

Other Scripting Techniques

When writing scripts, it is important to have a way to parse the outputs from each command, so that they can be passed as inputs to subsequent commands. The most straightforward way to start scripting using the Salesforce CLI is to pass the --json parameter to each command and then to parse their output using the lightweight JSON parsing utility jq.16 JQ allows you to read, query, and transform JSON output.

Andrew Fawcett wrote a helpful blog post on the different methods of building scripting around Salesforce DX.17 Restating some of the points he shared there, piping the output of a Salesforce DX command into jq provides a formatted output as shown in Listing 9-6, which you can then refine further with queries and filters as shown in Listings 9-7 and 9-8.
  $ sfdx force:config:list --json | jq
  {
    "status": 0,
    "result": [
      {
        "key": "defaultdevhubusername",
        "location": "Local",
        "value": "MyDevHub"
      },
      {
        "key": "defaultusername",
        "location": "Local",
        "value": "[email protected]"
      }
    ]
  }
Listing 9-6

Simply piping sfdx JSON output into jq provides a nicely formatted output

  $ sfdx force:config:list --json | jq '.result[0]'
  {
    "key": "defaultdevhubusername",
    "location": "Local",
    "value": "MyDevHub"
  }
Listing 9-7

JQ allows you to go further by querying the results

  $ sfdx force:config:list --json |
      jq '.result[] | select(.key == "defaultdevhubusername").value'
  "MyDevHub"
Listing 9-8

JQ provides many sophisticated filtering options

You can then stitch together complex sequences of commands using Bash scripts and variables. Listing 9-9 shows us querying the alias of our default Dev Hub and saving the result in a variable DEFAULT_DEVHUB. We then use this variable as the targetusername for a SOQL query that simply lists the top ten creators of scratch orgs in that Dev Hub.
  #!/bin/bash
  DEFAULT_DEVHUB=$(sfdx force:config:list --json |
      jq --raw-output '.result[] | select(.key == "defaultdevhubusername").value')
  sfdx force:data:soql:query --query \
    'SELECT CreatedBy.Name, Count(Id) FROM ScratchOrgInfo
     GROUP BY CreatedBy.Name
     ORDER BY Count(Id) DESC
     LIMIT 10' \
    --targetusername "$DEFAULT_DEVHUB"
Listing 9-9

This runs a simple query to show which users have created the most scratch orgs on our default Dev Hub. Bash allows commands to be strung together easily. Note the use of the backslash (\) to allow commands to span multiple lines

Bash scripts and JQ can take you a long way down the path of custom scripting. But for full control over the process, you may want to move to using Node.js or another programming language. Node.js is particularly convenient for scripting Salesforce DX, since you can more easily dig into the Salesforce DX internals if needed.

If you’re using Node.js, you can also take advantage of the Salesforce Core API (https://forcedotcom.github.io/sfdx-core/). The Salesforce Core API is not a standard REST or SOAP API. Rather, it’s a public API for accessing Salesforce DX functionality programmatically from your local system. It contains a wide variety of commands, but we’ve most commonly used it to access and run commands against the Salesforce orgs that have been authorized by the user. This means that users can securely authorize orgs one time, and then your scripts can make calls to Salesforce Core to access and perform commands against those orgs.

Salesforce Core is helpful, but doesn’t make all of the Salesforce CLI commands available in Node. My colleague, Bryan Leboff, wrote an NPM module sfdx-node18 as a wrapper around the Salesforce CLI. You can use this module to access Salesforce CLI commands directly from your Node.js code. In Listing 9-10, we pass a configuration object into sfdx.auth.webLogin({...}). This is the equivalent of running sfdx force:auth:web:login --setdefaultdevhubusername ... from the command line to authorize a new org.
  const { SfdxProjectJson, Org } = require('@salesforce/core');
  const sfdx = require('sfdx-node');
  const authWeb = async (destination, isDevHub) => {
    if (!isDevHub) {
      try {
        // If this org has already been authorized, reuse the existing authorization
        const orgObj = await Org.create(destination);
        return orgObj;
      } catch (e) {
        // Do nothing; fall through to the web login below
      }
    }
    // Open a browser-based login, aliasing the org and optionally setting it as the default Dev Hub
    return sfdx.auth.webLogin({
      setdefaultdevhubusername: isDevHub,
      setalias: destination,
    });
  };
  module.exports = {
    authWeb,
  };
Listing 9-10

This Node.js code snippet makes use of both the official salesforce/core module and the unofficial sfdx-node module to authorize an org

Writing scripts in this way is very powerful since you can use the rich, expressive syntax of JavaScript, choose from any of the 800,000 NPM modules that might assist with common challenges, and mix in Salesforce DX commands to accomplish any build process you might require. Such scripting takes time and experimentation to build, but it can be created and refined gradually as your processes evolve.

Commercial Salesforce Tools

Build vs. buy is a classic decision. Salesforce DX has been made freely available “as a downpayment on our debt to developers” in the words of Jim Wunderlich from Salesforce.19 However, the Salesforce DX team is focused on building the underlying capabilities rather than solving every common use case. For example, they have not thus far released an admin-friendly user interface for managing the development lifecycle.

There are many commercial vendors who have built tools to help with the Salesforce release management process. Most of these tools were built before Salesforce DX was released, however, and so still emphasize the org-based workflow of selecting, retrieving, and deploying individual pieces of metadata. They also emphasize a click-based workflow, similar to Salesforce’s declarative tools.

These tools greatly reduce the pain of org-based deployments and may provide benefit for your team. But the movement to Salesforce DX is a deep shift, so I would encourage you to focus on achieving the goal of real source-based development and use these tools to help your adoption of Salesforce DX, rather than just reducing the pain of org-based development.

One caveat applies to using any of these commercial tools. I’ve seen numerous companies adopt these tools but attempt to limit costs by limiting the number of users who are given licenses. The goal of your DevOps processes should be to empower developers and remove bottlenecks in your process while at the same time establishing traceability and automated testing. Having a small number of users use a commercial tool to deploy the work of a larger development team will save license costs, but at the expense of making the entire process less efficient. If you choose to use a commercial tool, be generous in equipping all of your developers and admins to make use of it.

Appirio DX

Full disclosure: I’m the original architect and product manager for Appirio DX, and DiXie (Figure 9-1) was drawn by my wife :-)
Figure 9-1

Appirio DX’s mascot, DiXie

Appirio DX is a suite of tools that Appirio developed to help their consultants and customers develop and deliver Salesforce more effectively. Appirio has been one of the top Salesforce consulting partners since its inception in 2006. In 2019, Appirio DX was made available as a commercial product.

Appirio DX aims to make CI/CD and Salesforce DX easier to adopt. It is similar to Salesforce DX in that it includes a CLI that can be run locally or as part of an automated job. It removes or reduces the need for teams to write custom Salesforce DX scripts, by providing commands and project templates for scenarios like initializing scratch orgs and publishing package versions.

Appirio DX includes a desktop app that allows click-based developers to work with Git branches, create scratch orgs, and synchronize changes from those orgs back into version control. The desktop app also eases the installation and configuration of developer-focused tools like Git and VS Code and provides capabilities like setting and toggling proxy settings across these tools.

Appirio DX provides an instance of GitLab and SonarQube that customers can use if they don’t want to provide their own DevOps stack. But the tools will run on any CI platform, and you can supplement the workflow with your own command-line tools.

Appirio DX’s CI/CD process is defined using whatever config files are standard for that CI platform, such as .gitlab-ci.yml or bitbucket-pipelines.yml files. As a result, the process is set up and managed in the same way that a pure-code solution would be, and is not obscured behind a GUI. For those who are comfortable with developer tooling, this gives them visibility and the flexibility to bring their own tools. But those more accustomed to click-based GUIs may find this daunting.

For teams using GitLab as their CI engine, Appirio DX can set up and configure the complete CI/CD pipeline for you in minutes. As of this writing, other CI engines have to be set up manually, but the process is straightforward. Appirio DX offers a Docker image appirio/dx that provides a consistent, predefined execution environment in any of the CI tools that support running jobs in Docker.
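
As a rough sketch, any CI job (or a local shell) can use that image to get a consistent toolchain. The exact tools bundled in the image may differ from what this example assumes.
  # Run a one-off container from the appirio/dx image, mounting the current project
  # directory, and check the version of the bundled Salesforce CLI
  docker run --rm -it -v "$(pwd)":/workspace -w /workspace appirio/dx sfdx --version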

For teams wanting the control and visibility that other DevOps tools provide, Appirio DX provides a readymade solution that allows you to get started quickly with Salesforce DX.

Released: 2018

Architecture: Node.js and Docker, bring your own CI servers, or use Appirio DX’s GitLab

Benefits:
  • Similar to CI/CD tools on other technologies

  • GitLab provided, but works with any Git-based version control host

  • GitLab CI provided, but works with any CI server

  • SonarQube static analysis provided, but allows the use of any third-party tools

  • Three development modes:
    • CI/CD using the pre-SFDX Metadata API format

    • SFDX package development process

    • SFDX Org development process

  • Includes an admin-friendly UI for syncing scratch org changes to Git

  • A good fit for professional developers or DevOps specialists

Disadvantages:
  • Click-friendly capabilities are limited.

  • Feature set is limited compared to some of the more mature Salesforce RM tools.

  • Not a SaaS product (like Salesforce DX, some parts of Appirio DX run on the desktop). Software can be installed and configured automatically, but IT security restrictions might limit what tools can be installed.

AutoRABIT

AutoRABIT provides a SaaS-based suite of tools that allows companies to manage the complexities of the Salesforce release management process. One of their biggest customers is Schneider Electric, which is one of the world’s largest Salesforce tenants. AutoRABIT claims over 40 Fortune 500 customers, including 20 in the highly regulated finance and healthcare industries. If needed, AutoRABIT can be deployed behind corporate firewalls as an on-premise solution to satisfy corporate security and compliance policies.

AutoRABIT allows users to connect multiple orgs, capture metadata differences, and deploy those differences between orgs. They also support Salesforce DX capabilities like creating scratch orgs. They have a powerful data loader that can be used to deploy large volumes of data between orgs while preserving relationships. They have built-in Selenium integration, including the ability to use Selenium to change org settings as part of a deployment process. AutoRABIT acts as the CI engine that allows teams to customize and orchestrate these processes according to their specific needs.

AutoRABIT has recently added a data backup and recovery solution, Vault, to their product suite. Vault automates the capabilities of AutoRABIT’s Data Loader Pro to make ongoing incremental backups of production orgs and sandboxes and to allow data recovery that preserves references across objects. Vault backs up both data and metadata, including Chatter messages and attachments, and provides unlimited storage. This backup data can also be used to seed test environments, using a data masking capability to maintain the security and privacy of user data.

AutoRABIT has a large range of capabilities, and their professional services team can integrate with most other third-party tools (such as Jira, CheckMarx, and test automation tools). They also offer a managed services option for ongoing support.

Common user complaints are that the UI is slow and inflexible. Their metadata picker doesn’t have the sorting and filtering capabilities of Copado or Gearset, which makes manually selecting metadata a more tedious process.

AutoRABIT implementations take more time to provision (typically a month) and also require their professional services team to be involved (professional services hours are bundled with the up-front installation costs). Contrast this with Copado or Flosum, which are downloadable from the AppExchange, or with ClickDeploy, which provides easy OAuth-based single sign-on from your Salesforce org. This implies more lead time and commitment from customers wishing to implement AutoRABIT, although the learning curve on the tool is not necessarily steeper than most of the other commercial tools.

Released: 2014

Architecture: Built on OpenStack using Java

Benefits:
  • SaaS-based, on public or private clouds. They also offer an on-premise option.

  • Hierarchical data migration (DataLoader Pro).

  • Several prebuilt integrations into common tools (Jira, CheckMarx, test automation tools, etc.).

Disadvantages:
  • Takes a long time to install and train users.

  • Clumsy metadata picker.

  • UI can’t be customized.

Blue Canvas

Blue Canvas is another of the newer release management tools for Salesforce. It uses Git and Salesforce DX behind the scenes while providing a simple user interface for authenticating to orgs and managing deployments between them.

At the heart of Blue Canvas is a system to take regular metadata snapshots of connected orgs and record changes in Git, along with the user who made that change. This allows you to use Git as a type of setup audit trail that provides more detail on the nature of each change compared to Salesforce’s built-in audit trail. This is what my colleague, Kyle Bowerman, referred to as “defensive version control”: passively tracking changes made through the admin interface. Blue Canvas also supports “offensive version control,” where changes tracked in version control are automatically deployed to further environments.

Based on this underlying Git tracking, Blue Canvas allows you to compare the metadata in any two orgs. Once the comparison has been made, you can select metadata in your source org that you want to deploy to the target org. Blue Canvas will check for merge conflicts and run a validation to ensure that changes can be deployed. These deployment requests can then be grouped into a larger release and be released at once.

Blue Canvas also allows you to connect external Git repositories like GitHub so that you can mirror the Blue Canvas repository into those.

Blue Canvas is still relatively early in their development. They recently added the capability to run Provar tests after deployments. Provar is a Salesforce-specific tool for doing UI testing that allows you to perform regression testing to ensure that your deployment has not broken functionality. They plan to allow for a wider variety of postdeploy actions to be run.

Released: 2016

Architecture: AWS, Auth0, Go, Git, Salesforce DX

Benefits:
  • Git is built in to the tool, providing fast metadata comparisons and deployments.

  • Changes are tracked in Git in near real time and specify who made the change.

Disadvantages:
  • Blue Canvas doesn’t currently track profiles or permission sets in their main tool, although they provide a very nice free tool to compare and deploy permissions.

  • No support for data migrations or Selenium-driven manual setup steps.

ClickDeploy

ClickDeploy is one of the newest of the commercial release management tools and one of the easiest to get started with. It is fully SaaS hosted: there are no downloadable tools and no managed packages to install in your org. They also offer a free tier that allows up to 15 deploys per month, enough to serve a small customer or do a POC. You can use your existing Salesforce credentials to sign in to ClickDeploy and use OAuth to connect to any number of Salesforce orgs.

For those with simpler release management needs, ClickDeploy provides an easy and superior alternative to change sets. You can connect to your source org, easily sort, filter, and select metadata, and then deploy it to one or more target orgs. ClickDeploy can be used to support multiple production orgs, something that is not possible with change sets.

As teams mature, they can upgrade to the Pro version which provides unlimited deployments and the ability to collaborate as a team. Team collaboration is fairly basic as of this writing. Every user associated with the same production org is grouped together into a team. Members of a team can collaborate around deployments, viewing, cloning, modifying, validating, or executing a deployment. This provides team-level visibility into the history of deployments.

ClickDeploy’s Enterprise version allows teams to collaborate using version control. You can connect to all the common Git hosting providers to track the evolution of metadata across your orgs. ClickDeploy provides a Salesforce-aware frontend for Git to allow users to select metadata and commit it to a repository. You can compare metadata between Git and a Salesforce org, and you can deploy metadata directly from the code repository. Deployments can be based on the complete metadata in a branch, differences between two branches, or an arbitrary subset of metadata from that branch.

Git support enables several capabilities. First, ClickDeploy allows you to build a scheduled backup of your orgs to a Git repository. Importantly, you can customize the metadata that is included in this backup. Incremental metadata changes will then be recorded as Git commits each time the backup job runs. The other capability this enables is to automate deployments from Git based on a schedule, or each time a commit is pushed to the repository. This allows for continuous delivery without the need for a separate CI tool.

ClickDeploy supports the Salesforce DX source format and can retrieve or deploy metadata from or to scratch orgs. As of this writing, they do not support the creation of scratch orgs or package versions.

The user interface is simple to understand and use and provides the essential tools needed to manage deployments.

Released: 2017

Architecture: AWS

Benefits:
  • Easy to get started with

  • Free tier

  • Nice metadata selection capabilities

  • Metadata comparisons (org-to-org, org-to-Git, Git-to-org, Git branch-to-branch)

  • Git integration, including an admin-friendly UI to make commits, and automated deployments from version control

Disadvantages:
  • Doesn’t automate scratch org or package creation.

  • No data migration tools.

  • No support for Selenium testing or UI automation.

  • Team access controls are somewhat limited.

Copado

Full disclosure: I’m currently a product manager for Copado.

Copado was founded in 2013 by two European Salesforce release managers based in Madrid, Spain, to ease the pain, complexity, and risk of the Salesforce deployment process. Since then they have raised growth capital from Salesforce Ventures and Insight Ventures, attracted over 150 global customers, and brought on a seasoned US senior leadership team to build their US business.

Copado uses Salesforce as its user interface, for authentication, and to store data on orgs, metadata, and deployments. But (unlike Flosum) it delegates backend processing to Heroku. That allows Copado to leverage Heroku’s power and speed to handle metadata retrieval, processing, and deployments. This architecture allows customers to customize aspects of the Copado frontend and tap into its data and business logic in Salesforce.

Interestingly, Copado doesn’t store any data on Heroku; instead Heroku dynos are created on an as-needed basis to perform metadata operations. Information about that metadata (is that called “meta-metadata”?) is then stored in Salesforce. While Copado boasts that this eases security reviews since dynos are never persisted, it also has a startup cost. If Heroku is being used to deploy metadata from a code repository, it has to clone that metadata first. If a deployment is being made based on org metadata, the metadata is never cached in Heroku; it has to be retrieved each time. This leads to some performance cost for each job. Copado claims to have optimized this process, fetching only the minimal amount of history to enable the merge.

Unlike most competing tools, Copado includes its own Salesforce-based ALM (Application Lifecycle Management) tools for creating stories, bugs, and so on. This allows metadata changes to be associated with particular features or bugs in the ALM tool and for deployments to be made at a feature-level granularity. This is somewhat similar to GitHub issues, GitLab issues, or the Jira-Bitbucket integration, where each Jira issue can show which commits reference it. Copado includes native integration with Jira, Azure DevOps, VersionOne, and Rally to sync stories from existing ALM tools and update their status.

Copado offers a Selenium recorder that simplifies the creation of UI tests. It hosts the Selenium tool in Heroku and can orchestrate functional testing as part of their quality gates. The Selenium scripts can even be used to automate “manual” setup steps in an org. Copado also offers a compliance tool to ensure that excessive permissions are not deployed as part of the release process. Companies can write their own rules to match their policies.

Copado is priced on a per user per month basis with two levels of licensing: one for developers and the other for release managers. Additional CI functionality is currently tied to a Branch Management license. The Selenium Test and Compliance Hub products are also licensed separately. Copado uses a credit system, similar to Salesforce governor limits, to enforce a “fair usage” policy. Copado claims that in practice customers never hit these limits, but these ensure that usage remains proportional to the number of licenses purchased.

Released: 2013

Architecture: Salesforce with Heroku as a processing engine

Benefits:
  • Nice metadata picker

  • A good choice for those already using Salesforce itself to manage their Salesforce development

  • Rich suite of tools, including ALM, data migrations, and Selenium testing

Disadvantages:
  • All logs and other files are stored as attachments in the Salesforce package, making them hard to read.

  • The UI is built on Salesforce and looks slightly awkward. For example, notifications about job results are not very obvious.

  • Jobs are run on Heroku but not stored on Heroku. This means that each job takes a nontrivial amount of time to start (such as cloning the repository).

Flosum

Flosum is a release management app for Salesforce that is built entirely on the Salesforce core platform. They have the highest number of positive reviews among release management tools on the AppExchange.

One benefit Flosum derives from being built entirely on Salesforce is that it introduces no new platforms to review, an advantage in companies that take a long time to whitelist new software.

Flosum claims that their tool allows people to add custom automation such as approval processes using familiar Salesforce tools. But since Flosum doesn’t live in your main production org, it won’t have access to integrate with your users and data. So any automation you build on top of Flosum will be disconnected from the rest of your business processes.

Based on increasing demands from customers to support version control and CI/CD, Flosum has built basic version control and CI/CD capabilities into their tool. They’ve also built an integration with Git. But because all of these capabilities are built using native Apex, they are very slow compared to other tools.

Flosum has done an impressive job of building version control, metadata management, and CI/CD capabilities on the Salesforce platform. Their choice of architecture greatly limits their speed and ability to integrate third-party tools, but they have many satisfied customers and provide a vastly superior alternative to change sets.

Released: 2014

Architecture: Built on Salesforce

Benefits:
  • Business logic can be customized using Salesforce mechanisms.

  • Built on Salesforce, so no additional platforms to pass through security review.

Disadvantages:
  • Large operations are very slow.

  • Expensive.

  • Can’t integrate standard DevOps tools.

Gearset

Gearset is a UK-based company founded by Redgate Software. Their Salesforce release management product is based on Redgate’s experience building release management tools for SQL Server, .NET, and Oracle.20 Their aim has been to emphasize ease of use to allow users of all technical backgrounds to adopt modern DevOps best practices.

Gearset is a SaaS tool that provides a full array of release management and DevOps capabilities. Their enterprise customers include McKesson, IBM, and even Salesforce themselves. They have a fast and easy-to-navigate user interface, allowing quick selection of the metadata to deploy and making it easy to build up more complex deployments including changes to things like profiles and permission sets.

Gearset has built intelligence into their comparison engine to automatically fix common deployment issues, like missing dependencies or obsolete flows, before pushing your changes to Salesforce.

This intelligent comparison engine is used for manual deployments as well as automated deployments triggered from CI jobs. The result is a higher deployment success rate and less time spent iterating on and fixing repetitive deployment failures.

Gearset’s Pro tier offers a drop-in replacement for change sets and is ideal for admins and low-code developers. Connect any number and type of orgs, compare them to see a detailed breakdown of their differences, explore dependencies between metadata components and automatically include them in your deployment, and finally push your changes between orgs. Gearset integrates with all of the major Git hosting providers and allows you to connect to any Git repo, making it easy to run comparisons and deployments with Git branches, just as you would with orgs.

For larger teams, the Enterprise tier includes a variety of automation features, including org monitoring to alert you to any changes made to your orgs, and scheduled metadata backup. Gearset also comes with built-in continuous integration to monitor Git branches and push any detected changes to your orgs. Finally, Gearset offers a data deployment feature, making it easy to deploy hierarchical data between orgs, preserving any relationships.

Gearset’s pricing is per user, allowing you to connect as many orgs as you like and run unlimited comparisons and deployments. Support is included in the price. Interestingly, Gearset doesn’t have a distinct support team, so questions and issues are managed by the Gearset development team itself, likely yielding higher-quality initial responses.

Released: 2016

Architecture: .NET and C# on AWS

Benefits:
  • Nice UI.

  • Quick navigation of metadata.

  • Metadata comparisons (org-to-org, org-to-Git, Git-to-org, Git branch-to-branch).

  • Comparison engine automatically fixes common deployment errors.

  • SaaS-based.

  • Git integration with an admin-friendly UI, allowing admins and developers to all work from version control together.

  • Full Salesforce DX support, including scratch org creation.

  • Hierarchical data migration.

Disadvantages:
  • No support for Selenium testing or UI automation.

  • UI can’t be customized.

  • Can’t mix in third-party DevOps tools.

Metazoa Snapshot

Snapshot is a desktop-based change and release management tool for Salesforce. It was first launched by DreamFactory in 2006, but the makers of Snapshot spun it off under a separate company, Metazoa, in 2018.

Snapshot is written in Visual C++ and runs as a desktop app. The user interface looks extremely dated, but it runs on Mac or Windows and has been updated recently to include some Salesforce DX capabilities.

Snapshot is built around the concept of visual workspaces. Each workspace allows the user to arrange snapshots (metadata retrieved from an org) and projects (local folders containing metadata) graphically. Those snapshots and projects can then be connected to build out a pipeline view that flows from development to testing to production. This pipeline automatically batches metadata retrieval and deployment, allowing it to bypass the 10,000 metadata item limits of the Metadata API.

Each snapshot or project allows you to perform actions on it by right-clicking and selecting from the menu. Actions typically involve running reports on that org, and Snapshot boasts over 40 reports that can be run, such as “generate a data dictionary.”

The connections between snapshots/projects enable actions such as doing comparisons, deployments, or rollbacks. Chaining together snapshot connections from development to production allows for continuous delivery, where changes can be deployed from org to org in an automated way. Snapshot also provides support for connecting to code repositories including Git, SVN, and TFS.

Snapshot runs on the user’s desktop, but allows users to synchronize workspaces with other team members. For security purposes, org credentials are not stored online or shared between team members. Admins can enforce controls on the activities of other Metazoa users, for example, enforcing code quality gateways on deployments.

Snapshot also supports extracting and loading data while keeping complex data relationships intact. It can scramble data fields, making it useful for seeding new sandboxes while scrubbing sensitive data.

In short, Snapshot provides a versatile, admin-friendly toolkit with many commands and reports that are not present in competing tools. Despite the “retro” user interface, the underlying capabilities are robust and powerful.

Released: 2006

Architecture: Desktop app written in Visual C++

Benefits:
  • Quick to download, install, and experiment with

  • Allows management of multiple orgs

  • Contains many reports that are not present in competing products

  • Supports Salesforce DX metadata format

  • Works relatively quickly (limited by the speed of your local machine and the Salesforce APIs)

Disadvantages:
  • Old-looking UI

  • Doesn’t automate the scratch org or package creation process

  • Not cloud-based, but configuration can be synced across teams

Packaging

Modular architecture is an important software architecture pattern that helps make applications more manageable and easier to understand. Packaging is a form of modular architecture that allows you to develop and deploy code and configuration in discrete bundles. That makes software development and delivery far easier. As mentioned earlier, packaging is a critical part of Salesforce DX and provides a superior method to manage deployments.

Classic Packaging

For completeness, we’ll briefly discuss classic packaging. But if you’re looking for a quick recommendation on how to build Salesforce packages, skip ahead to the “Second-Generation Packaging” section.

Although Developer Edition orgs have no sandboxes and thus can’t make use of change sets, all orgs are able to create packages. Until the recent release of unlocked packages, the main audience for package-based deployments was ISVs producing apps for the AppExchange. The Salesforce AppExchange is a business “app store” which provides over 5,000 Salesforce apps, 40% of which are free.21 The vast majority of these apps are actually managed or unmanaged packages.

Unmanaged packages and classic managed packages are actually based on the same technology as change sets and have a similar user interface for building them. You begin by giving a name and description for the package and then proceed to creating your first version of the package by adding metadata to it. Package versions are named and numbered, and you can set a password to prevent unauthorized individuals from installing this metadata. In the case of unmanaged packages, you can optionally link release notes and post-installation instructions and specify required dependencies in the target org such as enabled features and object-level configuration like record types.

Once a package version is uploaded, it is given a unique 04t ID and is thus available for installation into any org. If the package is published on the AppExchange, you can then make this package version available using the AppExchange publisher tools.

One significant limitation of unmanaged packages compared to the other three types of packaging is that once an unmanaged package is installed, its metadata is no longer associated with that package. It is as if the unmanaged package is a cardboard shipping container that is discarded after opening. This makes them useful for deployment, but not at all useful for modularizing your code architecture.

Classic managed packages are similar to unmanaged packages in most ways, but require the use of a namespace which is prepended onto metadata names like myMgdPkg__packageContents__c. Partly for this reason, managed packages must be developed and published from a Developer Edition org. The major benefits of managed packages are that
  • Package components such as custom code cannot be inspected in the org in which they’re installed, which helps to protect the intellectual property of the publisher.

  • Managed packages are upgradeable.

  • Package components remain associated with the source package.

  • Components are distinguished by their namespace.

  • Package metadata has its own set of governor limits above and beyond those in the installation org.

For these reasons, commercial AppExchange apps are almost always managed packages, while free AppExchange apps are almost always unmanaged packages. Unmanaged packages are far easier to create, but don’t obscure their contents or allow for upgrading. That makes them far simpler to maintain, but also harder to build a business around.

Managed package development requires an additional layer of sophistication, one which I’m not well qualified to comment on. In my view, managed package development is a dark art, but there are many thriving ISVs who have successfully navigated the challenges in building, upgrading, and supporting managed packages. See the Salesforce developer documentation on managed packages and Andrew Fawcett’s excellent Force.com Enterprise Architecture22 for more detailed discussion on their development.

Unlike change sets, which give you the option to include dependencies, unmanaged and classic managed packages automatically add dependencies to the package metadata. This is because packages by definition need to be self-contained so they can be installed in any org. Change sets, by contrast, can only be installed in related sandboxes which necessarily share similar metadata. Excluding dependent metadata from a change set limits the scope of changes (the blast radius) and means that change sets don’t automatically upload the latest version of all dependencies from the development sandbox.

Second-Generation Packaging

Salesforce DX brought a new type of package, sometimes called second-generation packages. Whereas unmanaged packages and classic managed packages are artifacts created from org-based development, this new type of packaging is designed for source-driven development. Unlocked packages are a type of second-generation package well suited to enterprise development (building and migrating functionality for use within a single enterprise). Second-generation managed packages are intended to be the successor to classic managed packages and to simplify the managed package development process.

Second-generation package publishing is a key part of the Salesforce DX workflow, and we’ve already discussed “Branching for Package Publishing” and “CI Jobs for Package Publishing” in Chapter 7: The Delivery Pipeline. The concepts are similar to the concepts for managed and unmanaged packages, but second-generation packages are defined using configuration files, published using the Salesforce CLI, and can easily express dependencies on other packages as well as org-level features and settings.

Unlocked packages are discussed implicitly and explicitly throughout this book, since our main focus is Salesforce DX development for the enterprise. A major improvement over unmanaged packages is that metadata remains associated with the unlocked package that included it, and that package deployments cannot overwrite metadata that is included in another package.

One of the trickiest aspects of classic managed package development is the use of namespaces, since each namespace is tightly bound to one and only one Developer Edition org. Salesforce DX now allows a single Dev Hub to be associated with multiple namespaces so that scratch orgs can be created that use any of those namespaces. Second-generation managed packages can now also be published to the AppExchange. It is a great relief that the enterprise workflow can now be united with the ISV workflow and Salesforce DX technology can be used similarly for both.

Unlocked Packages

Change sets, the Metadata API, and most of the commercial Salesforce release management tools are built around the concept of hand-selecting individual pieces of metadata and deploying them between different environments. Deploying unpackaged metadata in this way has many disadvantages. First, it puts the burden on the person doing the deployment to ensure that they are not including too much or too little metadata. Second, combining metadata from multiple developers requires some Salesforce-specific XML processing. Third, the process is error-prone, and it’s hard to ensure that metadata is being deployed consistently across environments. The use of version control helps tremendously in this process, but still requires developers to pick through metadata changes to determine which changes to commit.

Imagine if deploying your Salesforce customizations were as easy as installing a new package from the AppExchange. Unlocked packages make this possible. These allow you to bundle customizations into one or more packages and install them automatically or manually.

Unlocked packages are stored on the Dev Hub. Thus a team building unlocked packages should collaborate on the same Dev Hub so that they can contribute to the same packages.

To build and publish unlocked packages:
  1. First ensure that packaging is enabled in the Dev Hub.

  2. Packages are basically a container for metadata. The sfdx-project.json file has a “packageDirectories” section that contains the configuration for each folder that will hold your package metadata. When you first create a Salesforce DX project using sfdx force:project:create, this file is initialized for you and contains a single force-app folder. Update this file if needed so that it points to the folder that holds your metadata.

  3. Then create the package on your Dev Hub, specifying the name and definition of your package by executing sfdx force:package:create along with the appropriate options. This step defines the package and gives it an ID that begins with 0Ho, but does not actually add any metadata to the package.

  4. When this command completes, the packageAliases section is given a new alias pertaining to the newly created package, and the packageDirectories section is given a new entry corresponding to the newly created package.

  5. Having created the package, you can then begin creating package versions using sfdx force:package:version:create along with the appropriate options. These package versions encapsulate the metadata in the package folder so that it can then be installed in another org. The result of running this command is that the packageAliases section gains a new entry containing the 04t ID for the package version; this is the same ID that can be used to install that version in any org. (See the example commands after this list.)
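To make these steps concrete, here is a minimal sketch of the commands involved, assuming a package named MyAppCore whose metadata lives in the default force-app folder and a Dev Hub authorized under the alias DevHub; the names and version numbers shown are illustrative, so adapt them to your own project.

  # Create the package definition on the Dev Hub (returns a 0Ho ID and
  # adds an alias to sfdx-project.json)
  sfdx force:package:create --name MyAppCore --path force-app \
    --packagetype Unlocked --targetdevhubusername DevHub

  # Create a package version (returns a 04t ID, added to packageAliases)
  sfdx force:package:version:create --package MyAppCore \
    --installationkeybypass --wait 10 --targetdevhubusername DevHub

  # Install that version in any target org
  sfdx force:package:install --package "MyAppCore@1.0.0-1" \
    --targetusername MyTargetOrg --wait 10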

Initial package creation is a one-time process, but the package version publishing should be scripted as part of your CI process so that it will run every time the code in the master branch for that package is updated. If you’re building branch versions of your package using the --branch flag, it’s a good practice to automatically set that parameter based on the Git branch you’re publishing from and to automatically add a Git tag to the repository when a new version is published, as described in Chapter 7: The Delivery Pipeline. This makes your Git repository a comprehensive reference to the version and change history of your packages. You can add other automation such as only publishing versions when a particular keyword such as “#publish” is included in the commit message.
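A rough sketch of how this publishing step might look in a CI script follows; it assumes a GitLab CI-style CI_COMMIT_REF_NAME variable, the jq utility for parsing JSON output, and the hypothetical MyAppCore alias from the previous example.

  #!/usr/bin/env bash
  set -euo pipefail

  BRANCH="${CI_COMMIT_REF_NAME:-$(git rev-parse --abbrev-ref HEAD)}"

  # Publish a new package version, recording the Git branch it was built from
  RESULT=$(sfdx force:package:version:create --package MyAppCore \
    --branch "$BRANCH" --installationkeybypass --wait 20 --json)

  # Tag the repository with the resulting 04t ID so Git records what was published
  VERSION_ID=$(echo "$RESULT" | jq -r '.result.SubscriberPackageVersionId')
  git tag "MyAppCore-$VERSION_ID"
  git push origin "MyAppCore-$VERSION_ID"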

Although there is a little bit of setup required, once this is in place it simultaneously makes your code architecture cleaner and your deployments easier. By dividing your metadata into subfolders, the application’s structure is made clear. By publishing package versions, you can then deploy updated versions to any org using a single ID, instead of trying to deal with hundreds or thousands of metadata files. There is no chance of including too much or too little metadata, and the results are identical with every installation.

Package Dependencies

Salesforce DX allows you to specify package dependencies on other packages (unlocked or managed) and on particular org configuration. Specifying such dependencies is one of the most important aspects of packaging. Refactoring metadata so that it can be built into unlocked packages is the single most challenging aspect of adopting Salesforce DX, but it is also the most beneficial. This topic is addressed in more detail in the section on "Packaging Code" in Chapter 5: Application Architecture.

Adding and Removing Metadata from Packages
Unlocked packages have a number of helpful characteristics that make it easier to adopt packaging gradually. First, these packages can take ownership of existing metadata. For example, imagine you have a custom object called MyObject__c in a particular org, and you then build a package version that contains MyObject__c. When you install that package in your org, it will take ownership of that custom object. The custom object will then display a notice (shown in Figure 9-2) that it is part of an unlocked package and that changes to it will be overwritten if the package is updated.
Figure 9-2

When viewing metadata that is part of an unlocked package, users see an indication that the metadata is part of a package

This behavior allows you to create small unlocked packages that can gradually subsume existing metadata and make it part of the package. There is no data loss or interruption to business logic when doing this. Although you can add a namespace to unlocked packages, doing so would prevent your package from taking ownership of existing metadata, since the API name of the packaged metadata would actually be myNamespace__MyObject__c and so wouldn’t match the existing metadata.

Unfortunately, if you attempt to update that metadata using the Metadata API, you will not receive such a warning, so teams should put some additional automated checks in place to ensure that there is no overlap between the metadata in their various packages and their unpackaged metadata.
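One simple form of such a check is to look for the same component appearing in more than one package directory. The sketch below assumes source-format metadata in hypothetical pkg-core, pkg-sales, and unpackaged folders; duplicate file names across those folders are a strong hint that ownership of a component is ambiguous.

  # List metadata file names that appear in more than one folder
  find pkg-core pkg-sales unpackaged -type f -name "*-meta.xml" \
    | xargs -n1 basename | sort | uniq -d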

Similarly, it’s also possible to remove metadata from unlocked packages, either because it’s no longer needed or to move it to another package. Propagating metadata deletions has long been challenging in Salesforce. Deletions are not supported at all in change sets, and with the Metadata API, deletions need to be explicitly listed in a destructivechanges.xml file, which requires separate logic be built if tools want to automate the deletion process.

Metadata that is deleted from an unlocked package will be removed from the target org or marked as deprecated. In particular, metadata that contains data like custom fields or custom objects is not deleted, since that could cause data loss. Instead, this metadata is flagged as deprecated. This is a best practice recommended in the classic book Refactoring Databases.23 This allows data to be preserved and copied over to new data structures. Care is needed however to update any integrations that point to the old data structures and to ensure that data is replicated between the old and new structures during that transition.

If you want to migrate metadata from one unlocked package to another, you can simply move the metadata files from one package to the other. Publish new versions of both packages. Then install the new version of the package that previously contained the metadata in your target org using the command sfdx force:package:install --upgradetype DeprecateOnly .... The DeprecateOnly flag ensures that metadata which is removed from one package will be deprecated rather than deleted. You can then install the new version of the package which now contains that metadata, and it will assume ownership and undeprecate that metadata without causing any change to the data model, business logic, or UI.
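As a concrete sketch, suppose a component has moved from a hypothetical PackageA to PackageB and new versions of both have been published; the package aliases and org alias below are assumptions.

  # Upgrade PackageA to the version that no longer contains the component,
  # deprecating rather than deleting the removed metadata
  sfdx force:package:install --package "PackageA@2.0.0-1" \
    --upgradetype DeprecateOnly --targetusername MyTargetOrg --wait 10

  # Install the PackageB version that now contains the component; it takes
  # ownership and undeprecates the metadata
  sfdx force:package:install --package "PackageB@1.1.0-1" \
    --targetusername MyTargetOrg --wait 10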

Resolving Deployment Errors

Deployment errors are extremely common when using CI/CD in legacy Salesforce projects since it is very easy for the metadata tracked in version control to become inconsistent. By enabling source synchronization, Salesforce DX greatly reduces the frequency of deployment errors, although they are still a fact of life for Salesforce development teams.

One very attractive capability of Gearset is their problem analyzers, which automatically identify and fix problems like missing dependencies before the deployment is performed. Copado includes a version of this which addresses the common challenge of profile deployment errors by automatically modifying and redeploying profiles.

General Approach to Debugging Deployment Errors

Resolving deployment errors is actually what consumes most of the time during deployments. Resolving these errors quickly depends on an understanding of Salesforce, its different types of metadata, and how they interdepend. I probably could have written an entire book on how to tackle the various kinds of deployment errors, but I’m grateful that you’ve even gotten to this point in this book, and I don’t want to press my luck. What follows is a concise set of suggestions:
  1. Don’t panic. On large deployments, it’s not uncommon to get hundreds of deployment errors. In many cases, these errors are closely related to one another, and resolving one issue can resolve dozens of related issues.

  2. If there are a large number of deployment errors, and you’re not using a tool that organizes them for you, I recommend you copy the list into a spreadsheet to make it easier to manage and work through.

  3. Deployment errors can cascade to cause other errors; this means that errors later in the list can be caused by errors earlier in the list. Therefore the order of the errors is an important clue to resolving them. For example, if a new field can’t be deployed, that can cause a class that uses that field to fail. That class failure can cause the failure of other classes. That in turn can cause the failure of Visualforce pages, which can themselves cause other errors. All of those errors will be resolved once that field is deployed.

  4. Begin by identifying and deleting any duplicate errors from your list. They will all be resolved in the same way.

  5. Then identify and delete any dependent errors that are actually caused by earlier errors.

  6. Then work through the errors from the top down. Take note of any clues such as files, metadata names, or line numbers that are mentioned in the message.

  7. Being able to view the metadata line by line allows you to take steps like temporarily commenting out lines of metadata that are causing deployment errors so that you can get the main body of the deployment to succeed.

  8. In the case of large deployments, it can help to temporarily remove pieces of metadata that give persistent errors so that the main deployment can go through. After deploying the main body of the metadata, you can quickly iterate on the small number of problematic metadata items you’ve isolated. This allows for faster trial-and-error deployments as you work toward a resolution.

  9. The type of error that is most challenging to debug is in the form “An unexpected error occurred. Please include this ErrorId if you contact support: 94477506-8488 (-1165391008)”. This error reflects an internal “gack” or unhandled exception in Salesforce itself. The error number shown is a number from Salesforce’s internal logs, so to get any insight into it, you’ll need to file a case with Salesforce and request that their Customer Centric Engineering team look it up in Splunk. In the meantime, you’ll need to do some sleuthing to figure out what caused this error. This debugging is far easier if you’re doing frequent small deployments, since that immediately narrows down the cause. Rather than getting stuck for days, follow the recommendation in point 8 and deploy your metadata in subgroups until you have isolated the source of the problem.

Getting Help

I highly recommend the Salesforce Stack Exchange group ( https://salesforce.stackexchange.com/questions/tagged/deployment ) for finding and resolving more obscure deployment errors.

General Tips for Reducing Deployment Errors

To reduce the frequency of deployment errors, focus on deploying small batches of changes frequently. In the case of org-based development, ensure that developers are making use of feature branches that run validations of the metadata in their branch against the next higher org (e.g., QA). If the metadata in a feature branch validates successfully, it is likely to also deploy successfully when merged with the main branches and deployed to higher orgs.

As mentioned earlier, using Salesforce DX scratch orgs for development greatly simplifies the development process since it removes the need to handpick metadata items from a source org, a very error-prone process. Instead, Salesforce DX works through pushing and pulling metadata to and from a scratch org in its entirety. In most cases, this ensures that the metadata is coherent.
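The day-to-day commands are simple; assuming a scratch org authorized under the alias dev-scratch (an illustrative name):

  $ sfdx force:source:push --targetusername dev-scratch   # local source to scratch org
  $ sfdx force:source:pull --targetusername dev-scratch   # scratch org changes to local source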

Continuous Delivery

According to Jez Humble,

Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.

Our goal is to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or an app—predictable, routine affairs that can be performed on demand.

We achieve all this by ensuring our code is always in a deployable state, even in the face of teams of thousands of developers making changes on a daily basis. We thus completely eliminate the integration, testing and hardening phases that traditionally followed ‘dev complete’, as well as code freezes. 24

Continuous delivery is thus a maturation from the practice of making ad hoc deployments to a state where deployments are happening on an ongoing basis. Separating deployments from releases, as described later, allows you to practice continuous delivery even if features should not be immediately released to users.

Why Continuous Delivery?

Continuous delivery builds on the practice of continuous integration, adding the additional layer of ensuring that code is actually deployable from trunk at any time. In Salesforce, the best way of doing this is to validate or deploy metadata to a target environment whenever code changes on trunk. Your exact process may vary depending on your needs, but assuming that you have two testing environments (QA and UAT) prior to your production environment, a good default is to automatically deploy metadata from your main branch to QA and then (if that succeeds) to immediately trigger a validation (a deployment with the check-only flag set) of that metadata against UAT. This ensures that there is no delay in your QA testers getting access to the latest functionality from developers (or giving feedback if developers have broken something). It also helps ensure that code is also deployable to UAT and that no one has made any “out of band” changes to that environment that would interfere with your eventual releases.
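A minimal sketch of that default, assuming org aliases qa and uat and Metadata API-format source in a src folder (adapt the commands to your CI tool and source format):

  set -e

  # Deploy the latest trunk metadata to QA
  sfdx force:mdapi:deploy --deploydir src --targetusername qa --wait 60

  # If that succeeds, run a check-only validation against UAT
  sfdx force:mdapi:deploy --deploydir src --targetusername uat --checkonly --wait 60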

Why perform deployments continually in this way? Consider the alternative: batching deployments at the end of each week or each sprint. Such infrequent releases mean that testers and users are continually waiting, and deployments are huge and accompanied by large numbers of deployment errors. In a typical team, one person might be delegated to do the release, meaning that they lose half a day of work resolving errors and have to make imprecise judgment calls, adding or removing metadata from the deployment to get it to go through. They’re also typically under stress and time pressure to complete the deployment within a particular window or outside normal working hours. You might call this alternative approach “continuous waiting” or “periodic stress.”

Continuous delivery distributes deployments into small batches across time and across the development team. This ensures that deployment challenges can be addressed in small chunks, and distributes expertise in resolving deployment errors over the entire team, which helps them to prevent these errors in the first place. If everyone on your team did a perfect job of ensuring the metadata they commit to version control was accurate and comprehensive, there would be no deployment errors. The best way to give members of the team feedback on how well they’re doing that is if they are actually shown deployment results from each change they make.

Automating Deployments

Implicit in continuous delivery is the use of automated scripts or tools to perform deployments. Most of the commercial Salesforce release management tools offer continuous delivery capabilities in the sense that they can perform ongoing automated deployments from version control. That’s also something that can be accomplished through scripts run in traditional CI tools, which is the approach that Appirio DX takes.

Reducing the Size of Deployments

When automating deployments, one key is to be able to make deployments small and fast while still having visibility into the state of the metadata in each org. Making deployments small is important in reducing the risk and impact of each deployment. It also helps to not change the lastModifiedDates of Salesforce metadata that has not actually changed. Making deployments fast is important so that fixes and updates can be released and tested quickly. It’s also important in case there are deployment errors, since debugging and resolving those requires rerunning deployments repeatedly. The time required to resolve all errors is proportional to the time required for a single deployment.

If you’re building your own CI/CD process, one technique I’ve used with great success for org-based deployments is to use Git tags to mark the points in time when deployments were made successfully, and then to use Git diffs to determine what has changed since that time. Tags are labels or “refs” which are used to mark a particular point in a chain of Git commits. You may have more than one of these tags on the same commit.

The earlier discussion of branching for org-level configuration showed how to manage multiple orgs from one repository. In this case, we use tags based on the org name and a timestamp. Figure 9-3 shows an example of this, with tags indicating that particular commits were successfully deployed to int, uat, and prod environments.
Figure 9-3

This diagram shows the use of tags to track successful deployments. In this case, there are tags pertaining to uat, int, and prod environments

Tagging a commit with the org name after a deployment succeeds allows us to determine what metadata has changed since the last deployment. The basic approach is as follows.

Different branches have different rules that apply to them. When a commit is made on a branch that governs the UAT environment, for example, we first use git describe as shown in Listing 9-11 to determine the last successful deployment to the UAT environment.
  $ git describe --tags --match "uat-*" HEAD
Listing 9-11

A Git describe command to find a tag that matches “uat-”

Having found that tag, we then use that as the input into a git diff command as shown in Listing 9-12 to determine what files have changed since that time.
  $ git diff --name-only --ignore-all-space [name of the tag found above]
Listing 9-12

A Git diff command to find files that have changed since a particular point in time

This command gives a list of changed files that you can then copy into a new directory and use as the basis for your “differential deployment.” If you’re using the Metadata API format and not doing any further XML processing, you’ll be limited to deploying entire .object files, which can be massive. Even short of adopting other Salesforce DX practices, using the “Source” format for metadata makes it easier to deploy smaller subsets of metadata such as particular fields instead of complete objects.

If this subset of changed files deploys successfully, you can then tag the repository with uat-[timestamp] to mark this commit as the new state of the repository.
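Putting those pieces together, a differential deployment script for the UAT org might look like the following sketch; it assumes source-format metadata under force-app, an authenticated org alias uat, and GNU versions of cp and xargs.

  set -euo pipefail
  LAST_TAG=$(git describe --tags --match "uat-*" --abbrev=0 HEAD)

  # Copy only the files changed (not deleted) since the last successful UAT deployment
  mkdir -p deploy
  git diff --name-only --diff-filter=d --ignore-all-space "$LAST_TAG" -- force-app/ \
    | xargs -I {} cp --parents {} deploy/

  # Deploy the differential subset
  sfdx force:source:deploy --sourcepath deploy/force-app --targetusername uat --wait 30

  # On success, record the new deployment point
  git tag "uat-$(date +%Y%m%d%H%M%S)"
  git push origin --tags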

Deploying Configuration Data

As explained in the section “Configuration Data Management” in Chapter 4: Developing on Salesforce, using data to store configuration requires a thoughtful approach to ensure that configuration can be easily migrated.

Wherever possible, you should use Custom Metadata instead of using Custom Settings or Custom Objects to store configuration data. One main reason for this is that Custom Metadata is deployable using the Metadata API along with the rest of your configuration, so it does not require any special management process.

Deploying configuration that is stored as data (either in Custom Settings or in Custom Objects) requires that data to be extracted from one org and loaded into another org. You should store this configuration in version control, along with the scripts used for extracting and loading it. You may also need to transform that data if it includes IDs or other data that are org-specific. Some of the commercial release management tools like AutoRABIT, Copado, Gearset, and Metazoa have built-in capabilities for doing this. If you want to build this capability yourself, you’ll be relying on Salesforce’s REST API (or Bulk API if the configuration data is massive).
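As a minimal illustration of the extract-and-load approach using the Salesforce CLI (which wraps the REST API), assume configuration lives in a hypothetical Discount_Rule__c custom object; the org aliases and field names are assumptions.

  # Export the configuration records from the source org into JSON files
  sfdx force:data:tree:export \
    --query "SELECT Name, Rate__c, Region__c FROM Discount_Rule__c" \
    --outputdir config-data --targetusername source-org

  # Commit config-data/ to version control, then load it into the target org
  sfdx force:data:tree:import \
    --sobjecttreefiles config-data/Discount_Rule__c.json \
    --targetusername target-org

For larger volumes, or data containing org-specific IDs, you would swap in the Bulk API and add a transformation step.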

Some AppExchange apps like CPQ solutions and FinancialForce involve extremely detailed configuration data. Vlocity built a sophisticated tool specifically to help their customers extract and load their data packs25 as part of a CI/CD process.

Continuous Delivery Rituals

The term “continuous delivery” is often used to refer simply to automating deployments. But there are several additional behaviors that truly characterize this practice. I’ve referred to these as “rituals” here, to emphasize that these behaviors need to be internalized to the point that they become automatic and need to be reinforced as “sacred” to fully achieve the benefits of continuous delivery.

Continuous delivery evolved out of continuous integration and is based on the same behavioral rituals. Those rituals are
  • Code is developed on a single trunk, with feature branches not persisting more than a day.

  • Every commit to that trunk triggers a set of automated tests.

  • If the build breaks, the team’s highest priority is to fix the build within 10 minutes (either by making a fix or reverting the changes).

In particular, paying attention to the build status and regarding it as critical to the team’s operations is a learned behavior that needs to be reinforced by team leadership and by the individual members of the team.

A common malpractice that contradicts this ethic is committing on top of a broken build. If the build is broken, everyone else should refrain from pushing their commits to the trunk and, if necessary, swarm to help resolve it.

Continuous delivery takes this process further by automatically performing deployments or validations from trunk to one or more target environments with every change. This allows for an additional layer of automated tests: unit tests that accompany the deployment and postdeployment UI tests.

The DevOps literature is filled with references to teams enacting elaborate release processes and automation, but failing to pay attention to the build status over time. The second law of thermodynamics in physics states that the entropy of any isolated system can never decrease. In other words, things fall apart unless you continually apply effort. The rituals of continuous delivery treat a green build as sacred, meaning that it is the top priority of the team to ensure that they always have a clear path for any member of the team to make a next deployment. Such behaviors are learned, but become entirely natural once ingrained.

Deploying Across Multiple Production Orgs

Sandboxes, whether for development, testing, or training, are all related to a production org. The implication is that the metadata in that production org and its related sandboxes should always remain relatively similar, and any differences are meant to be temporary. Thus deploying across sandboxes to a single production org is actually a process of making those orgs more consistent with one another and resolving any metadata differences that interfere with deployments.

It is an entirely different challenge when deploying across multiple production orgs, where the metadata is generally meant to be different. Salesforce provides methods to segregate data access within a single production org, so data isolation is not normally a reason to have more than one production org. Companies who adopt multiple production orgs generally do so because they need to serve independent and incompatible needs across different business units within their organization. See “Multiple Production Orgs” in Chapter 6: Environment Management for more information.

Nevertheless, it’s common for teams with multiple production orgs to want to share certain functionality across orgs. If that functionality is available in a managed package created by a third-party ISV, the problem is mostly solved. Managed packages ensure consistent metadata across each installation. All that remains is ensuring that the package is configured consistently and upgraded simultaneously across those orgs.

Prior to the arrival of unlocked packages, there was no easy way for enterprises to syndicate metadata across multiple production orgs and still keep it in sync. “Configuration drift” is a risk for any IT system, and since Salesforce customizations are basically 100% configuration, Salesforce orgs are often the ultimate nightmare in terms of configuration drift. A team might start by introducing a set of code and configuration from one org into another org, but differences arise and increase continually as time wears on.

Building and maintaining unlocked packages (or finding an alternative managed package solution) is the only option I would recommend for organizations who need to maintain similar functionality across more than one production org. Needless to say, they also help maintain consistency across sandboxes.

Managing Org Differences

Perhaps a corollary of the second law of thermodynamics is that the differences between any two Salesforce orgs will always increase unless you apply energy to keep them in sync. User and API interactions with a Salesforce org generally lead to data changes, and some activities such as creating or modifying reports or list views also lead to metadata changes. Some of these org differences don’t matter from the point of view of the development lifecycle; see “What’s Safe to Change Directly in Production?” in Chapter 12: Making It Better for examples.

Significant metadata differences between a related set of orgs can be divided into intentional and unintentional differences. The role of governance is to eliminate significant unintentional differences between orgs. Within the intentional differences, some are temporary while others are meant to be long-term differences.

The earlier section on “CI Jobs for Org-Level Management” in Chapter 7: The Delivery Pipeline provides an overview of how to practically manage both types of intentional difference.

To summarize, temporary differences between orgs are due to features and fixes being gradually promoted and tested. When using packages, temporary differences simply mean that there are different versions of a package installed in different orgs. The expectation is that the testing orgs will contain the latest version of a package, while the production org may lag a few versions behind while testing is in progress.

When using org-based development, temporary differences are best managed by the Git branching process, with the branches corresponding to the testing orgs carrying metadata differences that have not yet been merged into the master branch, corresponding to the production org. There is of course a contradiction between using such a branching model and following true continuous integration or trunk-based development, which is why it’s so important to gradually refactor your metadata into packages, each of which can be developed on a single trunk.

Orgs have long-term differences related to integration endpoints, org-wide email addresses, and other org-specific configuration. While the goal of version control is to gain visibility into these similarities and differences, the goal of CI/CD is to enforce control over the orgs.

Even within unlocked packages, it’s possible to accommodate org-specific differences to some degree. The most effective approaches I’ve seen use custom metadata records that cross-reference the org ID to look up org-specific data. In Apex, you can call UserInfo.getOrganizationId, and in formula fields such as workflow rules, you can reference {!$Organization.Id}. You can then perform dynamic lookups such as the one shown in Listing 9-13 to determine integration endpoints (for example).
  public static String getEndpoint(String serviceName) {
    // Look up the endpoint configured for the current org and service
    String orgId = UserInfo.getOrganizationId();
    API_Endpoint__mdt endpoint = [
      SELECT URL__c
      FROM API_Endpoint__mdt
      WHERE OrgId__c = :orgId
        AND ServiceName__c = :serviceName
        AND isActive__c = true
      LIMIT 1];
    return endpoint.URL__c;
  }
Listing 9-13

An example of looking up Custom Metadata records based on an Org ID

When managing org-level metadata, you can use that same custom metadata approach. In addition, you can dynamically filter and replace values as part of the deployment process.

XSLT is the most common syntax for searching and replacing across XML documents. See Listing 9-14 for an example of the XML from a Salesforce Approval Process and Listing 9-15 for an example of an XSLT transform. XSLT is a fairly obscure and challenging syntax and requires dealing with XML namespaces (xmlns). Parsing the XML using higher-level languages such as Node, Java, Python, or Perl may make this task easier. It’s also possible to use standard Unix tools such as sed for this purpose, although they are less precise.

The good news is that once you’ve figured out the initial syntax for your replacements, subsequent replacements are easy. Listing 9-15 is not indicating that you should maintain extensive collections of XSLT. If you choose to use XSLT, it is more maintainable to autogenerate repetitive XSLT on the fly using simpler config files to define your search terms and the replacement values.
  <?xml version="1.0" encoding="UTF-8"?>
  <ApprovalProcess xmlns="http://soap.sforce.com/2006/04/metadata">
      <!-- ... -->
      <approvalStep>
          <allowDelegate>false</allowDelegate>
          <assignedApprover>
              <approver>
                <name>[email protected]</name>
                  <type>user</type>
              </approver>
              <whenMultipleApprovers>FirstResponse</whenMultipleApprovers>
          </assignedApprover>
          <label>Step 1</label>
          <name>Step_1</name>
      </approvalStep>
      <!-- ... -->
  </ApprovalProcess>
Listing 9-14

An excerpt of the XML for an Approval Process referencing the user [email protected]

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      version="2.0"
      xmlns:sf="http://soap.sforce.com/2006/04/metadata"
      exclude-result-prefixes="sf">
      <xsl:template match="sf:approvalStep/sf:assignedApprover/sf:approver/sf:name/text()">
          <xsl:value-of select="replace(., '[email protected]', '[email protected]')"/>
      </xsl:template>
      <!-- By default, leave everything else as it is -->
      <xsl:output exclude-result-prefixes="#all"  omit-xml-declaration="yes" indent="yes"/>
      <xsl:template match="@*|node()">
          <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
      </xsl:template>
  </xsl:stylesheet>
Listing 9-15

An example XSLT transformation to replace the user [email protected] with [email protected] in an approval process
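
As noted above, a general-purpose language is often easier to read and maintain than XSLT for this kind of substitution. The following is a minimal Python sketch using only the standard library’s xml.etree.ElementTree module; the file path is hypothetical, and the script performs the same username replacement as Listing 9-15.

  import xml.etree.ElementTree as ET

  NS = 'http://soap.sforce.com/2006/04/metadata'
  # Register the default namespace so the output keeps Salesforce's xmlns declaration
  ET.register_namespace('', NS)

  # Hypothetical path to an approval process in source format
  path = 'force-app/main/default/approvalProcesses/Opportunity.Manager_Approval.approvalProcess-meta.xml'
  tree = ET.parse(path)

  # Replace the approver's username wherever it appears as a <name> value
  for name in tree.getroot().iter('{%s}name' % NS):
      if name.text == '[email protected]':
          name.text = '[email protected]'

  tree.write(path, encoding='UTF-8', xml_declaration=True)

Because the script only touches <name> elements whose text exactly matches the production username, unrelated <name> elements (such as the step name Step_1) are left untouched.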

The example in Listings 9-14 and 9-15 is a bit contrived, since Salesforce has built-in logic to translate usernames between production and sandbox forms for most metadata types. For example, if you deploy metadata containing references to [email protected] to a sandbox called “qa,” Salesforce will automatically look for a user named [email protected] and update the metadata appropriately. But you will encounter errors if there is no such user, and this automatic replacement does not happen for references to org-wide email addresses or for certain metadata references such as reports shared with particular users. Salesforce is working on a resolution by allowing “Aliases” in metadata that can vary on a per-org basis, but that is not available as of this writing.

Replacing usernames and email addresses is the most common and most tedious substitution you’re likely to encounter, but there are a variety of other situations where automatic replacements are beneficial. Another example arises during the transition between Salesforce releases, when sandboxes are upgraded before production and you can retrieve metadata containing tags that are not yet supported in your production org. Being able to strip those out on the fly is extremely helpful.
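
For example, suppose a preview sandbox returns an element that your production org’s API version does not yet accept. A small script in your deployment pipeline can strip it before deployment. The following is a minimal Python sketch along the same lines as the previous one; the tag name and file path are hypothetical placeholders.

  import xml.etree.ElementTree as ET

  NS = 'http://soap.sforce.com/2006/04/metadata'
  # Preserve Salesforce's default namespace in the output
  ET.register_namespace('', NS)

  # Hypothetical example: an element introduced in a newer release that production rejects
  UNSUPPORTED_TAGS = {'{%s}someNewSetting' % NS}

  # Hypothetical path to an object definition in source format
  path = 'force-app/main/default/objects/Account/Account.object-meta.xml'
  tree = ET.parse(path)

  # Walk every element and remove any child whose tag is not yet supported
  for parent in tree.getroot().iter():
      for child in list(parent):
          if child.tag in UNSUPPORTED_TAGS:
              parent.remove(child)

  tree.write(path, encoding='UTF-8', xml_declaration=True)

The same pattern extends naturally to rewriting values or removing attributes, which keeps these one-off fixes in scripts under version control rather than as manual pre-deployment steps.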

Dependency and Risk Analysis

As your process matures, one area that you might consider exploring is dynamically assessing the risk that may be posed by particular changes. Some changes pose a bigger risk to your org than others and might warrant careful review before they are made.

Some tooling providers, such as Panaya and Strongpoint, have released tools for Salesforce based on similar tools they built for other platforms. These tools assess metadata dependencies and rate proposed metadata changes based on their potential risk to the org. For example, adding a validation rule on a heavily used field could interfere with people’s work or automated processes if it’s not well tested.

It’s worth noting the research from the 2018 State of DevOps Report showing that change approval processes have not been shown to increase stability, yet they clearly decrease deployment velocity. This holds true even for selective change approval processes that apply only to high-risk changes.

In my opinion, the most useful step you can take to limit the risk of deployments is to track each change in version control and make frequent, small deployments from version control, so that the impact of any single deployment is minimized and any resulting problems can easily be diagnosed and remedied. In addition, your critical business processes should be validated by automated tests that run with every deployment to ensure those processes are never compromised.

Summary

Deployment is the heart of innovation delivery. I often liken the deployment process to the shipping logistics managed by companies like UPS and FedEx. Whereas there’s a lot of variation in the amount of time required to develop features and resolve bugs, deployment can be made fast and predictable. The irony of release management is that it adds little value in itself compared to other aspects of software development. It’s thus important that your team minimize the time, effort, and pain involved in deployments by automating the process and developing a steady cadence.

This chapter has outlined a variety of techniques you can use to build your own release automation. We’ve also introduced many of the excellent tools that have been built to help with this process. In the next chapter, we’ll discuss releasing as a separate activity from deploying. This distinction is extremely helpful, since it allows you to make the innovation delivery process as fast and fluid as possible without exposing your end users to ongoing, unexpected changes.
