
10. Releasing to Users

Andrew Davis, San Diego, CA, USA

Deploying and releasing are different. Deploying refers to moving code and configuration from one environment to another. Releasing means making that code and configuration available to users. Releasing depends on deploying: if capabilities are not moved to the environment that users are working in, they have no way to use them. But it’s possible to deploy without releasing, by simply “hiding” those capabilities from users until they’re ready to use them.

As an analogy, when I was a child my parents would buy presents for my brother and me in the weeks and months leading up to Christmas. But they would hide the presents in the house until Christmas morning to surprise us. Their buying the presents and bringing them to our house is like deploying. Their giving them to us on Christmas morning is like releasing.

This practice of “decoupling” deployments from releases is also known as “dark deploying,” and it is highly recommended. The influential software consultancy ThoughtWorks has advocated it1 for many years. It is closely related to the concepts of Canary Deployments and Feature Flags.

There are several reasons why this practice is so useful. First, it reduces the stress and risk associated with deployments. Deployments are often complicated affairs and can involve careful timing, monitoring, and coordination between different teams. When deployments also imply releasing to users, that simply adds to the stress and risk of the process. When features can be deployed to an environment without any risk or impact on users, those deployments can be done during normal business hours (as opposed to on weekends or evenings), since there's no concern about interfering with people's work.

Second, this makes releasing far simpler. Even if a deployment is complex, if releasing to users is simply a matter of changing a flag or permission, it can be done at any time, perhaps by an Admin, in coordination with announcements to users or customers. If something doesn’t work, the feature can be disabled just as easily as it was enabled.

Third, this allows for real in situ testing. You can enable the feature first for administrators, then perhaps for a few testers or power users. Even if you’ve tested extensively in a staging environment with full data, there’s no better confirmation that your features will work in production than actually seeing them working in production!

Fourth, this allows you to deploy functionality that isn’t finished yet. This “benefit” will seem shocking to most teams, but is actually a requirement for practicing continuous deployment. For your team to be able to regularly check their code into a common trunk, there has to be a mechanism for them to hide work that is not yet ready for release. Rather than working in a feature branch or delaying deployment, it is perfectly safe to deploy that functionality, as long as you’re confident it is disabled.

This also allows for Canary Deployments. Canary Deployments refer to releasing capabilities to a subset of users before releasing to all users. These early users act as testers (sometimes without realizing they’re doing so) to reduce the risk of a major change simultaneously affecting all users. The analogy comes from the old practice of bringing canaries into coal mines: if toxic gases filled the mine, the canaries died quickly, alerting the miners and hopefully giving them time to escape. Please never expose your users to anything quite that dramatic.

Finally, although it’s important to get feedback from real production users as early as possible, it’s more common that releases to users are made only periodically. Even when sophisticated techniques are available so that functionality could be released more often, it’s important not to confuse or overwhelm users with continuous changes to the user interface or functionality. Releasing, for example, on a monthly schedule allows for an orderly announcement of the features to be made, which your users are more likely to greet and read with enthusiasm.

Having explained the extensive benefits of separating releases from deployments, it’s important to determine whether that’s the right approach for each particular situation. In some cases, it’s overkill. The point is to know what you’re doing, and why.

Releasing by Deploying

The default approach to releasing functionality is to release by deploying. This means that prior to a deployment, certain functionality isn’t available to users, and once the deployment completes successfully, the feature is immediately available to users. A great example of when this is the correct approach is deploying bug fixes. Once they’ve been tested, there is no reason to delay the release of a bug fix in any way. It should be deployed and made available to users as early as possible.

Another case where functionality can and should be released immediately with a deployment is for the minor tweaks to UIs or the creation of new database fields that would traditionally be done by admins directly in production. As we discuss in the section “Locking Everybody Out” in Chapter 12: Making It Better, it is critically important that even Salesforce sysadmins not be allowed to modify the database, business logic, or UI directly in production. The reason is that even seemingly small and safe changes (like adding a new field to a page layout) mean that production becomes out of sync with version control. When production gets out of sync with version control, those changes do not get propagated to development or testing environments and will be overwritten the next time developers deploy updates to that functionality in production.

The key is to recognize that “Salesforce admin” actually has a dual meaning: the traditional meaning of “someone who manages a production org” and the distinctively Salesforce meaning of someone who builds and modifies applications using clicks not code. Those in the latter role are more correctly termed “App Builders.” And they should not be allowed anywhere close to your production org, except as users.

The alternative is to involve App Builders in the same process used by code-based developers to deploy changes, namely, the Delivery Pipeline. That means that App Builders should be making their changes in scratch orgs or development sandboxes, tracking them in version control, and letting the delivery pipeline do what it does best: deliver those changes to production. Read more about how to do this in the section “An Admin’s Guide to Doing DevOps” in Chapter 12: Making It Better.

Revoking your admins’ “System Administrator” privileges in production forces them to make even minor application updates using version control. But that doesn’t mean that there has to be a long lead time to releasing those features. On the contrary, reducing lead time is one of our key goals. We don’t want to burden admins unnecessarily; we just want them to track their changes and keep them in sync with developers. So minor updates (minor layout changes, creating new report types, adding fields to an object, etc.) can all be released immediately with a deployment without requiring an additional layer of hiding and releasing.

Separating Deployments from Releases

But what about bigger changes? If there’s any chance that a new feature might negatively impact users, cause confusion that requires training or announcements, or modify business logic in a way that requires coordination with other groups, you should hide or disable that feature when you deploy it. One important consequence of hiding features is that you can then deploy continuously to production. Continuous deployment refers to deploying features to production as soon as they’re built and tested. The benefit of this is that you can make each deployment very small.

Small “batch” sizes are a key concept of lean manufacturing, and small deployments are the lean software development equivalent. Small, frequent deployments minimize the risk of each individual deployment and allow bug fixes and high-priority deployments to be expedited.

The opposite, large infrequent deployments, implies that even bug fixes and high-value features have to wait in line with every other change to the system. That delays the delivery of value to end users. It also means that if a large deployment causes a problem, it’s far more challenging to discern which part of that deployment caused the problem. When a critical problem occurs as a result of a large deployment, you may have to roll back the entire deployment until the team is able to identify the root of the failure. And depending on the nature of the deployment, it may not be easy to roll back.

Separating deployments from releases is a key capability for enabling DevOps on your team, because it forces the team to think about how features can be deployed without impacting the system. This encourages good design: designing in such a way that changes are reversible and their impact can be controlled. As people in the DevOps community describe it, you’re “reducing the blast radius” of each change.

Let’s look at how this can be implemented.

Permissions

One of the oldest and simplest mechanisms to separate deployments from releases is simply to not assign users the permission to see functionality until it’s ready. As mentioned before, you should emphasize the use of Permission Sets rather than Profiles. And this is an area where Permission Sets really shine.

Each package you’re developing should have at least one Permission Set associated with it that gives access to the capabilities in that package. When you first install that package in your target org, the Permission Set is included, but it is not automatically assigned to any users. This means that the Permission Set will need to be assigned to users for them to get access to those capabilities. Voila! You’ve separated the deployment from the release.

When an Admin decides that it’s time to release the feature, they simply assign the Permission Set to the appropriate users. Want to do a Canary Deployment? Assign the Permission Set to just a subset of your users. How easy was that?
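If you prefer to script that step, a few lines of anonymous Apex will do it. The following is a minimal sketch that assumes a hypothetical Permission Set named Order_Management and a hypothetical BetaUser__c checkbox field on the User object marking the canary group; revoking the feature is just a matter of deleting the same PermissionSetAssignment records.

  // Minimal sketch (anonymous Apex): release a feature by assigning its
  // Permission Set to a subset of users. The names here are hypothetical.
  PermissionSet ps = [SELECT Id FROM PermissionSet
                      WHERE Name = 'Order_Management' LIMIT 1];
  List<User> canaryUsers = [SELECT Id FROM User
                            WHERE IsActive = true AND BetaUser__c = true];
  List<PermissionSetAssignment> assignments = new List<PermissionSetAssignment>();
  for (User u : canaryUsers) {
    assignments.add(new PermissionSetAssignment(
      PermissionSetId = ps.Id,
      AssigneeId = u.Id));
  }
  insert assignments;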

But imagine if the Permission Set associated with your package has already been assigned to all the appropriate production users. And you’re now rolling out new functionality that you don’t immediately want to expose to all users. You have several options.

If the changes you’re making might never be appropriate for some of the existing users of your package, you should create a new Permission Set. For example, if your package provides capabilities to customer support representatives, but you’re adding some features that would only be used by Live Agent chat users, you might create a new Permission Set that is specifically for chat users. That allows Admins to assign the Chat User Permission Set only when they’re ready to release those capabilities.

There is some risk, however, that in doing that your permission landscape becomes unnecessarily complex. Complex permissions create a security and maintenance burden for admins, so you should be sure that there’s a legitimate use case for a new Permission Set before you create one.

It is of course possible to simply leave your new permissions out of the Permission Set and then add those permissions manually in each environment. While that accomplishes the goal of separating deployments from releases, it means reverting to a manual workflow that can lead to errors and isn’t tracked in version control. Plus, as discussed in the section “Locking Everybody Out” in Chapter 12: Making It Better, a wise team won’t even permit this kind of manual modification. Therefore you need to consider one of the other options described in the sections that follow.

Layouts

Another time-honored method for revealing features selectively to some users and not to others is the use of Layouts. Salesforce’s page layouts allow a single object to have multiple alternative layouts for the View and Edit screens. Which layout a user sees is determined dynamically at runtime based on the user’s Profile and on the record type of that record.

Each Layout defines not only which fields of an object are shown but also which fields are read-only and which Quick Actions, Related Lists, and other embedded functionality are shown. This allows a capability such as a new Quick Action for an object to be hidden from users simply by being excluded from the Layout those users are assigned.

Similar to the Chat User Permission Set case earlier, if you are rolling out capabilities for a new team, you might want to create a new Layout to serve the needs of that team. But the same caveats mentioned earlier apply. Be extremely careful about creating a layout (or anything else) that you don’t think has a long-term purpose for the org. Everything you create adds complexity, and capabilities built for short-term use have a nasty habit of making their way into production and remaining for a very long time.

Dynamic Lightning Pages

One very nice capability now available in the Lightning App Builder is Dynamic Lightning Pages. These dynamic pages allow certain components to be shown or hidden based on filters. Filters use formulas that can reference information on a particular record (show this component when the value is greater than 100) or attributes of the User viewing the data (show this component when the User has the “BetaUser__c” flag enabled).

On Salesforce’s roadmap is the ability to add much more flexibility into App Builder, including the ability to show and hide specific fields or layout sections in the same way. App Builder’s capabilities provide a rich way to hide functionality until it’s ready.

Feature Flags

While modifying Layouts or using Dynamic Lightning Pages allows releases to be controlled at the UI level, it’s also possible to enable/disable functionality at the level of business logic. Feature Flags (aka Feature Toggles) are another practice recommended by Martin Fowler2 at ThoughtWorks. Although “Feature Flags” may be new to you, they’ve actually been around since time immemorial (time began in 1970 in the Unix universe) in the form of “settings.”

Yes, settings. You can make your own settings.

Why not? Salesforce does it. The Salesforce Setup UI contains thousands of settings that are in effect Feature Toggles, simply turning certain capabilities on and off. When Salesforce wants to roll out a prerelease feature to a select group of Pilot users, they simply have their provisioning team enable a hidden feature flag in the Pilot users’ org, which makes the new capabilities available. Voila! Similar hidden capabilities (like enabling Person Accounts) are known as “black tab” settings that Salesforce support agents can enable at an administrator’s request.

As Martin Fowler mentions, the use of Feature Flags should be a last resort, since it requires some design and adds a bit of complexity. But it’s vastly preferable to releasing by deploying if there’s any chance that a feature could cause risk or impact to users.

So how do you enable a Feature Flag? There are many possibilities.

A feature flag is simply a Boolean on/off setting that is checked at some point in your logic and that might be enabled or disabled for different users or on different orgs.

First determine where you want this Feature Flag to be checked. If the feature should be enabled or disabled in a Formula (such as selectively rolling out a new Approval Process), you can only use Custom Metadata, Custom Settings, or Custom Permissions. Features that will be checked from Flows or Process Builders can also use logic defined in Apex Invocable Methods and REST External Services. If the feature will be enabled/disabled inside code, you have even more options for how to implement the check.
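As a concrete illustration of the code case, here is a minimal sketch that uses a Custom Permission as the flag. The names New_Quoting_Flow, runNewQuotingFlow(), and runLegacyQuotingFlow() are hypothetical; FeatureManagement.checkPermission() returns true only for users who have been granted that Custom Permission (typically through a Permission Set).

  public with sharing class QuoteController {
    public void generateQuote() {
      // Hypothetical Custom Permission 'New_Quoting_Flow' acts as the feature flag.
      if (FeatureManagement.checkPermission('New_Quoting_Flow')) {
        runNewQuotingFlow();    // new behavior, released selectively
      } else {
        runLegacyQuotingFlow(); // existing behavior for everyone else
      }
    }
    private void runNewQuotingFlow() { /* ... */ }
    private void runLegacyQuotingFlow() { /* ... */ }
  }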

Custom Metadata, Custom Settings, and Custom Permissions have the benefit that they are cached by the platform and so can be accessed quickly, without risking exceeding governor limits. They are also available in the formulas used in Formula Fields, Validation Rules, Approval Process conditions, and elsewhere.
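For example, a flag stored in Custom Metadata can be read in Apex without writing a SOQL query at all. This is a minimal sketch assuming a hypothetical Custom Metadata type Feature_Flag__mdt with a checkbox field Is_Enabled__c and one record per feature:

  // Hypothetical Custom Metadata type Feature_Flag__mdt with a checkbox
  // field Is_Enabled__c; getInstance() looks up a record by DeveloperName.
  Feature_Flag__mdt flag = Feature_Flag__mdt.getInstance('New_Quoting_Flow');
  Boolean newQuotingEnabled = (flag != null && flag.Is_Enabled__c == true);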

It’s worth mentioning that originally, none of these three capabilities were available, and application configuration was stored as data in Custom Objects. That’s still the case with some kinds of applications like CPQ where “configuration data” is far more complex and extensive than can be accommodated by those mechanisms. If you’re responsible for architecting an older application that is still using Custom Objects for simple configuration, you should consider migrating to using Custom Metadata instead. Otherwise, in addition to managing metadata in version control, you will need a mechanism for tracking and deploying configuration data. See “Configuration Data Management” in Chapter 4: Developing on Salesforce for more.

Flows and Processes can also call Apex Invocable Methods to calculate the value of a flag and can even use External REST Services. For example, they could reach out to an External Service defined using an OpenAPI-compliant REST service. This kind of capability can be used to create a cross-system feature flag that can enable capabilities both in Salesforce and in other integrated systems such as SAP or Oracle.
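Here is a minimal sketch of such an Invocable Method, using hypothetical names; a Flow or Process can call it as an Apex action and branch on the returned value. In this sketch each flag is backed by a Custom Permission, but the method body could just as easily read Custom Metadata or call out to an external system.

  public with sharing class FeatureFlagCheck {
    // Exposed to Flow and Process Builder as an Apex action.
    // Inputs and outputs are lists because Flow bulkifies invocable calls.
    @InvocableMethod(label='Check Feature Flag')
    public static List<Boolean> isEnabled(List<String> featureNames) {
      List<Boolean> results = new List<Boolean>();
      for (String featureName : featureNames) {
        // Hypothetical convention: each flag is a Custom Permission of the same name.
        results.add(FeatureManagement.checkPermission(featureName));
      }
      return results;
    }
  }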

Feature Flags that will be checked by code such as Apex can use any conceivable mechanism or calculation to determine whether to enable or disable a feature. Again, Feature Flags are meant for short-term use so you should avoid over-engineering. But code does allow for a wider variety of mechanisms, even in frontend code. For example, a Lightning Component or a Visualforce page could set a cookie in a user’s browser to determine whether to display a particular feature. This could even be a user-selectable option, such as “enable compact display” that allows you to solicit user feedback.

One of the most important qualities of Custom Metadata is that its records can be deployed between orgs, just like other metadata. This means that developers can define these values and push them out as part of a package. But Custom Metadata values can also be overridden in a target org (unless they are marked as “Protected” and deployed as part of a managed package). This means that a Custom Metadata record can be used to control a feature that is turned off by default and then selectively enabled in an org to release that feature.

In general, Custom Metadata should be used instead of Custom Settings. One exception is Hierarchical Custom Settings, which are ideal for feature flags, since their value is resolved dynamically based on the org, Profile, or user. For example, a setting can be turned off at the org level but enabled for all users with a particular Profile. Hierarchical Custom Settings can even be overridden at the level of an individual user to allow that user to test a feature.
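A minimal sketch, assuming a hypothetical hierarchy Custom Setting named Feature_Flags__c with a checkbox field New_Quoting_Enabled__c:

  // Hypothetical hierarchy Custom Setting Feature_Flags__c with a checkbox
  // field New_Quoting_Enabled__c.
  // getInstance() resolves the value for the running user, falling back to
  // their Profile's value and then to the org-wide default.
  Feature_Flags__c flags = Feature_Flags__c.getInstance();
  if (flags.New_Quoting_Enabled__c == true) {
    // run the new behavior for this user
  }
  // getOrgDefaults() returns the org-level value, ignoring user and Profile overrides.
  Feature_Flags__c orgDefaults = Feature_Flags__c.getOrgDefaults();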

One final note about Feature Flags is that they generally should be removed once they’ve served their purpose. One of the risks of using Feature Flags is that their designers forget to remove them when they’re no longer needed. They then just become a useless if statement in your logic and a type of technical debt. Remove Feature Flags once the feature is stable.

Remember, as John Byrd once said, “Good programmers write good code. Great programmers write no code. Zen programmers delete code.”3

Branching by Abstraction

There’s no problem in Computer Science that can’t be solved by adding another layer of abstraction to it (except for the problem of too many layers of abstraction).

—Fundamental Theorem of Software Engineering (frequently attributed to David Wheeler)

The basic idea of separating deployments from releases is to hide work from end users, either because it’s not ready for them or because they’re not ready for it.

So far, we’ve discussed multiple increasingly sophisticated ways to hide functionality from users. Branching by abstraction has the fanciest name of any of these, but is conceptually simple to understand. This approach was promoted in the book Continuous Delivery, but, as indicated by the preceding quote, the concept dates back to the earliest days of computer science.

Branching by abstraction provides a way to gradually transition from an old version of a component to a new version of a component by adding an abstraction layer in between. This technique is useful when that transition is risky or might take some time to implement, since it allows you to make the transition gradually without branching in version control or delaying deployments. This technique can be used in any technology where one component can delegate processing to another component. In Salesforce this means you can use it inside code, Flows, or Processes.

Figure 10-1 illustrates how this works. If you decide that a certain component needs to be replaced, you create an abstraction layer that can be called instead of referencing the component directly. Initially, that abstraction layer simply passes all requests on to the component, making the initial implementation trivial and safe. As you then begin work on a new version of the component, you can add some decision criteria into that abstraction layer that allows you to delegate processing either to the old component or to the new component.
Figure 10-1. Branching by abstraction

A practical example of how this works is if you want to transition from using a Custom Object to store configuration data to using Custom Metadata. Let’s say your current code includes many SOQL queries that look up values in the Custom Object to determine the appropriate behavior. You’ve determined that you’ll benefit from using Custom Metadata to keep that configuration consistent across environments. This simple example shows how you could branch by abstraction and gradually refactor your codebase to support the new approach.
  1. Listing 10-1 shows the initial version of CallingCode.cls, which directly makes a SOQL query against your configuration object. You might have similar blocks of code scattered throughout your codebase.

  2. Create a class called ConfigService.cls as shown in Listing 10-2, with a method called isFeatureEnabled(). This is your abstraction layer.

  3. Have isFeatureEnabled() call another method in that class called theOldWay(), which performs your original SOQL query against the original configuration object.

  4. Refactor CallingCode.cls as shown in Listing 10-3 to call ConfigService instead of querying your configuration object directly. You have now added a layer of abstraction without changing the underlying logic.

  5. Now add a method to ConfigService called theNewWay() that implements your new way of getting that configuration information, in this case by querying Custom Metadata instead.

  6. You can now develop and test the new way of accessing that configuration and gradually roll that change out across the entire codebase. You can use any logic you want to selectively enable the new way. You have now branched by abstraction.

  public with sharing class CallingCode {
    public CallingCode() {
      String product = 'myProduct';
      Boolean enabled = [SELECT feature_enabled__c
                         FROM Configuration_Object__c
                         WHERE product__c = :product][0]
        .feature_enabled__c;
    }
  }
Listing 10-1

The original state of CallingCode.cls, which directly performs SOQL queries of a configuration object

  public with sharing abstract class ConfigService {
    // The abstraction layer: callers ask ConfigService rather than querying
    // configuration storage directly. Swap the delegation below (or add your
    // own decision logic) to move from the old implementation to the new one.
    public static Boolean isFeatureEnabled(String product) {
      // return theOldWay(product);
      return theNewWay(product);
    }
    // The original implementation: configuration stored in a Custom Object.
    private static Boolean theOldWay(String product) {
      return [SELECT feature_enabled__c
              FROM Configuration_Object__c
              WHERE product__c = :product][0]
        .feature_enabled__c;
    }
    // The new implementation: configuration stored as Custom Metadata records.
    private static Boolean theNewWay(String product) {
      return [SELECT feature_enabled__c
              FROM Configuration_Metadata__mdt
              WHERE product__c = :product][0]
        .feature_enabled__c;
    }
  }
Listing 10-2

ConfigService.cls, an abstraction layer that contains both the old and the new ways of accessing configuration data

  public with sharing class CallingCode {
    public CallingCode() {
      String product = 'myProduct';
      Boolean enabled = ConfigService.isFeatureEnabled(product);
    }
  }
Listing 10-3

CallingCode.cls after adding the abstraction layer instead of directly accessing configuration data

In most cases, branching by abstraction should be a temporary solution, and the abstraction layer can be removed after the change has been completely tested. In this simple example, though, ConfigService is a useful way to avoid duplicating SOQL queries throughout the codebase, so it’s beneficial to keep it in place.

Summary

Releasing is often confused with deploying. The purpose of this chapter is to highlight the differences between them and to outline techniques for deploying without immediately releasing. In many cases, this means building your customizations so they can be hidden or exposed dynamically. There is some cost in terms of time and complexity in adding such a layer, but in many cases there are enormous benefits in doing so. Such practices allow you to control the risk of deploying changes by enabling you to turn functionality on or off as needed. Fast rollbacks, selective rollouts, A/B testing, and more become possible once you’ve implemented this practice.

This concludes the main body of this book, the practice of Innovation Delivery. Nestled in between development and operations, the software delivery phase has historically been a viper’s nest of risk, pain, inefficiency, and confusion. The Salesforce platform and those who build on it have evolved to a level of sophistication where it is now critical to tame the delivery process.

The logistics companies who handle global and local trade (from container ships to drone delivery) are unsung heroes in our modern economy. From hand-carried parcels, to the Pony Express, to modern expedited delivery, the practice of shipping and delivering physical goods has become steadily more reliable over hundreds of years.

At the heart of the DevOps movement is applying similar process improvements to software delivery, enabling it to become steadily faster, safer, and more predictable. By implementing the processes described here, and leveraging the many excellent Salesforce deployment, tracking, and testing tools, your team can begin to free up more and more of their time for creative, high-value work. It’s my sincere wish that all Salesforce development teams can eventually learn to “run the tightest ship in the shipping business.”
