As mentioned, the security industry hasn’t been a part of the DevOps journey. As shown in Figure 2-1, security processes tend to gate the continuous process instead of merging into it. Notably, security processes are incapable of the following:
Having security work against the business motivation of speed and independence can’t end well. Development teams must choose between slowing down, which hurts business outcomes, and circumventing the security controls, which introduces significant risk. Neither of these is a viable long-term option, so businesses must change their security practices to match the DevOps reality.
To secure the business without slowing it down, companies must adopt a dev-first approach to security.
Security programs must always start with an understanding of the risks you face. If you don’t know what you’re looking to secure or who you’re protecting yourself from, you’re likely to place your guards in the wrong place. However, past that understanding, you have a choice to make: do you prioritize the need to track that risk, or the need to protect against it?
Most security organizations, whether consciously or not, choose the former. The teams tasked to secure the organization (and holding the budget to do so) are the ones in charge of audit and governance, and they naturally place the need to understand your risk at the top. This results in products that are focused on finding problems and are attuned to a security person’s needs and understanding. These tools are then retrofitted into development pipelines but fail to deliver what developers need.
As its name implies, dev-first security means reversing the order: putting the developer’s needs at the top of the priority list when looking to secure your applications. It means asking yourself, “If I’m a developer looking to build a secure application, what do I need to do so successfully?” These tools still care about the security team’s needs and understanding your security posture, but they focus on helping developers secure what they build.
This is a completely different angle and results in building entirely different solutions to the same security threat. Let’s review a few key differences.
Developers operate in a different context and have different expertise than security folks do.
The most obvious difference is the lower level of security expertise. Most developers won’t know what Common Vulnerabilities and Exposures (CVE) or Common Vulnerability Scoring System (CVSS) means,1 or that they should ask whether a known vulnerability has a published exploit. These details must be simplified in the security tools developers are given: for instance, simplifying CVSS to three or four levels or highlighting important attributes like exploit maturity. Figure 2-2 shows an example of such built-in expertise.
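As a concrete illustration, the following Python sketch collapses a raw CVSS score into a handful of developer-facing severity levels and surfaces exploit maturity as a simple attribute. The thresholds follow the common CVSS v3 rating bands; the function and field names are illustrative, not taken from any specific tool.

```python
# Sketch: simplify a raw CVSS score into a few severity levels and
# flag exploit maturity, so developers don't need CVSS expertise.
# Thresholds mirror the common CVSS v3 rating bands.

def simplify(cvss_score: float, has_published_exploit: bool) -> dict:
    if cvss_score >= 9.0:
        level = "critical"
    elif cvss_score >= 7.0:
        level = "high"
    elif cvss_score >= 4.0:
        level = "medium"
    else:
        level = "low"
    return {
        "severity": level,
        "exploit_maturity": "mature" if has_published_exploit else "no known exploit",
    }

print(simplify(9.8, True))
# {'severity': 'critical', 'exploit_maturity': 'mature'}
```

The point is not the exact thresholds but the translation: a developer sees "critical, with a mature exploit" instead of a raw score and a standards acronym.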
A less-known difference is that developers see vulnerabilities through the lens of the app, not through risk. When a security person sees a vulnerable library, they see risk. If they zoom out, they’ll want to see other vulnerable libraries or other apps with the same flaw. Therefore, most security tools focus on governance dashboards that show lists of all vulnerable assets.
For developers, though, the first question isn’t about risk, but rather, about how the library connects to and affects the app. They don’t look for other vulnerable libraries, but for other properties of the same library, such as how outdated it is, so they can weigh the impact of upgrading or replacing it. Figure 2-3 shows the difference between a security-focused flat list of vulnerabilities, which focuses on the risk, and a dev-first tree view of vulnerabilities, which focuses on relation to the app.
Last, although developers may lack security expertise, their knowledge of the application is far better than that of the security team. For instance, a developer is likely to know whether a library is only used during development, or whether certain functionality is only accessible to administrative users. A good dev-first security solution can leverage this knowledge for better usability and results.
No tool lives in isolation. Developers and security people alike judge tools in the context of the other tools they use around them and expect solutions to be good citizens in their local ecosystem. A tool that deviates from the norms demands more attention and thought, takes longer to master, and introduces cognitive load whenever you switch to it. Unless you are highly motivated to use such an odd duck, it’s best avoided.
Although this statement is true for both teams, those neighboring tools are massively different. Security people use a large number of auditing, governance, and compliance tools. They therefore expect other solutions to work well in an audit context, offering functionality such as rich listing of results, exports to PDF, and integration with risk dashboards. They have high tolerance for long tasks and typically assume an expert user who wants to see all the info.
For developers, the surrounding tools are build tools. They focus on helping individuals write code faster, identify and resolve problems locally, and collaborate with teammates in a version-controlled fashion. They assume highly technical users, and look for automated and fast tests to achieve yes/no answers so that they can be added to the build.
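The kind of fast, binary check developers expect can be sketched as a small build gate: it fails the build only when findings cross an agreed severity threshold, and its exit status is the yes/no answer. The report format and default threshold here are assumptions for illustration.

```python
# Sketch: a fast yes/no build gate over scan findings.
# A nonzero return value is the conventional signal to "break" the build.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list, fail_at: str = "high") -> int:
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    for f in blocking:
        print(f'BLOCKING: {f["id"]} ({f["severity"]})')
    return 1 if blocking else 0

findings = [{"id": "VULN-1", "severity": "medium"},
            {"id": "VULN-2", "severity": "high"}]
exit_code = gate(findings)
print("build", "failed" if exit_code else "passed")
```

In a real pipeline this value would be passed to the shell as the process exit status; the threshold itself should be a decision the dev team owns, not one imposed on it.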
These are just a few of many areas of difference. Developer tools also offer different user experience (UX) patterns than security tools do, are more often available to try self-serve, and typically offer rich and well-documented APIs. Beyond the product, developer tooling companies behave differently, leaning toward more community collaboration and transparency, focusing on building versus risk reduction, and so on.
Dev-first security solutions need to embrace the dev tooling ecosystem as their peers. This requirement applies to commercial, open source, and home-grown tools alike, because at the end of the day, they all need to be a natural part of a developer’s daily routine. These tools also need to satisfy the needs of the security realm, offering the right views and integrations, but the priority has to be clear: developer experience comes first.
Beyond context and experience, a developer looking at a vulnerability has a different job than the security person has. Security leaders are tasked with understanding the flaws and risks in the system and helping prioritize and act on that understanding. As a result, security tools and practices excel at finding security flaws, assessing their technical and business risks, and managing this list of vulnerabilities over time.
Whereas an auditor’s job is to find and prioritize issues, a developer’s job is to fix them. Good auditors and security teams also aspire to have issues fixed, but they have little control over accomplishing this. In fact, many security tools advertise logging a bug-tracking ticket as a remediation action, even though it doesn’t actually fix anything. For developers, a tool that reports problems without helping to resolve them isn’t seen in a favorable light.
A dev-first security solution must therefore have a strong focus on fixing issues. For every reported issue, you should ask yourself what a developer needs to do to resolve the issue. How can the tool help simplify this task? The answers will help you walk the extra steps from the auditor’s point of view to the developer’s needs.
Ideally, the tool would be able to remediate the problem automatically, saving the developer precious time. When that’s not possible, consider what scaffolding you can still offer to simplify remediation: for instance, by prompting developers to approve or decide on the fix but still automating the process itself. See Figure 2-4.
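One way to think about such scaffolding is a propose-then-approve flow: the tool works out a concrete fix (here, a dependency upgrade) and automates applying it, but only after a developer says yes. The data shapes and function names below are hypothetical, not a real tool’s API.

```python
# Sketch: remediation scaffolding that keeps the developer in control.
# The tool proposes a fix and automates the mechanics, but the
# developer approves or declines it.

def propose_fix(vuln: dict) -> dict:
    return {
        "package": vuln["package"],
        "from_version": vuln["version"],
        "to_version": vuln["fixed_in"],
    }

def remediate(vuln: dict, approve) -> str:
    fix = propose_fix(vuln)
    prompt = (f'Upgrade {fix["package"]} '
              f'{fix["from_version"]} -> {fix["to_version"]}?')
    if approve(prompt):  # developer decides; the tool does the work
        return f'applied: {fix["package"]}@{fix["to_version"]}'
    return "skipped: left for the developer to handle"

vuln = {"package": "minimist", "version": "1.2.0", "fixed_in": "1.2.6"}
print(remediate(vuln, approve=lambda prompt: True))
# applied: minimist@1.2.6
```

The `approve` callback is the crucial design choice: automation does the rote work, but the decision stays with the person who owns the code.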
One note of caution, though: there’s a difference between automating remediation and commandeering control. The security world has a long history of solutions that aim to detect and block attacks unilaterally, ranging from intrusion prevention systems (IPSs) to web app firewalls (WAFs). More recently, runtime application self-protection (RASP) solutions have been making the same claim, instrumenting applications to build security controls into them automatically.
These solutions can be very valuable in reducing risk, but they are not developer-friendly tools. They modify the originally coded functionality (typically after testing) and carry a real risk of breaking legitimate user actions. More important, they take away the developer’s ability to prevent or fix such breakage. Instead of helping developers secure their apps, they convey a message that the apps are bound to be broken and aim to patch them after the fact.
Dev-first security solutions should simplify remediation but aim to do so as part of the developer’s job, instead of taking over the developer’s responsibilities.
Now that we understand what dev-first security means, let’s discuss how it relates to two other common terms in this field: shift left and DevSecOps.
The term shift left has been used by the AppSec industry for decades. It originates from a waterfall development process visual like the one in Figure 2-5, depicting a left-to-right release process, starting with design and coding and proceeding through building and testing to deployment and operation.
In this flow, security audits were typically done only during deployment (or in production). Security findings required going all the way back to the start, making them costly and time consuming to fix. The call to shift left advocated moving security controls earlier so that issues could be found and remediated sooner, and thus at lower cost. The term is not reserved for security and is often applied to other aspects of software quality as well.
In the DevOps era, shifting left isn’t quite as clear. The core concept behind it is as valid as ever: the shorter the gap between writing a bug and finding it, the cheaper it is to fix it. However, two key parts are missing.
First, there is no “left” in a continuous process, which is rightfully depicted as an infinite loop. DevOps accepts that certain bugs will only be found in production and is willing to sacrifice some level of verification in favor of a faster delivery cycle. It relies on methodologies like observability to help find such issues post-deployment and doesn’t necessarily see that as inferior to earlier detection. In other words, it’s often better to find an issue shortly after deployment than to add a costly and slow security test to continuous integration/continuous delivery (CI/CD) pipelines, even if it’s “further left.”2
Second, shift left doesn’t reflect the change in ownership and drive for independent teams. The truly important change isn’t whether you shift the technical testing left, but rather, whether you shift the ownership of such testing to the development team. Pipeline tests that require security teams to review their results, due to false positives or required expertise, can be more harmful than post hoc audits. Each dev team should be equipped and empowered to decide what the best place and time is to run the tests, adapting it to their workflows and skills.
If you have to pick a direction, you should focus less on shifting left and more on going top to bottom. This means replacing a controlling, dictatorial security practice with an empowering one, as I mentioned earlier.
The other common term used to describe the transformation required in the security industry is DevSecOps.
In its broadest sense, DevSecOps means embedding security into DevOps practices. It’s a term trying to work itself out of a job, reaching a state where security should just be a natural part of DevOps, not something that needs to be called out separately. In the meantime, it’s used to represent the required change and allow security people and programs to identify themselves with it.
DevSecOps is a very broad term, and those using it typically mean one of three things:
Adapting security to DevOps technologies, such as containers, infrastructure as code (IaC), or elastic cloud compute itself
Adapting security to DevOps practices, such as continuous deployment, elastic scaling, or observability
Adapting security to the DevOps shared ownership mindset, driving cultural change toward seeing security as everybody’s responsibility
All three are required changes, and their order evolves from tactical to strategic. In the short term, the need to secure technologies that DevOps teams embrace is urgent and top of mind. In the long term, you have to adapt your security culture and change security practices to enable security to keep up with the rest of the business.
DevSecOps and dev-first security have similar aspirations of adapting security to the DevOps world. They have a lot in common and thus can often be used interchangeably, but they have a different starting point: one in dev, and the other in ops.
DevSecOps typically focuses on ops changes. It rotates around the convergence of SecOps and DevOps practices and the post-deployment world, covering topics such as managing cloud infrastructure, runtime observability, and incident response processes. Like modern ops teams before them, who renamed themselves DevOps teams to signal their different approach, we see modern SecOps teams calling themselves DevSecOps teams.
In contrast, dev-first security revolves more around developers and their work. It focuses on the content in repositories (repos), code editing and review processes, pipelines, and more, embedding security controls into them. It deals primarily with the work to get secure applications to production, whereas DevSecOps gives more attention to post-deployment work.
In addition, dev-first security is never used to describe a team, only an approach. Tools and practices may describe themselves as dev-first, but I’ve yet to encounter a dev-first security team. That said, the need to signal your different approach is felt in AppSec too, and thus many forward-thinking AppSec teams have renamed themselves product security teams, reflecting their broader scope and dev-first security thinking.
The line between DevSecOps and dev-first security is blurry. Many DevOps professionals identify as developers, and developers have significant post-deployment responsibilities. In this book, I focus on dev-first security, but I find both terms and movements valuable.
Dev-first security is a new approach to security. It favors adoption by those who can fix problems over those who manage them. However, a security program that cares only about the needs of developers is not good enough either. A good dev-first security program has to help security teams govern successfully and keep the organization secure.
It requires two teams working together (see Figure 2-6):
An empowered development team, equipped with the right tools and mandate to secure what they build
A supportive security team, focused on making security easy for developers and providing the security expertise and governance required to keep the company safe
1 A CVE number is a universal ID for a known vulnerability, used to synchronize between different tools referring to the same issue. The CVSS is a standard way of defining the severity of a vulnerability, using a small set of predefined criteria.
2 CI, often called a “build,” processes source code and packages it. CD ships this package to the next step, such as deploying it to the cloud. Tests are often incorporated into both steps to find flaws and “break” the process if certain conditions aren’t met.