Foreword

In the early 1990s I was in my first graduate job, in the middle of a recession, and the company was running a tough round of layoffs. Someone noticed that each victim’s UNIX account was being locked just before the friendly HR person came to tap them on the shoulder and escort them from the building. They wrote a small script that monitored changes to the user password file and displayed the names of users whose accounts had just been locked. We suddenly had a magic tool that would identify the next target just before the hatchet fell...and an enormous security and privacy breach.

In my second job, as a programmer at a marketing firm, there were lots of password-protected Microsoft Word documents flying around, often with sensitive commercial information in them. I pointed out how weak the encryption was on these files, and how easy it was to read them using a freely available tool that was making the rounds on Usenet (your grandparents’ Google Groups). No one listened until I started emailing the files back to the senders with the encryption removed.

Then I figured most people’s login passwords were probably too weak as well. I got the same lack of response until I wrote a script that ran a simple password-cracking tool on a regular basis, and emailed people their login passwords. There was a pretty high hit rate. At that stage I didn’t know anything about information theory, Shannon entropy, attack surface areas, asymmetric cryptography—I was just a kid with a password-cracking tool. But I became the company’s de facto InfoSec Officer. Those were simpler times!

Over a decade later, as a developer at ThoughtWorks building a large-scale energy trading platform, I received what is still my favorite ever bug report. One of our testers noticed that a password field wasn’t enforcing its maximum length, which should have been 30 characters. However, she didn’t log the bug as “30-character password limit isn’t being checked.” Instead, she thought “I wonder how much text I could shove into that password field?” By a process of trial and error, the final bug report was “If you enter more than 32,000 characters in the password field, the application crashes.” She had turned a simple validation error into a denial-of-service exploit, crashing the entire application server just by entering a suitably crafted password. (Some years later I was at a software testing conference where the organizers decided to use iPads for conference registration, running an app they had written themselves. I learned you should never do this with software testers: a tester friend tried registering as “Julie undefined” and brought the whole system to its knees. Testers are evil.)

Fast-forward another decade or so to the present day, and I watch in dismay as nearly every week yet another data security breach of a high-profile company appears in the news. I could cite some recent ones, but they will be ancient history by the time you read this, and newer, bigger, more worrying data hauls of passwords, phone numbers, credit card details, social security numbers, and other sensitive personal and financial data will have appeared on the dark web, only to be discovered and reported months or years later to an increasingly desensitized and vulnerable public.

Why is this picture so bleak? In a world of free multifactor authentication, biometric security, physical tokens, password managers like 1Password (https://1password.com/) and LastPass (https://www.lastpass.com/), and notification services like Have I Been Pwned (https://haveibeenpwned.com), you could be forgiven for thinking we’ve got security covered. But as Dan, Daniel, and Daniel point out in the introduction (I felt obliged to write this foreword on the basis that there weren’t enough people called Daniel involved), there is no point having strong locks and heavy doors if a malicious actor can simply lift the doors off their metaphorical hinges and walk off with the prize.

There is no such thing as a secure system, at least not in absolute terms. All security is relative to a perceived threat model, and all systems are more or less secure with respect to that model. The goal of this book, and the reason its content has never been more urgent or relevant, is to demonstrate that security is first and foremost a design consideration. It isn’t something you can graft on at the end, however well-intentioned you are.

Security is in the data types you choose, and how you represent them in code. Security is in the domain terms you use, and how faithfully you model domain concepts and business rules. Security is in reducing the cognitive distance between the business domain and the tools you build to address customer needs in that domain.

As the authors demonstrate again and again throughout this book, reducing this cognitive distance eliminates entire classes of security risk. The easier we can make it for domain experts to recognize concepts and processes in the way we model a solution, and in the corresponding code, tests, and other technical artifacts, the more likely they are to spot problems. They can call out the discrepancies, inconsistencies, assumptions, and all the other myriad ways we build systems that don’t reflect the real world: online bookstores where you can buy a negative number of books, password fields that allow you to submit a decent-sized sonnet, and sensitive account information that can be viewed by casual snoopers.

Secure by Design is my favorite kind of book for two reasons. First, it weaves together two of my favorite fields: Application and Information Security, in which I am an enthusiastic amateur, and Domain-Driven Design, in which I hope I can claim some kind of proficiency. Second, it is a practical, actionable handbook. It isn’t just a call to arms about treating security seriously as a design activity, which would be a worthy goal in its own right; it also provides a raft of real examples, worked through from design considerations to actual code listings, that put meat on the bones of security by design.

I want to note a couple of standout examples, though there are many. One is the treatment of “shallow design,” exemplified by using primitive types like integers and strings to represent rich business concepts. This exposes you to risks like the password exploit above (a Password type would be self-validating for length, say, in a way a string isn’t) or the negative book order (a BookCount type wouldn’t allow negative values, whereas an integer happily does). Reading this section, as someone who has been writing software professionally for over 30 years, I wanted to reach back through time and hit my younger programming self on the head with this book, or at least leave it mysteriously on his desk with an Alice in Wonderland-style Read Me label on it.
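To make the idea concrete, here is a minimal sketch of my own (not one of the book’s listings) of what such self-validating domain primitives might look like in Java; the names Password and BookCount, and the 30-character limit, are simply borrowed from the examples above:

final class Password {
    private static final int MAX_LENGTH = 30; // the limit from the bug report above

    private final char[] value;

    Password(char[] value) {
        // The invariant lives in the type: an empty or over-long Password
        // can never be constructed, so the 32,000-character input is
        // rejected at the boundary instead of crashing the server.
        if (value == null || value.length == 0 || value.length > MAX_LENGTH) {
            throw new IllegalArgumentException(
                "password must be between 1 and " + MAX_LENGTH + " characters");
        }
        this.value = value.clone(); // defensive copy
    }
}

final class BookCount {
    private final int value;

    BookCount(int value) {
        // An int will happily hold -3; a BookCount never will.
        if (value < 0) {
            throw new IllegalArgumentException("book count cannot be negative");
        }
        this.value = value;
    }

    int value() {
        return value;
    }
}

Once values like these can only exist in a valid state, whole categories of bad input have nowhere to live in the rest of the program.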

Another exemplar is the topic of poor error handling, which is a huge source of potential security violations. Most modern programming languages have two types of code paths: the ones where things go OK, and the ones where bad things happen. The latter mostly live in a twilight zone of catch-blocks and exception handlers, or halfhearted guard clauses. As programmers, our cognitive biases conspire to convince us we have covered all the cases. We even have the hubris to write comments like // this can’t happen. We are wrong again and again.

The late Joe Armstrong, an amazing systems engineer and inventor of the Erlang language, used to say that the only reliable way to handle an error is to “Let it crash!” The contortions we go through to avoid “letting it crash” range from the “billion-dollar mistake” of null pointers and their tricksy exceptions, through nested if-else stacks and the will-they-won’t-they fall-through logic of switch blocks, to leaning on our IDEs to generate the arcane boilerplate code for interpolating strings or evaluating equality.

We know smaller components are easier to test than larger ones. They have exponentially fewer places for bugs to hide, and it is therefore easier to reason about their security. However, we are only beginning to understand the security implications of running a system of hundreds or thousands of small components—microservices or serverless architectures—and the fledgling domains of Observability and Chaos Engineering are starting to gain mindshare in a way DevOps and Continuous Delivery did before them.

I see Secure by Design as an important contribution to this trajectory, one that focuses on the very heart of the development cycle: the domain-modeling activities that DDD folks refer to as knowledge crunching, leveraging the ideas of ubiquitous language and bounded contexts to bring security to the fore in programming, testing, deployment, and runtime. Shallow modeling and post hoc security audits don’t cut it anymore.

We can’t all be security experts, but we can all be mindful of good Domain-Driven Design and its consequent impact on security.

Daniel Terhorst-North, Security Amateur, London, July 2019
