Last Thursday, a consortium of intelligence agencies spanning Australia, Canada, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States published a document called Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default. Its aim is to push vendors to deliver products that are secure-by-design and -default.
Per their guidance, secure-by-design “means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.” And secure-by-default “means products are resilient against prevalent exploitation techniques out of the box without additional charge.” Both are admirable, worthy pursuits.
But a natural question for readers after digesting this guidance is, “Okay, secure-by-design and secure-by-default sound great… but how do I implement all of this in my organization?”
Well, darling readers, the timing could not be more fortuitous. You see, the obsessive passion project book I wrote throughout last year – Security Chaos Engineering: Sustaining Resilience in Software and Systems – just came out. And I cannot overstate how perfectly the book answers that very valid question of how we can achieve these noble pursuits in practice.
In this post, I’ll map the recommendations in CISA et al.’s report (which I’ll refer to as SBDD for short) to specific chapters and sections in my own book, optimizing for brevity since our time is valuable. If the report is an apéritif, then consider the book a flavorful and fragrant multi-course meal to leave you fulfilled and flourishing.
Software Product Security Principles
There are three principles the SBDD guidance outlines (page 6), two of which are covered in the book at some length:
- The burden of security should not fall solely on the customer
- Build organizational structure and leadership to achieve these goals
To really immerse yourself in this first principle, skip to Chapter 7 in the book within the section “Designing a Solution.” I suspect you’ll especially enjoy the sidebar on “Solution Design and the Thermodynamics of Effort,” which crystallizes the dynamic that effort can only be shifted around between humans, not destroyed.
The second principle defined by SBDD is also the focus of Chapter 7, which introduces the concept of a Platform Resilience Engineering team dedicated to sustaining resilience in software and systems. Platform resilience engineering teams treat software resilience like a product:
A platform engineering approach to resilience treats security as a product, as something created through a process that provides benefits to a market. Platform engineering teams treat their internal customers’ outages as their own outages, as a call to action to build a better product for them. (~page 272 of Security Chaos Engineering: Sustaining Resilience in Software and Systems)
This topic also relates to one spot where I differ with their report: the emphasis on “IT departments.” Securing corporate laptops radically differs from assessing software quality; they are distinct skill sets. Chapter 7 untangles this distinction and offers a path forward for the latter (assessing software quality, of which security is a part).
Software manufacturers should perform a risk assessment to identify and enumerate prevalent cyber threats to critical systems, and then include protections in product blueprints that account for the evolving cyber threat landscape. (SBDD page 4)
Their Secure-by-Design call-to-action resembles the E&E Assessment Approach I describe in Chapter 2, albeit in different lingo (since “risk” is a slippery concept we largely eschew). The E&E Resilience Assessment reflects two “tiers” of assessment: Evaluation and Experimentation.
Tier 1 (Evaluation) emphasizes the value of creating decision trees for critical functionality, which directly maps to the SBDD guidance of “identify and enumerate prevalent cyber threats to critical systems.” The decision trees you create should capture the paths attackers might take to achieve a particular goal, as well as capture existing and prospective security mechanisms – satisfying this criterion.
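To make the Tier 1 idea concrete, here is a minimal sketch of what a decision tree for critical functionality might look like in code. The goal, attacker steps, and mitigations below are hypothetical illustrations of my own, not content from the book or from SBDD; the point is that once the tree is written down, finding branches with no existing security mechanism becomes a mechanical check.

```python
# Hypothetical sketch: an attacker decision tree as nested dicts.
# Steps and mitigations are illustrative, not from the book or SBDD.
attack_tree = {
    "goal": "exfiltrate customer data",
    "paths": [
        {
            "step": "phish an employee credential",
            "mitigations": ["FIDO2 MFA", "phishing-resistant SSO"],
            "then": {
                "step": "query the production database",
                "mitigations": ["least-privilege DB roles", "query audit logging"],
                "then": None,
            },
        },
        {
            "step": "exploit unauthenticated API endpoint",
            "mitigations": [],  # a gap: no existing control on this branch
            "then": None,
        },
    ],
}

def unmitigated_steps(tree):
    """Walk each attacker path and collect steps with no mitigations."""
    gaps = []
    for path in tree.get("paths", []):
        node = path
        while node is not None:
            if not node["mitigations"]:
                gaps.append(node["step"])
            node = node.get("then")
    return gaps

print(unmitigated_steps(attack_tree))  # → ['exploit unauthenticated API endpoint']
```

Even a toy representation like this surfaces the branch an attacker would prefer: the one nobody bothered to defend.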
In the book, we take it further than what SBDD suggests, however. It isn’t enough to “include protections in product blueprints”; we must verify that those protections work as expected. This is Tier 2 (Experimentation), in which we conduct resilience stress tests – what we refer to as “chaos experiments” in software land – to observe how our system behaves and responds in an adverse scenario.
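A Tier 2 resilience stress test can be surprisingly small. Below is a minimal sketch, with hypothetical function names of my own invention: we state a hypothesis (“the page still renders when a dependency times out”), inject the fault, and verify the system degrades gracefully instead of falling over.

```python
def fetch_recommendations(user_id, *, fail=False):
    """Stand-in for a flaky downstream service (hypothetical)."""
    if fail:
        raise TimeoutError("recommendation service timed out")
    return ["item-1", "item-2"]

def product_page(user_id, *, inject_fault=False):
    """The system under test: should degrade gracefully, not crash."""
    try:
        recs = fetch_recommendations(user_id, fail=inject_fault)
    except TimeoutError:
        recs = []  # fall back to an empty shelf rather than an error page
    return {"user": user_id, "recommendations": recs}

# Experiment: hypothesis is that the page still renders when the
# dependency is down. Inject the fault and observe the behavior.
result = product_page("u42", inject_fault=True)
assert result["recommendations"] == []
print("hypothesis held: page degrades gracefully")
```

Real experiments run against realistic environments rather than in-process stubs, but the loop is the same: hypothesize, inject the adverse condition, observe, refine.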
Another element of SBDD’s guidance is also satisfied by the E&E Resilience Assessment:
The authoring agencies recommend manufacturers use a tailored threat model during the product development stage to address all potential threats to a system and account for each system’s deployment process. (SBDD page 4)
We need only conduct the Tier 1 assessment to satisfy this; nevertheless, we still recommend both the Evaluation and Experimentation phases to fully assess system resilience.
Manufacturers are encouraged to make hard tradeoffs and investments, including those that will be “invisible” to the customers, such as migrating to programming languages that eliminate widespread vulnerabilities. (SBDD page 5)
Prioritize the use of memory safe languages wherever possible. (SBDD page 8)
The value of memory-safe languages in sustaining software resilience receives special focus in a few sections of my book:
- Chapter 4 in the section called “Standardization of Raw Materials,” which includes a tl;dr of what memory safety is, why it matters, how to select the right language when building software, and how to handle migrations or otherwise cope with lingering C/C++ code.
- Chapter 7 in the section called “Substitute Less Hazardous Methods or Materials,” since, per NSA guidance elsewhere as well, we should treat C/C++ code as hazardous materials to avoid when possible.
Secure software components
Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from verified commercial, open source, and other third-party developers to ensure robust security in consumer software products. (SBDD page 8)
Chapter 4 in my book covers this “best practice” as well. It’s honestly tricky for me to pinpoint where precisely to point you because it’s such a pervasive topic woven throughout the chapter. But my favorite warning box in the book is relevant:
Angry Scorpion is in the section “‘Boring’ Technology Is Resilient Technology,” but I also recommend the following sub-sections in Chapter 4 (really, just read the whole chapter; it’s the longest one by far in the book, but bursting with practical opportunities):
- Standardization of Raw Materials
- Standardization of Patterns and Tools
- Integration Tests, Load Tests, and Test Theater
- Modularity: Humanity’s Ancient Tool for Resilience
Web template frameworks & parameterized queries
Use web template frameworks that implement automatic escaping of user input to avoid web attacks such as cross-site scripting. (SBDD page 8)
Both Chapter 3 and Chapter 4 cover the principle of “Choose Boring” technology, which applies here. We don’t want to DIY something and potentially miss important security characteristics when there are readily available frameworks and libraries that are well-vetted.
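To show what automatic escaping buys you, here’s a minimal sketch using Python’s standard-library `html.escape` (mature template frameworks do this for you on every interpolation, which is the whole point). The `render_comment` function and the payload are hypothetical examples of mine:

```python
from html import escape

def render_comment(comment: str) -> str:
    # Escaping at the template boundary neutralizes injected markup,
    # the same job an auto-escaping template framework does implicitly.
    return f'<p class="comment">{escape(comment)}</p>'

malicious = '<script>alert("xss")</script>'
print(render_comment(malicious))
# The payload renders inert as &lt;script&gt;... instead of executable script.
```

The “boring” move is choosing a framework where you cannot forget to call `escape` on one code path, because that one forgotten path is exactly where cross-site scripting lives.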
Use parameterized queries rather than including user input in queries, to avoid SQL injection attacks. (SBDD page 8)
This exact recommendation is covered in Chapter 4, subsection “Documenting Why and When,” although we also recommend ORMs:
The organization should instead standardize on their database access patterns and make choices that make the secure way the default way. (~page 185 of Security Chaos Engineering: Sustaining Resilience in Software and Systems)
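For readers who want the recommendation in executable form, here’s a minimal sketch using Python’s standard-library `sqlite3` driver. The table and injection payload are illustrative; the contrast is the point: the interpolated string lets attacker input rewrite the query, while the parameterized `?` placeholder binds it as inert data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'user'), ('mallory', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe: string interpolation lets the payload become SQL syntax.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"
# (Executing unsafe_query would match every row.)

# Safe default: the driver binds the value; the payload stays a string.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — no user is literally named "alice' OR '1'='1"
```

An ORM gives you this binding behavior by default across the whole codebase, which is why standardizing on one makes the secure way the default way.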
It’s flattering to imagine these agencies took a peek at my book draft along the way to inform their guidance given the similarities in language. Please do not start a real conspiracy theory about this.
Strive to ensure that code submitted into products goes through peer review by other developers to ensure higher quality. (SBDD page 9)
The art of thoughtful code reviews is extensively covered in Chapter 4 in the subsection “Code Reviews and Mental Models.” One thing the report doesn’t mention but the book does is that we really need to implement code reviews for tests, too, and also expend special effort on code reviews for error-handling functionality.
Defense in depth
Design infrastructure so that the compromise of a single security control does not result in compromise of the entire system. For example, ensuring that user privileges are narrowly provisioned and access control lists are employed can reduce the impact of a compromised account. Also, software sandboxing techniques can quarantine a vulnerability to limit compromise of an entire application. (SBDD page 9)
Isolation is so essential for resilience – and life itself – that it is threaded throughout the whole book. But, for this “best practice,” read Chapter 3, subsections “Investing in Loose Coupling in Software Systems” and “Introducing Linearity into our Systems”; they cover isolation as well as design practices like the D.I.E. triad (short for Distributed, Immutable, and Ephemeral). I cover different forms of isolation worth considering, too – fault isolation, performance isolation, and cost isolation. You’ll even learn some things about biology along the way.
The goal espoused by SBDD here also reflects eliminating or reducing hazards by design, which we explore in depth in Chapter 7. I highly recommend beholding the “Ice Cream Cone Hierarchy of Security Solutions,” which is especially tasty wisdom.
A macro idea in the report’s Secure-by-Default section is treating security like a product, which is the precise focus of Chapter 7 in my book. The SBDD report also touches on the futility of training/policy/guidance relative to building in security qualities by design, which is also covered in Chapter 7. And, surprise surprise, the importance of UX to security solutions is also covered in Chapter 7.
As a little taste of what you’ll read, the book features a cheerful lemur teaching you about user journeys.
Safe to say that if you care about the secure-by-default angle, Chapter 7 is for you.
Consider the user experience consequences of security settings
Each new setting increases the cognitive burden on end users and should be assessed in conjunction with the business benefit it derives. Ideally, a setting should not exist; instead, the most secure setting should be integrated into the product by default. When configuration is necessary, the default option should be broadly secure against common threats. (SBDD page 11)
Cognitive overhead is a common concern covered in the book, but for this guidance I highly recommend this subsection in Chapter 7: “Understanding How Humans Make Trade-Offs Under Pressure.” We cover how human brains really work, especially under pressure; why we should be curious about users’ workarounds rather than incensed; and how to respect the cognitive load of our users.
Chapter 7 also discusses choice architecture and the power of defaults in the subsection “Substitute Less Hazardous Methods or Materials,” drawing from behavioral science.
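The power of defaults is easy to demonstrate in code. Here’s a minimal sketch (the config fields are hypothetical, of my own invention): the secure posture requires zero action from the user, and anything hazardous must be opted into explicitly.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    # Secure-by-default: the safest option requires no action from users.
    tls_required: bool = True
    min_tls_version: str = "1.2"
    admin_api_exposed: bool = False  # hazardous; must be opted into explicitly

# A user who configures nothing gets the secure posture for free.
cfg = ServerConfig()
print(cfg)

# Weakening security is a deliberate, visible act in the code/config diff.
risky = ServerConfig(admin_api_exposed=True)
```

This is choice architecture in miniature: the default carries the burden, so users inherit safety without spending any cognitive effort on it.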
Forward-looking security over backwards compatibility
Too often, backwards-compatible legacy features are included, and often enabled, in products despite causing risks to product security. Prioritize security over backwards compatibility, empowering security teams to remove insecure features even if it means causing breaking changes. (SBDD page 11)
The report suggests avoiding backwards compatibility for security’s sake, which betrays the sheltered purview of the authors, who need not appease the fickle market gods. Few organizations will forego a seven-figure contract with a customer who requires backwards compatibility. The report does mention offering carrots to customers to incentivize updates, although it doesn’t elaborate.
Organizations already charge customers more to maintain backwards compatibility, which is an incentive to upgrade. I suspect many would be surprised how much some customers are willing to pay to keep things as static as possible.
But I do think there are other opportunities here in terms of how to achieve this. Towards the end of Chapter 4, I describe the Strangler Fig Pattern for software transformation; it’s a beloved migration pattern in software engineering land but few infosec people know about it in my experience. It’s perfect for especially conservative organizations worried about breaking things when modernizing and migrating. The section in which it lives, “Flexibility and Willingness to Change,” is worth a read regardless to inoculate ourselves against the pestilent industry rhetoric that change is dangerous.
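For those unfamiliar with the Strangler Fig Pattern, here’s a minimal sketch of the core move, with hypothetical route and handler names of my own: a thin facade routes requests to the new implementation one capability at a time, so the legacy system is retired incrementally instead of in one risky big-bang cutover.

```python
# Hypothetical Strangler Fig sketch: a facade routes each capability to
# either the legacy or the modern implementation, migrated one at a time.

def legacy_checkout(order):
    return {"status": "ok", "engine": "legacy"}

def modern_checkout(order):
    return {"status": "ok", "engine": "modern"}

def legacy_invoicing(order):
    return {"status": "ok", "engine": "legacy"}

HANDLERS = {
    "checkout": {"legacy": legacy_checkout, "modern": modern_checkout},
    "invoicing": {"legacy": legacy_invoicing},  # not yet migrated
}

def handle(route, payload):
    """Facade: prefer the modern handler when one exists, else legacy."""
    impls = HANDLERS[route]
    handler = impls.get("modern", impls["legacy"])
    return handler(payload)

print(handle("checkout", {"items": 2}))   # served by the modern path
print(handle("invoicing", {"items": 2}))  # still served by legacy
```

Because every route keeps working throughout the migration, even the most conservative organization can modernize without breaking the customers who demand stability.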
For years, I’ve watched the faces of infosec mortals twist in fear and revulsion at my suggestion that we can and should infuse security by design. “But the Falmer Magic Wheel says blahdiblah!” I don’t care. A regressive approach will never help us outmaneuver attackers. This report, though, grants me a secret weapon: I can appeal to authority now, too, rather than rely on logical arguments – because logic is a poor persuasive tool, as I’ve learned through tribulations and tears and will forever lament. I’m unironically thrilled.
I sincerely think that the guidance within SBDD is a strong start as far as why, but weak (by design, I think) on how. I am obviously quite biased as it aligns quite well with my own thinking – thinking that blossomed into 300 pages in this book (I don’t have the luxury of being a bunch of intelligence agencies in a trench coat where people automatically trust what I say without elaborate justification).
I hope that for those in the community who crave meaningful security outcomes – to achieve victories like secure-by-design and secure-by-default, or even come close to the summit – the book charts your course to navigate the oft-tumultuous waters of software delivery and sustain resilience in your systems.