<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Posts on Sensemaking by Shortridge</title>
        <link>https://kellyshortridge.com/blog/posts/</link>
        <description>Recent content in Posts on Sensemaking by Shortridge</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <copyright>This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.</copyright>
        <lastBuildDate>Thu, 24 Apr 2025 10:10:07 -0400</lastBuildDate>
        <atom:link href="https://kellyshortridge.com/blog/posts/index.xml" rel="self" type="application/rss+xml" />
        
        <item>
            <title>Shortridge Makes Sense of Verizon&#39;s 2025 Data Breach Investigations Report (DBIR)</title>
            <link>https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2025/</link>
            <pubDate>Thu, 24 Apr 2025 10:10:07 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2025/</guid>
            <description>Every year Verizon publishes our collective best attempt at collating real-world evidence of attacks in the 2025 Data Breach Investigations Report (DBIR). It is either my privilege or curse as notorious cyber raconteur and rapscallion to yet again receive an advance copy1 of the report to digest and distill.
What follows is my commentary on the Verizon 2025 DBIR, attempting to make sense of this year’s data and share this sensemaking with the community.
As ever, I remain a skeptic and scientist, steeped in the systematic doubt of the Socratic method. The Verizon DBIR team readily acknowledges2 that this data cannot reveal the full dynamics pervading incidents and breaches.
The report polishes disparate tiles from data sources – some ceramic here, limestone there, glass, marble, a veritable jumble – and adds pieces to the industry mosaic, incomplete, but nevertheless an incremental step forward towards Truth.
Well, as close to Truth as the travesty that is VERIS allows.
Let’s dive in.
1. So what?
Why does any of this matter?3 When you strip away the sublime sensation of “OwO new shiny data!!” to which all humans succumb, how does this data add value?
Simply put: what is the impact of the incidents and breaches enumerated within? Impact’s absence throughout the report in some sense recalls Ozymandias, that seething expanse of desert, primordial dunes swept and sculpted by eternal winds – that profound existential emptiness found in the colorless abyss of slipshod content that perhaps, most of all, defines our current times.
A little bird told me that the DBIR team yearns to ingest cyber insurance claim data, and I, too, yearn, so, please, if you are in a position to provide it, do so.
For it is in the “what actual $s €s £s did insurance companies pay out for incidents with X attack vectors or Y assets?” that we will learn where optimal ROI resides – both in the micro and macro.
Imagine if we could see just how much these incidents and breaches cost insurance companies in claims – then compare that with the collective spend on vendors, security teams, and the other elements that make up titanic (albeit, this year, often flat growth) security budgets.
For 2025 in particular, it is worth asking: did the CrowdStrike outage’s impact on businesses (or society more generally) outweigh all the breaches in this report combined? Very rough calculations – truly extremely untrustworthy math4 – from the report imply ~$2.66bn in damages from ransomware attacks. But the CrowdStrike outage cost Fortune 500 companies at least $5.4 billion, or $44 million per Fortune 500 company.
This ignores the indirect costs imposed upon the populace across sectors – like those saddled on travelers due to cancelled flights. Nearly 17,000 flights were cancelled during the 72 hours after the outage. How do you quantify the cost of a family member missing a funeral due to cancelled flights?5 How do you quantify the cost of a loved one missing a critical medical procedure?
The core question is: should we be allocating more or less effort to cybersecurity – and where should we be allocating those efforts more or less? The report cannot really answer this without tying these statistics to tangible costs and benefits.
It calls to mind the classic quip, “Half my advertising spend is wasted; the trouble is, I don’t know which half.” As anyone in advertising will divulge, presuming we only waste “half” is a generous estimate.
To the well-meaning cybersecurity professionals among you, how do you intend to use this information? How will it inform your choices? How will you investigate how these trends align with what matters to your business? Consider these questions as you continue reading this commentary.
2. Espionage: fast fashion or couture?
The 2025 DBIR features new contributors that added much-needed data points related to “espionage” events, i.e. those conducted by nation state-affiliated actors. This figure shot up to 17% (as shown in Figure 21 from the DBIR), much to the delight of FUD-driven GTM teams who will soon descend on Moscone, I’m sure.
However, many of those espionage events also feature financial motives. Why might that be? Shouldn’t nation states be sufficiently flush with cash and coin that they need not stoop to monetary theft?
Well, depending on the nation state, sanctions suck and make it hard to transact at financial institutions. Or, if you’re a nation state who wants to save face on the geopolitical stage, it’s far better to spend someone else’s money – especially if in the form of cryptocurrency – to enhance plausible deniability.
Many nation states prefer to slink and prowl through the ferns and fronds of software systems, abscond in shadows, slip past moonlight and crisp branches to avert the treacherous crunch that would startle their prey. Unlike successful ransomware-aaS startups6, nation states don’t need to run marketing campaigns to differentiate and drive demand for their services.
In general, nation state actors would like to preserve their access into target systems for as long as they can (a point to which we’ll return shortly), or else achieve other nearer-term goals in service of the longer-term mission. Being mistaken for a “nuisance” or petty criminal may hurt their pride, but, unlike too many traditionalist blue teams, they can put that aside for their mission.
These missions pile up expenses. Every attack group still must consider costs. There is no tree on this planet that grows money (though we can burn them en masse to mine cryptocurrency, true).
APTs need hardware to mount attacks, new IPs to evade detection, compute to run workloads. Those things cost money. Even money laundering costs money. For some of these things, like compute, they can steal from organizations so it comes from someone else’s wallet, not their own – but this thievery alone cannot sustain their ongoing operation.
Attack operations, like any business operation, require an intricate support system to keep running. And, like most business operations, to get more funding for your team, your mission, your whatever, you must appeal to the authorities above you.
From that perspective, is it not obvious why nation state actors would commit financial crimes? Maneuvering through someone else’s machines is easier than maneuvering through the humans who control your budget7.
so, tl;dr Your threat model is still predominantly money crimes.
What does this – still admittedly minor – presence of espionage mean in practice? Why does “nation state pursuing financial crimes” matter as a distinction from “cyber criminals pursuing financial crimes”? Do their methods differ so much that we need different defenses in place?
Not really.
The distinction grabs attention but is not where our focus should lie. The more relevant question is: what is the essential element of my business that makes me a target? As in, if I make it harder, are attackers going to bother someone else, or do they want me specifically for some reason?
Simply put: we should stop worrying so much about who the attackers are, and more about who we are; what makes us special, rather than what makes specific attack operators special; how our differentiation, competitive dynamics, and market strategy reveals more “threat intelligence” than whether the attacker prefers bamboo or wood pulp TTPs.
If you die in a sophisticated attack vs. a crude one, you’re still dead.
Attackers are not stupid, but they often spend their effort on relatively stupid (or “simple”) techniques. I, too, will often take the trash panda, “lazy” way8 rather than taking the higher-minded, effortful way unless necessary, and I suspect you, dear reader, can also relate.
The Gayfemboy botnet operators epitomize this impetus to evolve only as necessary. To wit, XLab observed:
However, the developers behind [the gayfemboy botnet] were clearly unwilling to remain mediocre. They launched an aggressive iterative development journey, starting with modifying registration packets, experimenting with UPX polymorphic packing, actively integrating N-day vulnerabilities, and even discovering 0-day exploits to continually expand Gayfemboy’s infection scale.
I certainly count the Gayfemboy botnet as “sophisticated” on these grounds9. I also think the Salt Typhoon campaign showed sophistication. I don’t think what the DBIR – and, certainly, the broader ecosystem – characterizes as “sophisticated” is necessarily so.
It raises a valuable question: what does sophisticated mean in the context of attacks and cybercrimes? Sophistication, definitionally10, involves flavors of:
- Understanding how things really work
- High quality or reflecting a high level of skill
- Refined taste
- Rizz11
That first characteristic – understanding how the world really works – implies that sophisticated attackers will be those who adapt their techniques based on what works; sophisticated attackers won’t waste their finite time and effort on techniques that don’t.
From the attacker’s perspective, if they perform one big, flashy display their target quickly detects, now they’re dead. If they take consistent action to maintain access in their target systems for years, but those actions are “simple” or “stupid” – well, they’re still alive.
If we find someone persisted in our network for years using bash, after exploiting one lucky, exposed N-day vulnerability – maybe that is sophisticated.
Sophisticated is more about the attacker’s “broader goal” – their roadmap, vision, and ability to execute on it, if you will – rather than about the means they employ. Can an unsophisticated actor turn stolen credentials into a one-time smash and grab? Certainly. But can that actor turn those stolen credentials into getting in, and staying in, forever? Probably not. That is the hallmark of a sophisticated actor.
Perhaps an appropriate definition of “sophisticated attack” refers to how hard we must work to detect it. If we list admins and see new ones, that’s an APT TTP corresponding to MITRE ATT&amp;CK blah blah, but that’s not very hidden, and thus not very sophisticated.
“Sophisticated attack” vibes are like, someone smart cared about this attack operation, and that’s spooky because it means they had a purpose. What was that purpose? Hence, we return to the key question of: what is the essential element of my business that makes me a target?12
If we apply this line of reasoning to defense, does it not imply that “sophisticated” security strategies are those that prioritize consistency over flashiness? That take incremental action to adapt and persist in their mission of preserving the values their business treasures most (like revenue, profitability, or DEI)?
3. APTs go BWAAhaha &gt;:3
One insight that might surprise many readers in the 2025 DBIR is that Espionage-flavored actors account for approximately 62% of Basic Web Application Attacks (BWAA), up from 10 - 20% historically (largely due to the 2025 DBIR’s data contributors proffering more espionage in their data set).
As nation states largely conduct espionage, this means oft-aggrandized APTs are booping and bopping your web applications to see what works before they escalate their efforts. Their workflow, not unlike that of most professional cyberattackers, goes something like:
1. doodle around in shodan
2. look at some banners
3. (optional step: ingest more caffeine)
4. ponder, “what is that, anyway?”
5. google it
6. download source
7. fuck around
8. get a shell
9. throw in the wild
10. some extra steps the industry mostly ignores, anyway
11. profit / geopolitical advantage
As I stated in my last section, being sophisticated is expensive and time consuming (as any 12-step skincare girlie knows all too well). It is still really smart for APTs to try the easier path of BWAA – but it also, hopefully, demystifies them (a good thing; the industry should never have mythologized them in the first place).
“But, but,” you’re thinking, “BWAA is just the vector for them to gain access to the underlying server and pivot!”
I asked this question to the DBIR team, and they clarified that BWAA refers to messing around at the app layer itself (like XSS), while “System Intrusion” reflects app exploitation to gain access to the underlying server (like Command Execution). I find this distinction fuzzy and frustrating, and another reason why I dislike VERIS. SQLi technically messes with the interaction between the app and the database, but counts as BWAA.
While I’m ranting here, I’ll also lament that they don’t track “abuse” as much as perhaps they should. What I hear from platform eng and security eng leaders alike is that the web app and API abuse events plague them most in terms of impact.
Anyway, my venom for VERIS aside, this data point likely means nation states view these layer 7 gallivants as lucrative enough to try. An individual BWAA attack may not spill out a platoon of candy from a single app piñata, but if you can mount such attacks at scale, against many apps – a capability one would hope a nation state subpod could develop – then they manifest their own whole ass Candy Land.
Through the lens of crimes of geopolitical passion (i.e. espionage) vs. money crimes, this trend perhaps also reflects how the world now conducts so much business via web apps. It’s Snowflake, Salesforce, Hubspot, Zoom, and Google Docs that vampirize our working hours, not desktop software13. Web apps now hold the trade secrets, the sales projections, the strategic plans, product roadmaps, financials forthcoming in an SEC filing.
Do nation states learn about and understand market trends faster than blue teams? Why do attack groups seem to experiment with and adopt emerging technology at the same time blue teams resist those innovations?
Bonus points if you ask these questions at an RSAC cocktail party as a conversation opener.
4. How do the money crimes generate money?
If most attacks – conducted by criminal organizations and nation states alike – involve financial motives, how do attackers convert those incidents and breaches into money?
The DBIR does not have many answers for us (nor is it their fault they do not), but it is, nevertheless, a worthy question to explore with the data they do present.
Ransomware is our obvious place to begin. tl;dr payouts are shrinking, and fewer organizations are paying any ransom at all (especially larger enterprises).
Are we improving at system restoration and disaster recovery? Or, alternatively, are the consequences of leaking customer data not actually that bad in practice?
Ransomware’s monetization path feels obvious, and some criminal entrepreneurs have executed that GTM strategy very well14. So, no surprise that there are no surprises in the report on the ransomware front.
The real mystery to me is why we keep seeing DDoS attacks. How are attackers even monetizing them enough to justify the effort? Why do they still try given they don’t succeed very often? These questions vex me15.
A lazy hypothesis might be that attackers are stupid. Some are stupid and delulu, like any arbitrary group of humans, but I don’t think that explains this dynamic.
Ignorance is the more plausible explanation. As in, there are absolutely nation state actors who live in their nation state bubble and don’t participate in “hallway tracks,” or casually chat with private sector tech pros, or gain any experience outside of classified government settings. The same goes for organized criminals in locales lacking high-scale digital businesses.
There are a lot of people out there who are in relative positions of power now who may earnestly think, “oh, if I get an X gbps botnet then I can hold some digital terrain at risk!” They are probably wrong, because they do not understand CDN capacity and capability in 2025. They may not even understand CDNs exist, or that most enterprises use them16.
With that said, it’s true, based on the figures in this year’s DBIR, that most enterprises could not withstand or mitigate availability attacks on their own.
These numbers boggle the mind; scrombulate the brain, if you will. DoS attacks of this size mean that you, a typical enterprise (let alone an SMB), really can’t run a site or service on more modest infrastructure without expecting a DoS to take it out.
These figures mean you really must use an infrastructure provider that has enough scale and pay for their DDoS protection because the packet and bandwidth rates of the largest attacks are astronomical.
I must disclaim here that my Business Cat day job is serving as VP of Security Products at Fastly, who indeed is one of those infrastructure providers that provides CDN &#43; the requisite security needed to ensure shit doesn’t go wonky when you deliver software on the public internet.
Even so, when I asked the DBIR team why they haven’t paid as much attention to network-based availability attacks, they – like many in the industry – view this as a “solved problem,” in the sense that everyone uses a CDN or CSP now to handle DoS mitigation.
There’s truth to their view – and, based on this data, the strategy seems to work pretty well. I found it fascinating to learn that only 2% of availability incidents (mostly DoS) lead to service interruption. Even degradation shows some level of resilience against attacks, too.
Of course, it’s worth asking: how many DoS attempts were there that could not foment sufficient damage to receive the designation of incident?
The prevalence of degradation also reflects the nature of modern systems. We aren’t building monoliths anymore, so attackers may only succeed in taking down a part of a system – a blip rather than a blammo, so subtle that many users won’t even notice.
The flip side of this evolution away from monoliths is that it might be easier for attackers to disrupt or degrade a critical component that receives less traffic, like your cart checkout page, which will anger your customers and executives alike.
How much does that kind of targeting (and hypothetical impact) happen in practice? I hope next year’s DBIR might answer that question.
Another hypothesis is that not all criminals are good entrepreneurs. Some are quite poor at the business side of crimes, which, naturally, softens their ability to penetrate the multi-billion dollar cybercrime TAM.
Cybercrime vendors, much like cybersecurity vendors, depend on brand awareness to succeed. Sometimes they are quite bad at conceiving and executing marketing campaigns – which, from what I’ve observed, is sometimes a sign they aren’t super great at identifying product/market fit, either (in cybercrime, as in b2b tech, luck plays an enormous role, and you can stumble into success without ever truly understanding why).
My favorite example of criminals fumbling the bag in clumsy attempts at entrepreneurship appears in the U.S. Department of Justice’s indictment against Anonymous Sudan from last year (June 2024).
Defendant AHMED OMER and UICC 1 would post messages on Telegram in channels they controlled claiming credit for these attacks, often providing proof of their efficacy in the form of Check Host reports showing that the victim computers were offline.
Their claims often did not match the reality. For instance, when they tried to DDoS Netflix and only disrupted a few regions of service, they claimed Netflix was “strongly down.”
Or, in another instance:
Overt Act No. 101: On May 22, 2023, defendant AHMED OMER sent private messages on Telegram stating, “check https://oref.org.il, if you find backend im ready to pay u good money.” The other party then sent AHMED OMER an IP address, to which AHMED OMER replied, “I fuck this ip but www.oref won’t down.”
“I fuck this IP” is lowkey iconic as a pro-DDoS product slogan, but this message, along with messages that provide an IP and ask “how to down?”, perhaps does not instill a sense of confidence in their target customer base.
While originally motivated by ideology and national pride, Anonymous Sudan tried to monetize their operation over time, including creating an automated bot to collect payments from victims seeking a ceasefire:
We can negotiate a price with you to halt all DDoS attacks immediately, and help you apply DDoS mitigation. To negotiate, contact us at our bot : @AnonymousSudan_Bot.
It is unclear to what extent they ever monetized these attacks. They claim to have in some cases, like:
On March 5, 2024, defendant AHMED OMER or another co-conspirator posted a message on Telegram stating, “After more than 48 hours of holding the Zain Bahrain network offline, we have finally reached a deal with them. Therefore, we’ll stop all attacks on their networks immediately. This entire experience proves the revolutionary power of @InfraShutdown team in holding huge networks for days and getting multi-billion companies to their knees. You can request an attack of any scale and unlock this never seen before power by contacting @InfraShutdown_bot.”
But, well, do you believe their claims based on these messages?
Perhaps that’s the most common ground blue teams can find with cyber criminals: whether on Telegram, or in SoMa during RSAC, both of their bubbles drown in grandiose claims.
5. Attackers are still not really using GenAI
Last year, I noted that attackers aren’t using GenAI. They still aren’t, really. And when they do, it doesn’t seem to make much of a difference.
Specifically, GenAI hasn’t really made phishing “better” in some sense, because it’s still down in relative prevalence. How much of an asymmetric advantage does it really offer attackers? I suspect GenAI provides more ROI for aspiring CISO thought leaders on LinkedIn than for most attackers, although the former means the latter can create their own fake aspiring thought leader CISOs for more credible social engineering.
Alas, if the DBIR hadn’t investigated GenAI involvement in incidents and breaches, I’m sure the AI evangelists would’ve slandered them with, “you’re missing the LLM impact! It must be huge!” I take its inclusion as the report-writing equivalent of bequeathing someone a token object upon your death in your will so they can’t claim they were inadvertently left out.
This isn’t to say GenAI isn’t used anywhere; it clearly is, but it seems not to have improved efficacy over the previous techniques. Writing spam email with fancier LLMs instead of more basic ML doesn’t necessarily improve deliverability or fidelity.
As a final note on GenAI, since I’m sure some VC or startup bros are already thinking “but wait, what about –” no, the data set doesn’t include any real-world attacks on LLMs themselves. Maybe this will be present in next year’s data set now that people are wiring LLMs to their proprietary data sets and letting them roam as free range models with minimal supervision.
6. If you can’t make your own 0day, store-bought creds are fine
The industry obsesses over 0day17, but the 2025 DBIR shows only 8% of incidents involve vulnerabilities at all, and we must presume that even fewer of those involve zero day vulnerabilities (since, inherently, they decay into N-days once a patch releases).
Perhaps more surprising to readers is the report’s insight that not even APTs obsess exclusively over vulnerabilities (see also #3 on my list). Combing through code to uncover exploitable conditions is hard, and writing code to exploit those conditions and assumptions can be even harder. And if cybersecurity vendors will spend marketing dollars on publishing new exploitable vulns to the world, why not save a few bucks and co-opt their work?
My take is we should pay less attention to year-over-year changes and examine the longer-term trends more. The annual vagaries depend on what 0-days attackers or researchers discover, and how easy it is to write exploits for them vs. write patches for them. More precisely, all that really matters when it comes to 0day – or vulnerabilities in general – is how easy it is to turn them into a viable attack, where viable can refer to stealth, scale, consistency, and the other attributes I discussed in #2 on this list.
Consider the big topic from Verizon’s 2024 DBIR: MOVEit. MOVEit was a disaster because of their deployment model, tech-unsavvy customer base, and product proximity to ransomable data. In this year’s DBIR, system intrusion campaigns seem to be frolicking with credential abuse, phishing, BWAA – so any vulns that pop up are a bonus.18
This is an important finding to internalize from this year’s DBIR: use of stolen credentials continues to be a common attack pattern, because, to evoke Paul Hollywood from the Great British Bake-Off, it’s “simple, but effective.” It’s cheap, easy, and scales to large operations (the report notes that 80% of stolen accounts had prior exposure). Attackers might even learn to make friends in their Telegram chats along the way!
I find defenders often sleep on the potency of stolen creds, despite our general nihilism around nothing being secret anymore. The stolen cred problem is seen as “boring,” not one of those meaty technical problems into which you can sink your fangs and extract resume juice (jk, not jk).
To wit, even those infusing Secure by Design principles into how they secure their customers – a thoughtful endeavor, to be sure – may still overlook these “basic bitch” attacks. Snowflake, for example, invested considerable resources into isolating each tenant within their own separate VPC while supporting every cloud service provider19, only for those customers to get rocked by stolen credentials last year.
We should respect our opponents, but not aggrandize them. When threat modeling, assume they’ll follow Ina Garten’s advice: If you can’t make your own 0day, store-bought creds are fine.
7. Security was the real supply chain threat all along
Open source software (OSS) issues don’t even seem to be big enough to warrant a mention in the 2025 DBIR report. The only mention of OSS sits quietly in the news section towards the end, probably because talking heads and news outlets heavily report vulns in OSS due to the frankly bizarre level of distaste and disdain the traditional cybersecurity sphere feels towards OSS.
However, curiously, many of the vulns the DBIR lists as leading to incidents reside in what the report euphemistically refers to as “edge devices.” These are more accurately described as security appliances, as shown in the list I helpfully defaced below.
Perhaps the cybersecurity community should dog whistle less about the “insecure supply chain,” typically meaning open-source software, when OSS is akin to humans of code giving things away for free with no warranty. You get what you pay for.
It’s as if you raided someone’s house, tore out their wiring, then complained that the wiring was substandard copper… meanwhile your own house is stuffed with asbestos behind mercury walls.
8. Things Rot Apart
It’s 2025, and somehow VPNs and mail servers – the remnants of ye old ways where we had server janitors – persist in their ability to fuck your shit up. A simple security heuristic might be, “have you modernized or not yet?” where modernization means innovation hasn’t died and stunk up your stack like a decomposing skunk.
From that perspective, it is no surprise mail servers remain so prevalent in this data. Exchange is pooptacular (the technical term), but it’s also the best mail server out there, implying the remaining menagerie of mail servers are something like turbopooptacular20.
It’s been years since Google apps murdered any remaining innovation in running your own mail server. Typical exchanges (lolol) go something like:
“We have 20 years of investment sunk in this one mail server stack. Can we integrate some better security into it? Like, idk, even rsa tokens?”
“Absolutely not, because it needs to auth to legacy Outlook, which doesn’t support that.”
“…oh, okay.”
Being a computer janitor for big, chunky, clunky servers you run on-prem is a dying industry. No one is entering it. Students aren’t excited to maintain servers older than they are. There is no innovation left to squeeze from this spoiled, shriveled lime of a niche. For organizations still maintaining such servers, this means they cannot hire people to do it, and so it all rots.
What does it mean for tech to rot, to be rotten? Tech rots either due to neglect, or the world progressing faster than it does (or faster than its operators’ knowledge). Sometimes we see both transpire in sinister symbiosis.
Tech rots when it’s all operators know, and the newer world scares or intimidates them. It rots in restructurings, reductions in force, and other such euphemisms that result in new operators who don’t want to touch it for fear of breaking it – or perhaps such changes result in no owners at all.
When we analyze through this meta lens, the data shows that the new ways – those often met with so much resistance by the recalcitrant cyber reductionists – strongly correlate with better security outcomes. None of us should suffer under the yoke of rotten tech. We don’t have to settle. And we need not sleep restlessly at night, fearing insidious incidents and bowel-shattering breaches if we modernize our technology stacks.
Conversely, rotten tech seems to serve as a leading indicator of incidents and breaches. Why do corporate IT systems rot so badly? Can we prevent it? What other harm does the rot cause? This brings us to…
9. Scooby Doo’s Spooky Kooky Corporate IT Caper
It’s hard not to ponder the macro-level incident trends and see the spectre of corporate IT. Credential management, logins, vulnerable Blinky boxes, file transfer, email compromise, VPNs… for many organizations, security teams and their smorgasbord of cybersecurity vendor products fit under corporate IT.
These systems are poorly maintained and monitored, often years out of date. With such poor hygiene, should it surprise us that they’re more likely to succumb to attacks? Thankfully, many of these IT systems live on their own infrastructure and thus can only inflict limited damage.
Security systems, however, insist on embedding themselves in every other system: if it involves a computer, someone is going to want to stick EDR on it and send logs to a SIEM. These tools want the most privileges because that’s the easier design choice for them. Should an upgrade ever need to do more than it does today, they don’t require additional permissions because they already have them all.
Some of them are even designed to run arbitrary commands remotely, which is a sign that a piece of software might be hazardous to operate. This is such an obvious point to anyone who understands how computer systems work and yet eludes way too many security teams.
Being able to run arbitrary code is an exceptional thing for a system to do. Normal software systems don’t do that. The norm is for software systems to perform their function, and if that function ever changes, someone must upgrade the system with new software.
It is perhaps the antithesis of Secure by Design and Principle of Least Privilege, and it is fucking everywhere in security land – these “edge devices” included.
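To make that design contrast concrete, here is a toy Python sketch – not any real product’s agent, and the action names are invented – of the difference between software that will run whatever its management server sends and software whose function stays fixed until someone ships new code:
import subprocess

class ArbitraryCommandAgent:
    # Anti-pattern: whatever the management server sends, runs. A compromise of
    # that server (or its update channel) is now code execution on every host.
    def handle(self, command):
        return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

class FixedFunctionAgent:
    # Closer to least privilege: only named, pre-shipped actions exist. Doing
    # anything new requires upgrading the software, as with normal systems.
    ALLOWED_ACTIONS = ("list_processes",)

    def list_processes(self):
        return subprocess.run(["ps", "-e"], capture_output=True, text=True).stdout

    def handle(self, action):
        if action not in self.ALLOWED_ACTIONS:
            raise PermissionError(f"{action} is not part of this agent's function")
        return getattr(self, action)()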
This is one reason, but not the only one, why the gulf between corporate IT and software engineering is wide, widens, will keep widening. Such practices indicate a cultural canyon between the two spheres.
Modern software engineering practices explicitly seek to facilitate change, agility, adaptation in ways that corporate IT practices simply do not and cannot (and it’s why savvier security leaders transform their programs towards the former). In modern software engineering, updates are routine affairs involving many more components than corporate IT handles.
If we look at the custom software that engineering teams build – potentially even out of vulnerable open source components, gasp! – the DBIR data (and data elsewhere) shows that this custom software seldom leads to attackers breaching their organization.
When will we see the RSAC banners about traditionalist security teams not caring enough about software security replace those that fear monger about developers? Tribal narratives are powerful weapons to wield in propaganda. We should maintain intellectual integrity and call out poor software practices when we see them – ideally with constructive recommendations on how to improve – not simp for security vendors when they exhibit egregiously poor software practices.
10. At least some things are improving somewhere
A bit of good news that deserves attention: dwell time is down by an entire week. This is a substantial decrease over only two years and suggests organizations have better mechanisms to tell when they’ve been breached.
It would be interesting to know what factors influence this improvement most. Is it primarily engendered by cybersecurity professionals / vendors leveling up their game, or does it arise from technology / architectural shifts (like those common in the fabled “digital transformation”)?
The 2025 DBIR observes a continued shift away from credit card data stolen in breaches, a trend we should also celebrate. I suspect it evinces the success of chip &amp; PIN, as well as NFC (remember when the cyber industrial complex spewed a barrage of FUD about NFC payments and how it reflected the fraudpocalypse? Pepperidge Farm remembers).
The DBIR’s insight that Magecart represents 1% of System Intrusion breaches but makes up 80% of payment card breaches suggests the latter has plummeted in volume. Now we just need digital payment wallets to really take off on desktop to render this category irrelevant.
11. people continue to click things on the thing clicking machine
While the Verizon DBIR is far from the worst offender on this front, security reports often give off “you need to touch grass” energy. This is especially true when they touch topics related to humans clicking on things that facilitate incidents or breaches.
Users – the humans who interact with the technology we’ve made ubiquitous in their lives – expect to be able to click things and for the world not to implode. Clicking things is what many jobs are now. It should be safe to click links in the same way that it is safe to answer the phone.
The fact that it is not is an indictment of the security industry, not human behavior.
This allows me to send you off, dear reader, with one of my old memes, so we can depart back into the cybersecurity realm with a sense of enlightenment and, hopefully, humility.
thx AP &lt;3
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Bookshop, Amazon, and other major retailers online.
It’s a little cringe that the M-Trends report launched the same day as the DBIR’s widely publicized date. They should feel more confident in their own report and ensure it has a chance to shine; the competitive vibe isn’t the best look, when such reports should be in the spirit of leveling up the community. ↩︎
Indeed, throughout the report, one can feel the “data people with integrity feel anxiety about the data” vibes. This is a far better thing than the more pernicious “lying with data.” ↩︎
Imagine a sentient report, if you will, singing “What was I made for?” by Billie Eilish. Just replace “Just something you paid for” with “Just something you transmuted yourself into an MQL for.” ↩︎
I think it’s difficult to get an accurate ransomware total. We can see the total number of ransomware incidents they tracked: ~9703. However, I’m not sure what proportion of incidents they capture in their dataset vs. miss. Of those 9703, 64% didn’t pay, leaving ~3493 that presumably did pay. We only have the median, so it’s difficult to get to the aggregate amount paid in their dataset. If we work backwards from the median -&gt; total calculation for BEC, which gives a 6.6 multiplier from median to mean, that suggests roughly $2.66 billion in their dataset. A very very rough calculation. ↩︎
Is it more or less than the satisfaction a cyber reductionist CISO feels when wearing sweet APT-themed swag? ↩︎
Lockbit’s cumulative revenue hit at least $144 million in ransom payments between 2020 and the end of 2023. If I told you about a self-made startup founder who developed innovative SaaS technology and grew it to ~$50mm&#43; ARR in 3 years with zero VC funding, you’d be like, “who is this genius visionary??” It is a mistake to view attackers as genius, superhuman villains wielding silicon magic when they are, in practice, closer to shrewd tech entrepreneurs who identify product/market fit, execute on it by building software with iterative feature development, and successfully scale their operations to address market demand (and yes, sometimes that demand is espionage-flavored, as in “As an aspiring empire, we want to collect data from our primary adversary’s telecom providers so we can create richer profiles of key decision-makers in their government or, possibly, blackmail them.”). Game recognizes game14. ↩︎
I wonder if there’s a doc akin to the CIA’s Simple Sabotage Field Manual – their guide for disrupting progress in workplaces – but to disrupt your adversary’s attack groups by installing egotistical, smoke-blowing “leaders” who took a weekend MBA course and now think they are Frank Quattrone. ↩︎
see also how I solve every puzzle in The Legend of Zelda: Tears of the Kingdom with the time reverse thingy ↩︎
if only more cyber reductionist security pros and linkedin thought leaders were “unwilling to remain mediocre” ↩︎
Etymologically, however, sophisticated / sophistication served as a pejorative, related to the root word “sophists,” i.e. those who tell mellifluous, remunerative lies to eager ears. But the original original meaning indeed related to someone who is a master at their craft. ↩︎
To quote Rick Ross, “This shit is highly sophisticated, I just make it look easy” ↩︎
And this requires actually understanding what your business does, where it fits in your market, how it generates revenue, where it burns costs, and similar knowledge that traditionalist cybersecurity pros too often disdain. ↩︎
you can pry the desktop MS Office suite from my cold, dead hands tho. ↩︎
Dmitry Khoroshev walked so Luigi Mangione could run. ↩︎ ↩︎
I bet Dmitry Khoroshev would know the answer(s). If you’re reading this, I’d love to “interview” you, if you know what I mean… ↩︎
I recall attending a security conference in a country stereotypically associated with prolific attack activity, and most attendees did not know what 2-factor authentication was. ↩︎
0day can be sexy, almost artistic in its manipulation of underlying machinations, but few that become public excite me. Annual reminder that I’ll go on a date with anyone who gives me a pre-auth RCE 0day in SSH. ↩︎
Magecart might be the exception, but I suspect the patching rate is quite low on the long tail of online retailers. ↩︎
the product people reading just gasped, I assure you. ↩︎
Gartner will likely label it “next-gen pooptacular” instead. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Every year Verizon publishes our collective best attempt at collating real-world evidence of attacks in the <a href="https://www.verizon.com/business/resources/reports/dbir/?shortridge">2025 Data Breach Investigations Report (DBIR)</a>. It is either my privilege or curse as notorious cyber raconteur and rapscallion to yet again receive an advance copy<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> of the report to digest and distill.</p>
<p>What follows is my commentary on the Verizon 2025 DBIR, attempting to make sense of this year’s data and share this sensemaking with the community.</p>
<p>As ever, I remain a skeptic and scientist, steeped in the systematic doubt of the Socratic method. The Verizon DBIR team readily acknowledges<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> that this data cannot reveal the full dynamics pervading incidents and breaches.</p>
<p>The report polishes disparate tiles from data sources – some ceramic here, limestone there, glass, marble, a veritable jumble – and adds pieces to the industry mosaic, incomplete, but nevertheless an incremental step forward towards Truth.</p>
<p>Well, as close to Truth as the travesty that is <a href="https://github.com/vz-risk/veris">VERIS</a> allows.</p>
<p>Let’s dive in.</p>
<h2 id="1-so-what">1. So what?</h2>
<p>Why does any of this matter?<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> When you strip away the sublime sensation of “OwO new shiny data!!” to which all humans succumb, how does this data add value?</p>
<p>Simply put: what is the impact of the incidents and breaches enumerated within? Impact’s absence throughout the report in some sense recalls Ozymandias, that seething expanse of desert, primordial dunes swept and sculpted by eternal winds – that profound existential emptiness found in the colorless abyss of slipshod content that perhaps, most of all, defines our current times.</p>
<p>A little bird told me that the DBIR team yearns to ingest cyber insurance claim data, and I, too, yearn, so, please, if you are in a position to provide it, do so.</p>
<p>For it is in the “what actual $s €s £s did insurance companies pay out for incidents with X attack vectors or Y assets?” that we will learn where optimal ROI resides – both in the micro and macro.</p>
<p>Imagine if we could see just how much these incidents and breaches cost insurance companies in claims – then compare that with the collective spend on vendors, security teams, and the other elements that make up titanic (albeit, this year, often flat growth) security budgets.</p>
<p>For 2025 in particular, it is worth asking: did the CrowdStrike outage’s impact on businesses (or society more generally) outweigh all the breaches in this report combined? Very rough calculations – truly extremely untrustworthy math<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> – from the report imply ~$2.66bn in damages from ransomware attacks. But the CrowdStrike outage cost Fortune 500 companies <a href="https://news.gatech.edu/features/2024/10/generating-buzz-what-crowdstrike-outage-tells-us-about-global-cybersecurity">at least $5.4 billion</a>, or $44 million per Fortune 500 company.</p>
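<p>For the curious, footnote 4’s back-of-the-envelope can be written out as a tiny Python sketch. This is only a reproduction of my rough, untrustworthy math – the incident count, the share who paid, the median payout, and the 6.6 median-to-mean multiplier (borrowed from the BEC calculation) are all approximations from the report:</p>
<pre><code># Very rough estimate of total ransomware payouts in the 2025 DBIR dataset.
ransomware_incidents = 9_703      # ~incidents tracked in the dataset
share_that_paid      = 1 - 0.64   # 64% of victims did not pay
median_payment_usd   = 115_000    # 2024 median ransom payment (Figure 46)
median_to_mean       = 6.6        # multiplier borrowed from the BEC median-to-mean calculation

paying_victims  = ransomware_incidents * share_that_paid   # ~3,493 victims
estimated_mean  = median_payment_usd * median_to_mean      # ~$759,000 per paying victim
estimated_total = paying_victims * estimated_mean          # in the ballpark of the ~$2.66bn cited above

print(f"~${estimated_total / 1e9:.2f}bn estimated ransomware payouts")
</code></pre>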
<p>This ignores the indirect costs imposed upon the populace across sectors – like those saddled on travelers due to cancelled flights. <a href="https://internationalbanker.com/technology/key-implications-of-the-crowdstrike-outage/">Nearly 17,000 flights</a> were cancelled during the 72 hours after the outage. How do you quantify the cost of a family member missing a funeral due to cancelled flights?<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> How do you quantify the cost of a loved one missing a critical medical procedure?</p>
<p>The core question is: should we be allocating more or less effort to cybersecurity – and where should we be allocating those efforts more or less? The report cannot really answer this without tying these statistics to tangible costs and benefits.</p>
<p>It calls to mind the classic quip, “Half my advertising spend is wasted; the trouble is, I don&rsquo;t know which half.” As anyone in advertising will divulge, presuming we only waste “half” is a generous estimate.</p>
<p>To the well-meaning cybersecurity professionals among you, how do you intend to use this information? How will it inform your choices? How will you investigate how these trends align with what matters to your business? Consider these questions as you continue reading this commentary.</p>
<h2 id="2-espionage-fast-fashion-or-couture">2. Espionage: fast fashion or couture?</h2>
<p>The 2025 DBIR features new contributors that added much-needed data points related to “espionage” events, i.e. those conducted by nation state-affiliated actors. This figure shot up to 17% (as shown in Figure 21 from the DBIR), much to the delight of FUD-driven GTM teams who will soon descend on Moscone, I’m sure.</p>
<p><img src="/blog/img/2025-dbir/figure-21.png" alt="Figure 21 from the 2025 DBIR. It shows a horizontal bar chart describing threat actor motives in breaches. The first data point shows threat actors had financial motives in 89% breaches, espionage motives in 17% of breaches, and other motives in 0.6% of breaches."></p>
<p>However, many of those espionage events also feature financial motives. Why might that be? Shouldn&rsquo;t nation states be sufficiently flush with cash and coin that they need not stoop to monetary theft?</p>
<p>Well, depending on the nation state, sanctions suck and make it hard to transact at financial institutions. Or, if you&rsquo;re a nation state who wants to save face on the geopolitical stage, it&rsquo;s far better to spend someone else&rsquo;s money – especially if in the form of cryptocurrency – to enhance plausible deniability.</p>
<p>Many nation states prefer to slink and prowl through the ferns and fronds of software systems, abscond in shadows, slip past moonlight and crisp branches to avert the treacherous crunch that would startle their prey. Unlike successful ransomware-aaS startups<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>, nation states don&rsquo;t need to run marketing campaigns to differentiate and drive demand for their services.</p>
<p>In general, nation state actors would like to preserve their access into target systems for as long as they can (a point to which we&rsquo;ll return shortly), or else achieve other nearer-term goals in service of the longer-term mission. Being mistaken for a &ldquo;nuisance&rdquo; or petty criminal may hurt their pride, but, unlike too many traditionalist blue teams, they can put that aside for their mission.</p>
<p>These missions pile up expenses. Every attack group still must consider costs. There is no tree on this planet that grows money (though we can burn them en masse to mine cryptocurrency, true).</p>
<p>APTs need hardware to mount attacks, new IPs to evade detection, compute to run workloads. Those things cost money. Even money laundering costs money. For some of these things, like compute, they can steal from organizations so it comes from someone else&rsquo;s wallet, not their own – but this thievery alone cannot sustain their ongoing operation.</p>
<p>Attack operations, like any business operation, require an intricate support system to keep running. And, like most business operations, to get more funding for your team, your mission, your whatever, you must appeal to the authorities above you.</p>
<p>From that perspective, is it not obvious why nation state actors would commit financial crimes? Maneuvering through someone else&rsquo;s machines is easier than maneuvering through the humans who control your budget<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>.</p>
<p><img src="/blog/img/2025-dbir/figure-22.png" alt="Figure 22 from the Verizon DBIR 2025, It shows a horizontal bar chart describing state-sponsored threat actor motives in incidents. The first data point shows state-sponsored actors had espionage motives in 74% of incidents, financial incidents in 28% of breaches, secondary motives in 26% of incidents, and other motives in 0.1% of incidents."></p>
<p>so, tl;dr Your threat model is still predominantly money crimes.</p>
<p>What does this – still admittedly minor – presence of espionage mean in practice? Why does “nation state pursuing financial crimes” matter as a distinction from “cyber criminals pursuing financial crimes”? Do their methods differ so much that we need different defenses in place?</p>
<p>Not really.</p>
<p>The distinction grabs attention but is not where our focus should lie. The more relevant question is: what is the essential element of my business that makes me a target? As in, if I make it harder, are attackers going to bother someone else, or do they want me specifically for some reason?</p>
<p>Simply put: we should stop worrying so much about who the attackers are, and more about who we are; what makes us special, rather than what makes specific attack operators special; how our differentiation, competitive dynamics, and market strategy reveals more “threat intelligence” than whether the attacker prefers bamboo or wood pulp TTPs.</p>
<p>If you die in a sophisticated attack vs. a crude one, you’re still dead.</p>
<p>Attackers are not stupid, but they often spend their effort on relatively stupid (or “simple”) techniques. I, too, will often take the trash panda, “lazy” way<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> rather than taking the higher-minded, effortful way unless necessary, and I suspect you, dear reader, can also relate.</p>
<p>The <a href="https://blog.xlab.qianxin.com/gayfemboy-en/">Gayfemboy botnet operators</a> epitomize this impetus to evolve only as necessary. To wit, XLab observed:</p>
<blockquote>
<p>However, the developers behind [the gayfemboy botnet] were clearly unwilling to remain mediocre. They launched an aggressive iterative development journey, starting with modifying registration packets, experimenting with UPX polymorphic packing, actively integrating N-day vulnerabilities, and even discovering 0-day exploits to continually expand Gayfemboy&rsquo;s infection scale.</p>
</blockquote>
<p>I certainly count the Gayfemboy botnet as “sophisticated” on these grounds<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>. I also think the Salt Typhoon campaign showed sophistication. I don’t think what the DBIR – and, certainly, the broader ecosystem – characterizes as “sophisticated” is necessarily so.</p>
<p>It raises a valuable question: <strong>what does <em>sophisticated</em> mean in the context of attacks and cybercrimes</strong>? Sophistication, definitionally<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup>, involves flavors of:</p>
<ul>
<li>Understanding how things really work</li>
<li>High quality or reflecting a high level of skill</li>
<li>Refined taste</li>
<li>Rizz<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup></li>
</ul>
<p>That first characteristic – understanding how the world <em>really</em> works – implies that sophisticated attackers will be those who adapt their techniques based on what works; sophisticated attackers won’t waste their finite time and effort on techniques that don’t.</p>
<p>From the attacker’s perspective, if they perform one big, flashy display their target quickly detects, now they’re dead. If they take consistent action to maintain access in their target systems for years, but those actions are “simple” or “stupid” – well, they’re still alive.</p>
<p>If we find someone persisted in our network for years using bash, after exploiting one lucky, exposed N-day vulnerability – maybe that <em>is</em> sophisticated.</p>
<p>Sophisticated is more about the attacker’s “broader goal” – their roadmap, vision, and ability to execute on it, if you will – rather than about the means they employ. Can an unsophisticated actor turn stolen credentials into a one-time smash and grab? Certainly. But can that actor turn those stolen credentials into getting in, and <em>staying in</em>, forever? Probably not. That is the hallmark of a sophisticated actor.</p>
<p>Perhaps an appropriate definition of “sophisticated attack” refers to how hard we must work to detect it. If we list admins and see new ones, that’s an APT TTP corresponding to MITRE ATT&amp;CK blah blah, but that’s not very hidden, and thus not very sophisticated.</p>
<p>“Sophisticated attack” vibes are like, someone smart cared about this attack operation, and that’s spooky because it means they had a purpose. What was that purpose? Hence, we return to the key question of: what is the essential element of my business that makes me a target?<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup></p>
<p>If we apply this line of reasoning to defense, does it not imply that “sophisticated” security strategies are those that prioritize consistency over flashiness? That take incremental action to adapt and persist in their mission of <a href="https://www.kellyshortridge.com/blog/posts/what-does-the-word-security-mean/#what-security-means-in-the-information-society">preserving the values</a> their business treasures most (like revenue, profitability, or DEI)?</p>
<h2 id="3-apts-go-bwaahaha-3">3. APTs go BWAAhaha &gt;:3</h2>
<p>One insight that might surprise many readers in the 2025 DBIR is that Espionage-flavored actors account for approximately 62% of Basic Web Application Attacks (BWAA), up from 10 - 20% historically (largely due to the 2025 DBIR’s data contributors proffering more espionage in their data set).</p>
<p><img src="/blog/img/2025-dbir/figure-58.png" alt="Figure 58 from the 2025 Verizon DBIR. It is a horizontal bar graph denoting top actor motives in Basic Web Application Attacks in 2024. Espionage accounted for 62% of attacker motivations. Financial accounted for 33% of attacker motivations. Ideology accounted for 4% of attacker motivations. Grudge accounted for 0.4% of attacker motivations. &amp;ldquo;Other&amp;rdquo; accounted for 0.4% of attacker motivations. Fun accounted for 0.3% of attacker motivations."></p>
<p>As nation states largely conduct espionage, this means oft-aggrandized APTs are booping and bopping your web applications to see what works before they escalate their efforts. Their workflow, not unlike that of most professional cyberattackers, goes something like:</p>
<ol>
<li>doodle around in shodan</li>
<li>look at some banners</li>
<li>(optional step: ingest more caffeine)</li>
<li>ponder, “what is that, anyway?”</li>
<li>google it</li>
<li>download source</li>
<li>fuck around</li>
<li>get a shell</li>
<li>throw in the wild</li>
<li>some extra steps the industry mostly ignores, anyway</li>
<li>profit / geopolitical advantage</li>
</ol>
<p>As I stated in my last section, being sophisticated is expensive and time consuming (as any 12-step skincare girlie knows all too well). It is still really smart for APTs to try the easier path of BWAA – but it also, hopefully, demystifies them (a good thing; the industry should never have mythologized them in the first place).</p>
<p>“But, but,” you’re thinking, “BWAA is just the vector for them to gain access to the underlying server and pivot!”</p>
<p>I asked this question to the DBIR team, and they clarified that BWAA refers to messing around at the app layer itself (like XSS), while “System Intrusion” reflects app exploitation to gain access to the underlying server (like Command Execution). I find this distinction fuzzy and frustrating, and another reason why I dislike VERIS. SQLi technically messes with the interaction between the app and the database, but counts as BWAA.</p>
<p>While I’m ranting here, I’ll also lament that they don’t track “abuse” as much as perhaps they should. What I hear from platform eng and security eng leaders alike is that the web app and API abuse events plague them most in terms of impact.</p>
<p>Anyway, my venom for VERIS aside, this data point likely means nation states view these layer 7 gallivants as lucrative enough to try. An individual BWAA attack may not spill out a platoon of candy from a single app piñata, but if you can mount such attacks at scale, against many apps – a capability one would hope a nation state subpod could develop – then they manifest their own whole ass Candy Land.</p>
<p>Through the lens of crimes of geopolitical passion (i.e. espionage) vs. money crimes, this trend perhaps also reflects how the world now conducts so much business via web apps. It’s Snowflake, Salesforce, Hubspot, Zoom, and Google Docs that vampirize our working hours, not <em>desktop</em> software<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup>. Web apps now hold the trade secrets, the sales projections, the strategic plans, product roadmaps, financials forthcoming in an SEC filing.</p>
<p>Do nation states learn about and understand market trends faster than blue teams? Why do attack groups seem to experiment with and adopt emerging technology at the same time blue teams resist those innovations?</p>
<p>Bonus points if you ask these questions at an RSAC cocktail party as a conversation opener.</p>
<h2 id="4-how-do-the-money-crimes-generate-money">4. How do the money crimes generate money?</h2>
<p>If most attacks – conducted by criminal organizations and nation states alike – involve financial motives, how do attackers convert those incidents and breaches into money?</p>
<p>The DBIR does not have many answers for us (nor is that the team’s fault), but it is, nevertheless, a worthy question to explore with the data they do present.</p>
<p>Ransomware is our obvious place to begin. tl;dr payouts are shrinking, and fewer organizations are paying any ransom at all (especially larger enterprises).</p>
<p>Are we improving at system restoration and disaster recovery? Or, alternatively, are the consequences of leaking customer data not actually that bad in practice?</p>
<p><img src="/blog/img/2025-dbir/figure-46.png" alt="Figure 46 from the Verizon DBIR. It shows three dot plots describing the distribution of ransom payments in USD between 2022 and 2024. The 5th percentile payout was $329 in 2022, $129 in 2023, and $300 in 2024. The 25th percentile payout was $980 in 2022, $1121 in 2023, and $999 in 2024. The median payout was $73500 in 2022, $150000 in 2023, and $115000 in 2024. The 75th percentile payout was $1335000 in 2022, $22790000 in 2023, and $1000000 in 2024. The 95th percentile payout was $77120000 in 2022, $9900250 in 2023, and $3637500 in 2024."></p>
<p>Ransomware&rsquo;s monetization path feels obvious, and some criminal entrepreneurs have executed that GTM strategy very well<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>. So, no surprise that there are no surprises in the report on the ransomware front.</p>
<p>The real mystery to me is why we keep seeing DDoS attacks. How are attackers even monetizing them enough to justify the effort? Why do they keep trying given they don’t succeed very often? These questions vex me<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup>.</p>
<p>A lazy hypothesis might be that attackers are stupid. Some are stupid and delulu, like any arbitrary group of humans, but I don’t think that explains this dynamic.</p>
<p>Ignorance is the more plausible explanation. As in, there are absolutely nation state actors who live in their nation state bubble and don’t participate in “hallway tracks,” or casually chat with private sector tech pros, or gain any experience outside classified government settings. The same goes for organized criminals in locales lacking high-scale digital businesses.</p>
<p>A lot of people now in relative positions of power may earnestly think, &ldquo;oh, if I get an X Gbps botnet then I can hold some digital terrain at risk!” They are probably wrong, because they do not understand CDN capacity and capability in 2025. They may not even know CDNs exist, or that most enterprises use them<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup>.</p>
<p>With that said, it’s true, based on the figures in this year’s DBIR, that most enterprises could not withstand or mitigate availability attacks on their own.</p>
<p><img src="/blog/img/2025-dbir/figure-72.png" alt="Figure 72 from the Verizon DBIR. The graph shows the distribution over time of Denial of Service traffic in bytes per second between 2018. The median grows from 1.1 Gbps in 2018 to 1.6Gbps in 2019 and 2020, rises to 6 Gbps in 2021, falls to 2.6 Gbps in 2022, falls again to 2.1 Gbps in 2023, before rising again to 4.2 Gbps in 2024. The 10% lower bound and 90% upper bounds rise and fall accordingly, with the upper bound topping out at around 60 Gbps in 2021."></p>
<p>These numbers boggle the mind; scrombulate the brain, if you will. DoS attacks of this size mean that you, a typical enterprise (let alone an SMB), really can’t run a site or service on more modest infrastructure without expecting a DoS to take it out.</p>
<p>These figures mean you really <em>must</em> use an infrastructure provider that has enough scale <em>and</em> pay for their DDoS protection because the packet and bandwidth rates of the largest attacks are astronomical.</p>
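<p>Some back-of-envelope math makes the point (the 4.2 Gbps figure is the 2024 median from Figure 72; the 1 Gbps uplink is purely my assumption of what “modest infrastructure” looks like):</p>
<pre><code class="language-python"># Back-of-envelope sketch: can a modest self-hosted origin absorb a median 2024 DoS attack?
# 4.2 Gbps is the 2024 median from Figure 72; the 1 Gbps uplink is my assumption, not DBIR data.
median_attack_gbps = 4.2
assumed_origin_uplink_gbps = 1.0

coverage = assumed_origin_uplink_gbps / median_attack_gbps
print(f"A 1 Gbps origin covers only ~{coverage:.0%} of the median attack's bandwidth")
# ~24%: the pipe saturates upstream of any on-host mitigation, which is why
# absorbing the attack at the provider/CDN layer ends up being the only realistic option.
</code></pre>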
<p>I must disclaim here that my Business Cat day job is serving as VP of Security Products <a href="https://www.fastly.com/blog/category/security?shortridge">at Fastly</a>, which is indeed one of those infrastructure providers offering CDN + the requisite security needed to ensure shit doesn&rsquo;t go wonky when you deliver software on the public internet.</p>
<p>Even so, when I asked the DBIR team why they haven&rsquo;t paid as much attention to network-based availability attacks, they explained that – like many in the industry – they view this as a &ldquo;solved problem,&rdquo; in the sense that everyone uses a CDN or CSP now to handle DoS mitigation.</p>
<p>There&rsquo;s truth to their view, and, based on this data, the strategy seems to work pretty well. I found it fascinating to learn that only 2% of availability incidents (mostly DoS) lead to service <em>interruption</em>. Even the degradation figures suggest some level of resilience against these attacks.</p>
<p>Of course, it&rsquo;s worth asking: how many DoS attempts were there that could not foment sufficient damage to receive the designation of incident?</p>
<p><img src="/blog/img/2025-dbir/figure-39.png" alt="Figure 39 from the Verizon DBIR. It is a horizontal bar chart showing availability varieties in incidents without data disclosure. Degradation was present in 78% of such incidents; obscuration was present in 19% of such incidents; interruption was present in 2.3% of such incidents; loss was present in 0.3% of such incidents; other varieties were present in 0.2% of such incidents."></p>
<p>The prevalence of degradation also reflects the nature of modern systems. We aren’t building monoliths anymore, so attackers may only succeed in taking down a <em>part</em> of a system – a blip rather than a blammo, so subtle that many users won’t even notice.</p>
<p>The flip side of this evolution away from monoliths is that it might be easier for attackers to disrupt or degrade a critical component that receives less traffic, like your cart checkout page, which will anger your customers and executives alike.</p>
<p>How much does that kind of targeting (and hypothetical impact) happen in practice? I hope next year&rsquo;s DBIR might answer that question.</p>
<p>Another hypothesis is that not all criminals are good entrepreneurs. Some are quite poor at the business side of crimes, which, naturally, softens their ability to penetrate the multi-billion dollar cybercrime <a href="https://en.wikipedia.org/wiki/Total_addressable_market">TAM</a>.</p>
<p>Cybercrime vendors, much like cybersecurity vendors, depend on brand awareness to succeed. Sometimes they are quite bad at conceiving and executing marketing campaigns – which, from what I’ve observed, is sometimes a sign they aren’t super great at identifying product/market fit, either (in cybercrime as in b2b tech, luck plays an enormous role, and you can stumble into success without ever truly understanding why).</p>
<p>My favorite example of criminals fumbling the bag in clumsy attempts at entrepreneurship appears in the U.S. Department of Justice’s <a href="https://www.justice.gov/usao-cdca/media/1373581/dl?inline">indictment against Anonymous Sudan</a> from last year (June 2024).</p>
<blockquote>
<p>Defendant AHMED OMER and UICC 1 would post messages on Telegram in channels they controlled claiming credit for these attacks, often providing proof of their efficacy in the form of Check Host reports showing that the victim computers were offline.</p>
</blockquote>
<p>Their claims often did not match the reality. For instance, when they tried to DDoS Netflix and only disrupted a few regions of service, they claimed Netflix was &ldquo;strongly down.&rdquo;</p>
<p>Or, in another instance:</p>
<blockquote>
<p>Overt Act No. 101: On May 22, 2023, defendant AHMED OMER sent private messages on Telegram stating, “check <a href="https://oref.org.il">https://oref.org.il</a>, if you find backend im ready to pay u good money.” The other party then sent AHMED OMER an IP address, to which AHMED OMER replied, “I fuck this ip but www.oref won’t down.”</p>
</blockquote>
<p>“I fuck this IP” is lowkey iconic as a pro-DDoS product slogan, but this message, along with messages that provide an IP and ask &ldquo;how to down?&rdquo;, perhaps does not instill a sense of confidence in their target customer base.</p>
<p>While originally motivated by ideology and national pride, Anonymous Sudan tried to monetize their operation over time, including creating an automated bot to collect payments from victims seeking a ceasefire:</p>
<blockquote>
<p>We can negotiate a price with you to halt all DDoS attacks immediately, and help you apply DDoS mitigation. To negotiate, contact us at our bot : @AnonymousSudan_Bot.</p>
</blockquote>
<p>It is unclear to what extent they ever monetized these attacks. They claim to have done so in some cases, like:</p>
<blockquote>
<p>On March 5, 2024, defendant AHMED OMER or another co-conspirator posted a message on Telegram stating, “After more than 48 hours of holding the Zain Bahrain network offline, we have finally reached a deal with them. Therefore, we&rsquo;ll stop all attacks on their networks immediately. This entire experience proves the revolutionary power of @InfraShutdown team in holding huge networks for days and getting multi-billion companies to their knees. You can request an attack of any scale and unlock this never seen before power by contacting @InfraShutdown_bot.</p>
</blockquote>
<p>But, well, do you believe their claims based on these messages?</p>
<p>Perhaps that&rsquo;s the most common ground blue teams can find with cyber criminals: whether on Telegram or in SoMA during RSAC, both of their bubbles drown in grandiose claims.</p>
<h2 id="5-attackers-are-still-not-really-using-genai">5. Attackers are still not really using GenAI</h2>
<p><a href="https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2024/">Last year</a>, I noted that attackers aren’t using GenAI. They still aren’t, really. And when they do, it doesn’t seem to make much of a difference.</p>
<p>Specifically, GenAI hasn’t really made phishing “better” in any meaningful sense, because phishing is still down in relative prevalence. How much of an asymmetric advantage does it really offer attackers? I suspect GenAI provides more ROI for aspiring CISO thought leaders on LinkedIn than for most attackers, although that same usefulness means attackers can create their own fake aspiring thought leader CISOs for more credible social engineering.</p>
<p><img src="/blog/img/2025-dbir/figure-23.png" alt="Figure 23 is a scatter plot describing the percentage of AI-assisted malicious emails between 2022 and 2025. It shows a general trend of the amount of AI-assisted emails steadily rising from approximately 5.5% in 2022 to approximately 10% at the end of 2025."></p>
<p>Alas, if the DBIR <em>hadn’t</em> investigated GenAI involvement in incidents and breaches, I’m sure the AI <del>evangelicals</del> enthusiasts would’ve slandered them with, “you’re missing the LLM impact! It must be huge!” I take its inclusion as the report-writing equivalent of bequeathing someone a token object in your will so they can’t claim they were inadvertently left out.</p>
<p>This isn’t to say GenAI isn’t used <em>anywhere</em>; it clearly is, but it seems not to have improved efficacy over the previous techniques. Writing spam email with fancier LLMs instead of more basic ML doesn’t necessarily improve deliverability or fidelity.</p>
<p>As a final note on GenAI, since I’m sure some VC or startup bros are already thinking “but wait, what about –” no, the data set doesn’t include any real-world attacks <em>on</em> LLMs themselves. Maybe this will be present in next year’s data set now that people are wiring LLMs to their proprietary data sets and letting them roam as free range models with minimal supervision.</p>
<h2 id="6-if-you-cant-make-your-own-0day-store-bought-creds-are-fine">6. If you can’t make your own 0day, store-bought creds are fine</h2>
<p>The industry obsesses over 0day<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>, but the 2025 DBIR shows only 8% of incidents involve vulnerabilities at all, and we must presume that even fewer of those involve zero day vulnerabilities (since, inherently, they decay into N-days once a patch releases).</p>
<p>Perhaps more surprising to readers is the report&rsquo;s insight that not even APTs obsess exclusively over vulnerabilities (see also #3 on my list). Combing through code to uncover exploitable conditions is hard, and writing code to exploit those conditions and assumptions can be even harder. And if cybersecurity vendors will spend marketing dollars on publishing new exploitable vulns to the world, why not save a few bucks and co-opt their work?</p>
<p><img src="/blog/img/2025-dbir/figure-16.png" alt="Figure 16 from the Verizon DBIR. It&amp;rsquo;s a spaghetti chart showing the known initial access vectors over time in non-Error, non-Misuse breaches. Credential abuse is described as the initial access vector in 35% of such breaches in 2022&amp;rsquo;s dataset, 40% in 2023, 30% in 2024, and 22% in 2025. Phishing is described as the initial access vector in 18% of breaches in 2022&amp;rsquo;s dataset, 13% in 2023, 16% in 2024, and 17% in 2025. Exploitation of vulnerabilities is described as the initial access vector in 7% of such breaches in 2022&amp;rsquo;s dataset, 6% in 2023, 16% in 2024, and 20% in 2025."></p>
<p>My take is we should pay less attention to year-over-year changes and examine the longer-term trends more. The annual vagaries depend on what 0-days attackers or researchers discover, and how easy it is to write exploits for them vs. write patches for them. More precisely, all that really matters when it comes to 0day – or vulnerabilities in general – is how easy it is to turn them into a viable attack, where viable can refer to stealth, scale, consistency, and the other attributes I discussed in #2 on this list.</p>
<p>Consider the big topic from Verizon&rsquo;s 2024 DBIR: MOVEit. MOVEit was a disaster because of its deployment model, tech-unsavvy customer base, and product proximity to ransomable data. In this year&rsquo;s DBIR, system intrusion campaigns seem to be frolicking with credential abuse, phishing, BWAA – so any vulns that pop up are a bonus.<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup></p>
<p>This is an important finding to internalize from this year&rsquo;s DBIR: use of stolen credentials continues to be a common attack pattern, because, to evoke Paul Hollywood from the Great British Bake-Off, it&rsquo;s “simple, but effective.” It’s cheap, easy, and scales to large operations (the report notes that 80% of stolen accounts had prior exposure). Attackers might even learn to make friends in their Telegram chats along the way!</p>
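<p>For defenders who want to act on the “prior exposure” point rather than just nod at it, here is a minimal sketch of one such check, using the public Pwned Passwords range API (k-anonymity: only the first five characters of the SHA-1 hash ever leave your network). Error handling, rate limiting, and anything resembling production hygiene are omitted:</p>
<pre><code class="language-python"># Minimal sketch: has this password already shown up in public breach corpora?
# Uses the Pwned Passwords range API; the response is lines of "HASH_SUFFIX:COUNT".
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate_suffix, _, count = line.partition(":")
            if candidate_suffix == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(times_seen_in_breaches("hunter2"))  # spoiler: a lot
</code></pre>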
<p>I find defenders often sleep on the potency of stolen creds, despite our general nihilism around nothing being secret anymore. The stolen cred problem is seen as “boring,” not one of those meaty technical problems into which you can sink your fangs and extract resume juice (jk, not jk).</p>
<p>To wit, even those infusing <a href="https://kellyshortridge.com/blog/posts/rfi-secure-by-design-response/">Secure by Design principles</a> into how they secure their customers – a thoughtful endeavor, to be sure – may still overlook these “basic bitch” attacks. Snowflake, for example, invested considerable resources into isolating each tenant within their own separate VPC while supporting every cloud service provider<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup>, only for those customers to get rocked by stolen credentials last year.</p>
<p><img src="/blog/img/2025-dbir/figure-56.png" alt="Figure 56 from the Verizon DBIR is a horizontal bar chart describing top action varieties in basic web application attacks during 2024. 88% of BWAA attacks used stolen credentials. 56% of BWAA attacks used brute force. 51% used other actions. 42% used a backdoor or C2."></p>
<p>We should respect our opponents, but not aggrandize them. When threat modeling, assume they&rsquo;ll follow Ina Garten&rsquo;s advice: If you can&rsquo;t make your own 0day, store-bought creds are fine.</p>
<h2 id="7-security-was-the-real-supply-chain-threat-all-along">7. Security was the real supply chain threat all along</h2>
<p>Open source software (OSS) issues don’t even seem to be big enough to warrant a mention in the 2025 DBIR. The only mention of OSS sits quietly in the news section towards the end, probably because talking heads and news outlets heavily report vulns in OSS, thanks to the frankly bizarre level of distaste and disdain the traditional cybersecurity sphere feels towards OSS.</p>
<p>However, curiously, many of the vulns the DBIR lists as leading to incidents reside in what the report euphemistically refers to as &ldquo;edge devices.&rdquo; These are more accurately described as security appliances, as shown in the list I helpfully defaced below.</p>
<p>Perhaps the cybersecurity community should dog whistle less about the &ldquo;insecure supply chain,&rdquo; typically meaning open-source software, when OSS is akin to humans of code giving things away for free with no warranty. You get what you pay for.</p>
<p>It&rsquo;s as if you raided someone&rsquo;s house, tore out their wiring, then complained that the wiring was substandard copper&hellip; meanwhile your own house is stuffed with asbestos behind mercury walls.</p>
<p><img src="/blog/img/2025-dbir/table-1-defaced.png" alt="Table 1 from the 2024 Verizon DBIR shows edge device vulnerabilities used by attacks in 2024 broken out by vendor, with the vendor&amp;rsquo;s name anonymized. I have defaced the chart, describing each vendor by a satirical name in bright old-school fonts: NetScoober, Forgortinet, iVaNti hot LAN parti, non-stick PANW Keanu&amp;rsquo;s retirement plan, SANICWOLOLOL, Crisco Systems, and Junpipper nooooooo networks."></p>
<h2 id="8-things-rot-apart">8. Things Rot Apart</h2>
<p>It’s 2025, and somehow VPNs and mail servers – the remnants of ye old ways when we had server janitors – persist in their ability to fuck your shit up. A simple security heuristic might be, “have you modernized yet or not?” where modernization means innovation hasn’t died and stunk up your stack like a decomposing skunk.</p>
<p>From that perspective, it is no surprise mail servers remain so prevalent in this data. Exchange is pooptacular (the technical term), but it’s also the best mail server out there, implying the remaining menagerie of mail servers is something like turbopooptacular<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>.</p>
<p>It’s been years since Google apps murdered any remaining innovation in running your own mail server. Typical exchanges (lolol) go something like:</p>
<p>“We have 20 years of investment sunk in this one mail server stack. Can we integrate some better security into it? Like, idk, even rsa tokens?”</p>
<p>“Absolutely not, because it needs to auth to legacy Outlook, which doesn&rsquo;t support that.”</p>
<p>“&hellip;oh, okay.”</p>
<p>Being a computer janitor for big, chunky, clunky servers you run on-prem is a dying industry. No one is entering it. Students aren’t excited to maintain servers older than they are. There is no innovation left to squeeze from this spoiled, shriveled lime of a niche.
For organizations still maintaining such servers, this means they cannot hire people to do it, and <strong>so it all rots</strong>.</p>
<p><img src="/blog/img/2025-dbir/figure-30.png" alt="Figure 30 from the 2025 Verizon DBIR shows the distribution of the median of days from initial notification until full remediation of edge device vulnerabilities. The 5th and 25th percentile of remediation occurred in under a day. The median remediation occurred within 32 days. The 75th percentile of remediation occurred within 155 days. The 95th percentile of remediation occurred within 222 days."></p>
<p>What does it mean for tech to rot, to be rotten? Tech rots either due to neglect, or the world progressing faster than it does (or faster than its operators’ knowledge). Sometimes we see both transpire in sinister symbiosis.</p>
<p>Tech rots when it’s all operators know, and the newer world scares or intimidates them. It rots in restructurings, reductions in force, and other such euphemisms that result in new operators who don’t want to touch it for fear of breaking it – or perhaps such changes result in no owners at all.</p>
<p>When analyzed through this meta lens, the data shows that the new ways – those often met with so much resistance by the recalcitrant cyber reductionists – strongly correlate with better security outcomes. None of us should suffer under the yoke of rotten tech. We don’t have to settle. And if we modernize our technology stacks, we need not sleep restlessly at night fearing insidious incidents and bowel-shattering breaches.</p>
<p>Conversely, rotten tech seems to serve as a leading indicator of incidents and breaches. Why do corporate IT systems rot so badly? Can we prevent it? What other harm does the rot cause? This brings us to&hellip;</p>
<h2 id="9-scooby-doos-spooky-kooky-corporate-it-caper">9. Scooby Doo&rsquo;s Spooky Kooky Corporate IT Caper</h2>
<p>It’s hard not to ponder the macro-level incident trends and see the spectre of corporate IT. Credential management, logins, vulnerable Blinky boxes, file transfer, email compromise, VPNs&hellip; for many organizations, security teams and their smorgasbord of cybersecurity vendor products fit under corporate IT.</p>
<p>These systems are poorly maintained and monitored, often years out of date. With such poor hygiene, should it surprise us that they&rsquo;re more likely to succumb to attacks? Thankfully, many of these IT systems live on their own infrastructure and thus can only inflict limited damage.</p>
<p>Security systems, however, insist on embedding themselves in every other system: if it involves a computer, someone is going to want to stick EDR on it and send logs to a SIEM. These tools want the most privileges because that&rsquo;s the easier design choice for them: should a future upgrade ever need to do more than the tool does today, it won&rsquo;t require additional permissions, because it already has them all.</p>
<p>Some of them are even designed to run arbitrary commands remotely, which is a sign that a piece of software might be hazardous to operate. This is such an obvious point to anyone who understands how computer systems work and yet eludes way too many security teams.</p>
<p>Being able to run arbitrary code is an exceptional thing for a system to do. Normal software systems don&rsquo;t do that. The norm is for software systems to perform their function, and if that function ever changes, someone must upgrade the system with new software.</p>
<p>It is perhaps the antithesis of Secure by Design and the Principle of Least Privilege, and it is fucking <em>everywhere</em> in security land – these &ldquo;edge devices&rdquo; included.</p>
<p>This is one reason, but not the only one, why the gulf between corporate IT and software engineering is wide, widens, will keep widening. Such practices indicate a cultural canyon between the two spheres.</p>
<p>Modern software engineering practices explicitly seek to facilitate change, agility, adaptation in ways that corporate IT practices simply do not and cannot (and it&rsquo;s why savvier security leaders transform their programs towards the former). In modern software engineering, updates are routine affairs involving many more components than corporate IT handles.</p>
<p>If we look at the custom software engineering teams build – potentially even out of vulnerable open source components, gasp! – the DBIR data (and data elsewhere) shows that this custom software seldom leads to attackers breaching their organization.</p>
<p>When will we see the RSAC banners about traditionalist security teams not caring enough about software security replace those that fearmonger about developers? Tribal narratives are powerful weapons to wield in propaganda. We should maintain intellectual integrity and call out poor software practices when we see them – ideally with constructive recommendations on how to improve – not simp for security vendors when they exhibit egregiously poor software practices.</p>
<p><img src="/blog/img/2025-dbir/supply-chain-security-tools-scooby-meme.png" alt="A meme image with Fred from Scooby Doo unmasking a ghost. In the first frame the costumed ghost character is labeled &amp;ldquo;supply chain threat.&amp;rdquo; In the second frame the now unmasked ghost character is labeled &amp;ldquo;security tools.&amp;rdquo;"></p>
<h2 id="10-at-least-some-things-are-improving-somewhere">10. At least some things are improving somewhere</h2>
<p>A bit of good news that deserves attention: median dwell time is down by nearly a week (from 30 days to 24). This is a substantial decrease over only two years and suggests organizations have better mechanisms to tell when they’ve been breached.</p>
<p>It would be interesting to know what factors influence this improvement most. Is it primarily engendered by cybersecurity professionals / vendors leveling up their game, or does it arise from technology / architectural shifts (like those common in the fabled “digital transformation”)?</p>
<p><img src="/blog/img/2025-dbir/figure-42.png" alt="Figure 42 is a dot plot from the Verizon DBIR showing the distribution of dwell time in non-Actor-disclosed breaches in 2022 and 2024. In 2022, the median dwell time was 30 days. In 2024, the median dwell time was 24 days. Dwell time dropped across the entire distribution between 2022 and 2024."></p>
<p>The 2025 DBIR observes a continued shift away from credit card data stolen in breaches, a trend we should also celebrate. I suspect it evinces the success of chip &amp; PIN, as well as NFC (remember when the cyber industrial complex spewed a barrage of FUD about NFC payments and how it reflected the fraudpocalypse? Pepperidge Farm remembers).</p>
<p>The DBIR&rsquo;s insight that Magecart represents 1% of System Intrusion breaches yet makes up 80% of payment card breaches suggests the latter category has plummeted in volume. Now we just need digital payment wallets to really take off on desktop to render this category irrelevant.</p>
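<p>Spelling out the arithmetic behind that inference (with the simplifying assumption that the Magecart breaches counted in both statistics are the same set):</p>
<pre><code class="language-python"># If Magecart is 1% of System Intrusion breaches yet 80% of payment card breaches,
# payment card breaches are a tiny sliver of System Intrusion volume.
# Simplifying assumption: the Magecart breaches in both ratios are the same breaches.
magecart_share_of_system_intrusion = 0.01
magecart_share_of_payment_card = 0.80

payment_card_vs_system_intrusion = (
    magecart_share_of_system_intrusion / magecart_share_of_payment_card
)
print(f"Payment card breaches ~= {payment_card_vs_system_intrusion:.2%} of System Intrusion volume")
# ~1.25%, which is why the payment card category reads as having plummeted.
</code></pre>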
<h2 id="11-people-continue-to-click-things-on-the-thing-clicking-machine">11. people continue to click things on the thing clicking machine</h2>
<p>While the Verizon DBIR is far from the worst offender on this front, security reports often give off &ldquo;you need to touch grass&rdquo; energy. This is especially true when they touch on topics related to humans clicking on things that facilitate incidents or breaches.</p>
<p>Users – the humans who interact with the technology we&rsquo;ve made ubiquitous in their lives – expect to be able to click things and for the world not to implode. Clicking things is what many jobs are now. It should be safe to click links in the same way that it is safe to answer the phone.</p>
<p>The fact that it is not is an indictment of the security industry, not human behavior.</p>
<p>This allows me to send you off, dear reader, with one of my old memes, so we can depart back into the cybersecurity realm with a sense of enlightenment and, hopefully, humility.</p>
<p><img src="/blog/img/2024-dbir/thing-clicking-machine.jpeg" alt="The horse sketch meme adapted by yours truly to illustrate the sad intellectual decline of the cybersecurity industry. The well-drawn end of the horse starts with the labels multilevel security, trusting trust, and formal methods. As the drawing gets progressively worse, the labels are firewalls, threat intelligence, and, once we reach the level of stick figure, the labels are machine learning for anomaly detection, and “prevent people from clicking things on the thing-clicking machine.&amp;rdquo;"></p>
<hr>
<h2 id="thx-ap-3">thx AP &lt;3</h2>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>It&rsquo;s a little cringe that the M-Trends report launched on the same day as the DBIR&rsquo;s widely publicized release date. They should feel more confident in their own report and ensure it has a chance to shine; the competitive vibe isn&rsquo;t the best look when such reports should be in the spirit of leveling up the community.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Indeed, throughout the report, one can feel the “data people with integrity feel anxiety about the data” vibes. This is a far better thing than the more pernicious “lying with data.”&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Imagine a sentient report, if you will, singing “What was I made for?” by Billie Eilish. Just replace “Just something you paid for” with “Just something you transmuted yourself into an MQL for.”&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>I think it’s difficult to get an accurate ransomware total. We can see the total number of ransomware incidents they tracked: ~9703. However, I&rsquo;m not sure what proportion they track in their dataset vs. miss. Of those 9703, 64% didn’t pay, leaving 3493. We only have the median, so it’s difficult to get to the aggregate amount paid in their dataset. If we work backwards from the median -&gt; total calculation for BEC, which gives a 6.6 multiplier from median to mean, that suggests $2.66 billion in their dataset. A <em>very</em> <em>very</em> rough calculation.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
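<p>The same napkin math in code: the 2024 median payout of $115,000 comes from Figure 46, and everything else is the assumption chain above.</p>
<pre><code class="language-python"># The footnote's very rough estimate, written out. Treat the output accordingly.
ransomware_incidents = 9703
share_that_paid = 1 - 0.64            # 64% paid nothing
median_payout_usd = 115_000           # 2024 median from Figure 46
median_to_mean_multiplier = 6.6       # borrowed from the BEC median -> total relationship

estimate_usd = ransomware_incidents * share_that_paid * median_payout_usd * median_to_mean_multiplier
print(f"~${estimate_usd / 1e9:.2f}bn")  # ~$2.65bn, i.e. the ~$2.66bn cited in the post
</code></pre>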
</li>
<li id="fn:5">
<p>Is it more or less than the satisfaction a cyber reductionist CISO feels when wearing sweet APT-themed swag?&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Lockbit’s cumulative revenue hit at least <a href="https://www.state.gov/reward-for-information-lockbit-ransomware-as-a-service/">$144 million</a> in ransom payments between 2020 and the end of 2023. If I told you about a self-made startup founder who developed innovative SaaS technology and grew it to ~$50mm+ ARR in 3 years with zero VC funding, you’d be like, “who is this genius visionary??” It is a mistake to view attackers as genius, superhuman villains wielding silicon magic when they are, in practice, closer to shrewd tech entrepreneurs who identify product/market fit, execute on it by building software with iterative feature development, and successfully scale their operations to address market demand (and yes, sometimes that demand is espionage-flavored, as in “As an aspiring empire, we want to collect data from our primary adversary’s telecom providers so we can create richer profiles of key decision-makers in their government or, possibly, blackmail them.”). Game recognizes game<sup id="fnref1:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>I wonder if there&rsquo;s a doc akin to the CIA&rsquo;s <a href="https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/SimpleSabotage.pdf"><em>Simple Sabotage Field Manual</em></a> &ndash; their guide for disrupting progress in workplaces &ndash; but to disrupt your adversary&rsquo;s attack groups by installing egotistical, smoke-blowing &ldquo;leaders&rdquo; who took a weekend MBA course and now think they are Frank Quattrone.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>see also how I solve every puzzle in The Legend of Zelda: Tears of the Kingdom with the time reverse thingy&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>if only more cyber reductionist security pros and linkedin thought leaders were “unwilling to remain mediocre”&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Etymologically, however, sophisticated / sophistication served as a pejorative, related to the root word <a href="https://www.kellyshortridge.com/blog/posts/what-does-the-word-security-mean/#a-platonic-dialogue">“sophists,”</a> i.e. those who tell mellifluous, remunerative lies to eager ears. But the original original meaning indeed related to someone who is a master at their craft.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>To quote Rick Ross, “This shit is highly sophisticated, I just make it look easy”&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>And this requires actually understanding what your business does, where it fits in your market, how it generates revenue, where it burns costs, and similar knowledge that traditionalist cybersecurity pros too often disdain.&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>you can pry the desktop MS Office suite from my cold, dead hands tho.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>Dmitry Khoroshev walked so Luigi Mangione could run.&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a>&#160;<a href="#fnref1:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>I bet Dmitry Khoroshev would know the answer(s). If you’re reading this, I’d love to “interview” you, if you know what I mean…&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>I recall attending a security conference in a country stereotypically associated with prolific attack activity, and most attendees did not know what 2-factor authentication was.&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>0day can be sexy, almost artistic in its manipulation of underlying machinations, but few that become public excite me. Annual reminder that I&rsquo;ll go on a date with anyone who gives me a pre-auth RCE 0day in SSH.&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>Magecart might be the exception, but I suspect the patching rate is quite low on the long tail of online retailers.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>the product people reading just gasped, I assure you.&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>Gartner will likely label it “next-gen pooptacular” instead.&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Deciduous-VS: Local Decision Tree Threat Modeling in VSCode</title>
            <link>https://kellyshortridge.com/blog/posts/deciduous-for-vscode-local-decision-tree-editing/</link>
            <pubDate>Thu, 19 Dec 2024 08:00:00 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/deciduous-for-vscode-local-decision-tree-editing/</guid>
            <description>My Deciduous co-conspirator, Ryan Petrich1, created a Visual Studio Code extension (Deciduous-VS) that lets you edit decision tree threat models alongside a live graph preview within VSCode.
This is a cool DX upgrade in itself, but this release also makes Deciduous compatible with GDPR and other regimes requiring a local development experience… like classified environments (you’re welcome federal Deciduous fanbois across the world (u know who u are)2).
If you aren’t familiar with the open source Deciduous project, nor familiar with the delectation of developing decision trees via threat modeling as code3, visit the repo for examples and getting started resources.
why use Deciduous-VS? The Deciduous VSCode extension lets platform engineering, app dev, or security teams build decision tree-based threat models with the same live, interactive experience as the web app but within the world’s most popular IDE.
In practice, most engineering teams build and update Deciduous threat models in a collaborative fashion. Harnessing behavioral game theory to prepare for adverse scenarios requires variegated perspectives and assumption-poking by your own co-conspirators (although especially neurotic individuals, like yours truly, may enjoy solo play, too).
You’ll likely pull up Deciduous on a shared virtual screen or big screen if IRL, making on-the-fly changes to the YAML code that defines the decision tree. With each change – in the spirit of immediate gratification that the technological society demands – Deciduous-VS immediately updates the rendered graph in VSCode’s secondary editor pane.
Using Deciduous-VS doesn’t lock you into VScode for your decision trees; you can share the Deciduous .yaml’s between the hosted (web) version and IDE version – while respecting your organization’s classification or compliance requirements, naturally4.
Deciduous-VS vs. deciduous.app Deciduous-VS offers most of the features available in the hosted app, including:
Easy YAML structure to configure your decision trees using facts, attacks5 (which can be any type of failure to generalize to other platform engineering scenarios), and mitigations Subjective commentary (i.e. optional shade throwing) on connections between nodes, e.g. labeling a decision to perform some action as &#39;\#yolo&#39; Syntax highlighting to identify and keep track of facts, mitigations, and attacks/failures within your code Clickable nodes and edges in the graph that highlight the corresponding YAML source code to make navigating complex scenarios quicker Three graph color schemes that match the web app: dark, light, and accessible6 But! We actually gain more than just deciduous.app parity by slogging our way through Azure’s convoluted IAM packaging decision tree-based threat modeling into VSCode, sprouting new functionality like:
Quickly toggling between multiple decision trees in different tabs: useful for comparing trees or, as I’ve often found when advising organizations through their cross-team threat modeling exercises, quickly flipping to a specific graph when inspiration strikes while editing a different graph (part of why decision trees are so effective for threat modeling is how they nurture emergent, evolving ideation7) Right-click export of the graph as PNG, SVG, or DOT directly to the local directory of your choice: useful for sharing the visual representation of Deciduous attack trees into other systems, from the comfort of your IDE Interactive git integration via VSCode’s native SCM features: useful to version decision trees over space and time as their unique context changes Collaborative editing via VSCode Live Share: useful for working with coworkers/co-conspirators/comrades on a decision tree without the hassle of screen sharing where only one human can make edits and everyone has to put up with blurry delayed text8 Humoring security puritans: useful for editing behind a firewall or with a corporate policy that prohibits entering proprietary information into a web page (as we keep stressing, the server running deciduous.app doesn’t process your data; Deciduous runs entirely local within your browser, but the kind folks writing such security policies do not understand this (or pretend not to) nor know how to audit the source code (which is open), seemingly9) installing Deciduous-VS To get started, search for Deciduous-VS in the VS Code extension “activity”10 or press the Install button on this Visual Studio marketplace link. Once installed, activate Deciduous from the Command Palette or press the default assigned keyboard shortcut of Ctrl&#43;Shift&#43;D on Windows/Linux or ⌘⇧D on macOS.
By pressing these arcane symbols in the sacred order, you will command the thinking sand to vibe the Deciduous preview pane into life.
Many thanks to @fire for opening the VSCode extension request, as well as to all my cherished fanbois who’ve bugged me for a local version over the years.
As always, we beseech you to publish your own use cases to propitiate us keep inspiring the Deciduous community and prove that threat modeling deserves civilized workflows. Please open issues in the repo should you have requests or send feedback via carrier pigeon11.
in fairness, yours truly did attempt to write a Deciduous VS code extension last year, but did not finish due to the grim vicissitudes of living in a mortal society, and, subsequently, due to secret professional-flavored vagaries I will reveal in a few lunations as kindling against the raw, frostbitten nadir of northward winter, and I believe my co-conspirator grew impatient and thus spite-driven dev’d his way into building it over a long weekend, to my surprise and unabashed delight ↩︎
a little birdie told me y’all often use VS code for your spooky lil deep state dev, we’re ok with you using deciduous for role reversal xoxo ↩︎
just like everything had to be aaS a decade ago, so must everything be “as code” now… but wait, if we made this a B2B startup, would it then be Behavioral Decision-tree Security Modeling as a service??? ↩︎
we won’t tell; snitches get ditches (but also this is a labor of spite love, so we aren’t paid enough to tell, and delvauxs don’t buy themselves) ↩︎
they fact but they also attacc?? ↩︎
the three genders, according to ao3 ↩︎
the corporate way of saying it works with your adhd rather than against it ↩︎
or the human has an uncorporately wide monitor because they repurposed their old curved gaming screen and tries to apologize for the absurd aspect ratio over zoom but then the others accuse them of humble bragging as if size queening over monitors isn’t so 2000 and late and boys, girls, and whirlidirls, if you only knew how my bilighting gaming rig is as hot as my nyc apartment radiators as is my penchant for orotund whale facts as is the yearning for meaning in the soul of any man haunted by hopes boiled in beluga blubber lamps, adrift with salt spray orisons barnacled in existentialist obsession, and what else would sink a man to such foamy, gloaming fathoms of meta? ↩︎
or maybe this is you and you now know this! Learning and adapting is a beautiful, deserved act of self-compassion ↩︎
Microsoft calls them “activities” like VSCode is a Chuck-e-cheese ↩︎
ideally attaching a knead love feelings brownie as bribery, I’m sure the pigeon can make it to the fastly nyc offices ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>My Deciduous co-conspirator, Ryan Petrich<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, created a Visual Studio Code extension (<a href="https://marketplace.visualstudio.com/items?itemName=shortridge-sensemaking.deciduous-vs">Deciduous-VS</a>) that lets you edit decision tree threat models alongside a live graph preview within VSCode.</p>
<p>This is a cool DX upgrade in itself, but this release also makes Deciduous compatible with GDPR and other regimes requiring a local development experience&hellip; like classified environments (you&rsquo;re welcome federal Deciduous fanbois across the world (u know who u are)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>).</p>
<p>If you aren&rsquo;t familiar with the open source <a href="https://www.deciduous.app/">Deciduous project</a>, nor familiar with the delectation of developing decision trees via threat modeling as code<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>, <a href="https://github.com/rpetrich/deciduous">visit the repo</a> for examples and getting started resources.</p>
<h2 id="why-use-deciduous-vs">why use Deciduous-VS?</h2>
<p>The <a href="https://marketplace.visualstudio.com/items?itemName=shortridge-sensemaking.deciduous-vs">Deciduous VSCode extension</a> lets platform engineering, app dev, or security teams build decision tree-based threat models with the same live, interactive experience <a href="https://www.deciduous.app/">as the web app</a> but within the world&rsquo;s <a href="https://www.reddit.com/r/ProgrammerHumor/comments/1hdmjrd/iheartvscode/">most popular</a> IDE.</p>
<p>In practice, most engineering teams build and update Deciduous threat models in a collaborative fashion. Harnessing behavioral game theory to prepare for adverse scenarios requires variegated perspectives and assumption-poking by your own co-conspirators (although especially neurotic individuals, like yours truly, may enjoy solo play, too).</p>
<p>You&rsquo;ll likely pull up Deciduous on a shared virtual screen or big screen if IRL, making on-the-fly changes to the YAML code that defines the decision tree. With each change – in the spirit of immediate gratification that the technological society demands – Deciduous-VS immediately updates the rendered graph in VSCode&rsquo;s secondary editor pane.</p>
<p><img src="/blog/img/deciduous-vs-screenshot.png" alt="A screenshot of Deciduous-VS running inside of Visual Studio Code, with YAML code in the left pane describing an example decision tree and a visual rendering of the threat model in the right pane. The attack tree describes the scenario of sensitive video recordings stored in S3. Each branch reflects a series of attacker actions and defensive mitigations, such as, &amp;ldquo;If the S3 bucket is public, the attacker can use the Wayback archive to find them; but if defenders make the bucket private, attackers now must brute force their way past access control.&amp;rdquo; The scenarios are organized into an easily navigable tree that makes it easier to mentally model adverse scenarios across spacetime."></p>
<p>Using Deciduous-VS doesn&rsquo;t lock you into VScode for your decision trees; you can share the Deciduous .yaml&rsquo;s between the hosted (web) version and IDE version – while respecting your organization&rsquo;s classification or compliance requirements, naturally<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<h2 id="deciduous-vs-vs-deciduousapp">Deciduous-VS vs. deciduous.app</h2>
<p><a href="https://marketplace.visualstudio.com/items?itemName=shortridge-sensemaking.deciduous-vs">Deciduous-VS</a> offers most of the features available in <a href="https://www.deciduous.app/">the hosted app</a>, including:</p>
<ul>
<li>Easy YAML structure to configure your decision trees using <code>facts</code>, <code>attacks</code><sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> (which can be any type of failure to generalize to other platform engineering scenarios), and <code>mitigations</code></li>
<li>Subjective commentary (i.e. optional shade throwing) on connections between nodes, e.g. labeling a decision to perform some action as <code>'\#yolo'</code></li>
<li>Syntax highlighting to identify and keep track of facts, mitigations, and attacks/failures within your code</li>
<li>Clickable nodes and edges in the graph that highlight the corresponding YAML source code to make navigating complex scenarios quicker</li>
<li>Three graph color schemes that match the web app: dark, light, and accessible<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup></li>
</ul>
<p>But! We actually gain more than just deciduous.app parity by <del>slogging our way through Azure&rsquo;s convoluted IAM</del> packaging decision tree-based threat modeling into VSCode, sprouting new functionality like:</p>
<ul>
<li>Quickly toggling between multiple decision trees in different tabs: useful for comparing trees or, as I&rsquo;ve often found when advising organizations through their cross-team threat modeling exercises, quickly flipping to a specific graph when inspiration strikes while editing a different graph (part of why decision trees are so effective for threat modeling is how they nurture emergent, evolving ideation<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>)</li>
<li>Right-click export of the graph as PNG, SVG, or DOT directly to the local directory of your choice: useful for sharing the visual representation of Deciduous attack trees into other systems, from the comfort of your IDE</li>
<li>Interactive git integration via VSCode&rsquo;s native SCM features: useful to version decision trees over space and time as their unique context changes</li>
<li>Collaborative editing via VSCode Live Share: useful for working with coworkers/co-conspirators/comrades on a decision tree without the hassle of screen sharing where only one human can make edits and everyone has to put up with blurry delayed text<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup></li>
<li>Humoring security puritans: useful for editing behind a firewall or with a corporate policy that prohibits entering proprietary information into a web page (as we keep stressing, the server running deciduous.app doesn&rsquo;t process your data; Deciduous runs entirely local within your browser, but the kind folks writing such security policies do not understand this (or pretend not to) nor know how to audit the source code (which is open), seemingly<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>)</li>
</ul>
<h2 id="installing-deciduous-vs">installing Deciduous-VS</h2>
<p>To get started, search for Deciduous-VS in the VS Code extension “activity”<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup> or press the Install button on <a href="https://marketplace.visualstudio.com/items?itemName=shortridge-sensemaking.deciduous-vs">this Visual Studio marketplace link</a>. Once installed, activate Deciduous from the Command Palette or press the default assigned keyboard shortcut of Ctrl+Shift+D on Windows/Linux or ⌘⇧D on macOS.</p>
<p>By pressing these arcane symbols in the sacred order, you will command the thinking sand to vibe the Deciduous preview pane into life.</p>
<p><img src="/blog/img/decidous-vs-demo.gif" alt="An animated gif demonstrating how to use Deciduous-VS within VSCode, including clicking on attack or mitigation nodes to jump to the relevant YAML code that defines the node, and any interactions between it and other nodes. It also demonstrates how the graph in the visual pane on the right side of your IDE changes as you update the YAML code."></p>
<p>Many thanks to @fire opening <a href="https://github.com/rpetrich/deciduous/issues/16">the VScode extension request</a> as well as to all my cherished fanbois who&rsquo;ve bugged me for a local version over the years.</p>
<p>As always, we beseech you to publish your own use cases to <del>propitiate us</del> keep inspiring the Deciduous community and prove that threat modeling deserves civilized workflows. Please open issues <a href="https://github.com/shortridge-sensemaking/deciduous-vscode">in the repo</a> should you have requests or send feedback via carrier pigeon<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup>.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>in fairness, yours truly did attempt to write a Deciduous VS code extension last year, but did not finish due to the grim vicissitudes of living in a mortal society, and, subsequently, due to secret professional-flavored vagaries I will reveal in a few lunations as kindling against the raw, frostbitten nadir of northward winter, and I believe my co-conspirator grew impatient and thus spite-driven dev&rsquo;d his way into building it over a long weekend, to my surprise and unabashed delight&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>a little birdie told me y&rsquo;all often use VS code for your spooky lil deep state dev, we&rsquo;re ok with you using deciduous for role reversal xoxo&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>just like everything had to be aaS a decade ago, so must everything be “as code” now… but wait, if we made this a B2B startup, would it then be Behavioral Decision-tree Security Modeling as a service???&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>we won&rsquo;t tell; snitches get ditches (but also this is a labor of <del>spite</del> love, so we aren&rsquo;t paid enough to tell, and delvauxs don&rsquo;t buy themselves)&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>they fact but they also attacc??&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>the three genders, according to ao3&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>the corporate way of saying it works with your adhd rather than against it&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>or the human has an uncorporately wide monitor because they repurposed their old curved gaming screen and tries to apologize for the absurd aspect ratio over zoom but then the others accuse them of humble bragging as if size queening over monitors isn&rsquo;t so 2000 and late and boys, girls, and whirlidirls, if you only knew how my bilighting gaming rig is as hot as my nyc apartment radiators as is my penchant for orotund whale facts as is the yearning for meaning in the soul of any man haunted by hopes boiled in beluga blubber lamps, adrift with salt spray orisons barnacled in existentialist obsession, and what else would sink a man to such foamy, gloaming fathoms of meta?&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>or maybe this is you and you now know this! Learning and adapting is a beautiful, deserved act of self-compassion&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Microsoft calls them “activities” like VSCode is a Chuck-e-cheese&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>ideally attaching a knead love feelings brownie as bribery, I&rsquo;m sure the pigeon can make it to the fastly nyc offices&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2023 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2023-reading-list/</link>
            <pubDate>Mon, 28 Oct 2024 13:06:36 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2023-reading-list/</guid>
            <description>I read 26 books in 2023, but the one I read the most was my own book during the warpspeed editing process. My non-fiction tome on the subject of software resilience came out in Spring 2023 and the reader response fulfills my soul like little else.
After devouring so many resources for my own book, I admittedly grew a bit sick of reading1. I dabbled more in reading short fiction and poetry via literary journals in 2023; I’m still figuring out which ones most nurture my creative energy.
As always, I am not rating or recommending any specific works in the list below. With that said, I dedicate considerable effort to screening books beforehand since time is the most precious, fleeting resource we possess.
If you’re looking for more science fiction, speculative fiction, pretentious literary fiction, philosophical navel-gazing, or non-fiction recommendations, check out my reading lists from prior years:
2022 reading list
2021 reading list
2020 reading list
2019 reading list
2018 reading list
2017 reading list
2016 reading list
Fiction
Convenience Store Woman by Sayaka Murata
The Daughter of Doctor Moreau by Silvia Moreno-Garcia
The Death of Vivek Oji by Akwaeke Emezi
Dept. of Speculation by Jenny Offill
The Factory by Hiroko Oyamada
Kaikeyi by Vaishnavi Patel
A Memory Called Empire by Arkady Martine
The Moon and Sixpence by W. Somerset Maugham
Nightwood by Djuna Barnes
The Overstory by Richard Powers
Robopocalypse by Daniel H. Wilson
Severance by Ling Ma
The Sound and the Fury by William Faulkner
Speedboat by Renata Adler
Things You May Find Hidden in My Ear: Poems from Gaza by Mosab Abu Toha
We by Yevgeny Zamyatin
Non-fiction
Bad Gays by Huw Lemmey and Ben Miller
Color and Light: A Guide for the Realist Painter by James Gurney
Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls That Derail Us by Eugene Spafford, Leigh Metcalf, and Josiah Dykstra
The Extended Mind: The Power of Thinking Outside the Brain by Annie Murphy Paul
How Far the Light Reaches: A Life in Ten Sea Creatures by Sabrina Imbler
Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer by Kathy Kleiman
The Technological Society by Jacques Ellul
What an Owl Knows: The New Science of the World’s Most Enigmatic Birds by Jennifer Ackerman
Wild Problems: A Guide to the Decisions That Define Us by Russ Roberts
World of Wonders: In Praise of Fireflies, Whale Sharks, and Other Astonishments by Aimee Nezhukumatathil
And writing, hence why this post on my 2023 reading list is coming out nearly 11 months into 2024… ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>I read 26 books in 2023, but the one I read the most was <a href="https://www.securitychaoseng.com/">my own book</a> during the warpspeed editing process. My non-fiction tome on the subject of <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471?ean=9781098113827">software resilience</a> came out in Spring 2023 and the reader response fulfills my soul like little else.</p>
<p>After devouring so many resources for my own book, I admittedly grew a bit sick of reading<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. I dabbled more in reading short fiction and poetry via literary journals in 2023; I&rsquo;m still figuring out which ones most nurture my creative energy.</p>
<p>As always, I am not rating or recommending any specific works in the list below. With that said, I dedicate considerable effort to screening books beforehand since time is the most precious, fleeting resource we possess.</p>
<p>If you’re looking for more science fiction, speculative fiction, pretentious literary fiction, philosophical navel-gazing, or non-fiction recommendations, check out my reading lists from prior years:</p>
<ul>
<li><a href="/blog/posts/2022-reading-list">2022 reading list</a></li>
<li><a href="/blog/posts/2021-reading-list">2021 reading list</a></li>
<li><a href="/blog/posts/2020-reading-list">2020 reading list</a></li>
<li><a href="/blog/posts/2019-reading-list">2019 reading list</a></li>
<li><a href="/blog/posts/2018-reading-list">2018 reading list</a></li>
<li><a href="/blog/posts/2017-reading-list">2017 reading list</a></li>
<li><a href="/blog/posts/2016-reading-list">2016 reading list</a></li>
</ul>
<h2 id="fiction">Fiction</h2>
<p><a href="https://bookshop.org/p/books/convenience-store-woman-sayaka-murata/7393087?ean=9780802129628">Convenience Store Woman</a> by Sayaka Murata</p>
<p><a href="https://bookshop.org/p/books/the-daughter-of-doctor-moreau-silvia-moreno-garcia/17572196?ean=9780593355350">The Daughter of Doctor Moreau</a> by Silvia Moreno-Garcia</p>
<p><a href="https://bookshop.org/p/books/the-death-of-vivek-oji-akwaeke-emezi/13330817?ean=9780525541622">The Death of Vivek Oji</a> by Akwaeke Emezi</p>
<p><a href="https://bookshop.org/p/books/dept-of-speculation-jenny-offill/8304572?ean=9780345806871">Dept. of Speculation</a> by Jenny Offill</p>
<p><a href="https://bookshop.org/p/books/the-factory-hiroko-oyamada/12421042?ean=9780811228855">The Factory</a> by Hiroko Oyamada</p>
<p><a href="https://bookshop.org/p/books/kaikeyi-vaishnavi-patel/17360977?ean=9780759557307">Kaikeyi</a> by Vaishnavi Patel</p>
<p><a href="https://bookshop.org/p/books/a-memory-called-empire-arkady-martine/6986710?ean=9781250186447">A Memory Called Empire</a> by Arkady Martine</p>
<p><a href="https://bookshop.org/p/books/the-moon-and-sixpence-w-somerset-maugham/401908?ean=9780143039341">The Moon and Sixpence</a> by W. Somerset Maugham</p>
<p><a href="https://bookshop.org/p/books/nightwood-djuna-barnes/7343203?ean=9780811216715">Nightwood</a> by Djuna Barnes</p>
<p><a href="https://bookshop.org/p/books/the-overstory-richard-powers/17315941?ean=9780393356687">The Overstory</a> by Richard Powers</p>
<p><a href="https://bookshop.org/p/books/robopocalypse-daniel-h-wilson/15283741?ean=9780307740809">Robopocalypse</a> by Daniel H. Wilson</p>
<p><a href="https://bookshop.org/p/books/severance-ling-ma/9880874?ean=9781250214997">Severance</a> by Ling Ma</p>
<p><a href="https://bookshop.org/p/books/the-sound-and-the-fury-william-faulkner/6705121?ean=9780679732242">Sound and the Fury</a> by William Faulkner</p>
<p><a href="https://bookshop.org/p/books/speedboat-renata-adler/11767704?ean=9781590176139">Speedboat</a> by Renata Adler</p>
<p><a href="https://bookshop.org/p/books/things-you-may-find-hidden-in-my-ear-poems-from-gaza-mosab-abu-toha/17260642?ean=9780872868601">Things You May Find Hidden in My Ear: Poems from Gaza</a> by Mosab Abu Toha</p>
<p><a href="https://bookshop.org/p/books/we-yevgeny-zamyatin/16634156?ean=9780063068445">We</a> by Yevgeny Zamyatin</p>
<h2 id="non-fiction">Non-fiction</h2>
<p><a href="https://bookshop.org/p/books/bad-gays-a-homosexual-history-ben-miller/18814639?ean=9781839763281">Bad Gays</a> by Huw Lemmey and Ben Miller</p>
<p><a href="https://bookshop.org/p/books/color-and-light-a-guide-for-the-realist-paintervolume-2-james-gurney/944750?ean=9780740797712">Color and Light: A Guide for the Realist Painter</a> by James Gurney</p>
<p><a href="https://bookshop.org/p/books/cybersecurity-myths-and-misconceptions-avoiding-the-hazards-and-pitfalls-that-derail-us-josiah-dykstra/18485777?ean=9780137929238">Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls That Derail Us</a> by Eugene Spafford, Leigh Metcalf, and Josiah Dykstra</p>
<p><a href="https://bookshop.org/p/books/the-extended-mind-the-power-of-thinking-outside-the-brain-annie-murphy-paul/17058771?ean=9780358695271">The Extended Mind: The Power of Thinking Outside the Brain</a> by Annie Murphy Paul</p>
<p><a href="https://bookshop.org/p/books/how-far-the-light-reaches-a-life-in-ten-sea-creatures-sabrina-imbler/18790437?ean=9780316540537">How Far the Light Reaches: A Life in Ten Sea Creatures</a> by Sabrina Imbler</p>
<p><a href="https://bookshop.org/p/books/proving-ground-the-untold-story-of-the-six-women-who-programmed-the-world-s-first-modern-computer-kathy-kleiman/18353598?ean=9781538718292">Proving Ground: The Untold Story of the Six Women Who Programmed the World&rsquo;s First Modern Computer</a> by Kathy Kleiman</p>
<p><a href="https://bookshop.org/p/books/the-technological-society-jacques-ellul/11253635?ean=9780394703909">The Technological Society</a> by Jacques Ellul</p>
<p><a href="https://bookshop.org/p/books/what-an-owl-knows-the-new-science-of-the-world-s-most-enigmatic-birds-jennifer-ackerman/18824294?ean=9780593298886">What an Owl Knows: The New Science of the World&rsquo;s Most Enigmatic Birds</a> by Jennifer Ackerman</p>
<p><a href="https://bookshop.org/p/books/wild-problems-a-guide-to-the-decisions-that-define-us-russ-roberts/17400461?ean=9780593418253">Wild Problems: A Guide to the Decisions That Define Us</a> by Russ Roberts</p>
<p><a href="https://bookshop.org/p/books/world-of-wonders-in-praise-of-fireflies-whale-sharks-and-other-astonishments-aimee-nezhukumatathil/13571642?ean=9781571313652">World of Wonders: In Praise of Fireflies, Whale Sharks, and Other Astonishments</a> by Aimee Nezhukumatathil</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>And writing, hence why this post on my 2023 reading list is coming out nearly 11 months into 2024&hellip;&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Shortridge Makes Sense of the 2024 Verizon DBIR</title>
            <link>https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2024/</link>
            <pubDate>Wed, 01 May 2024 05:59:13 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2024/</guid>
            <description>
Every year Verizon publishes the best hope we have of scouring real world evidence of attacks and their impacts in the Verizon Data Breach Investigations Report (DBIR). I, the lucky daedric prince of chaos that I am, was privileged to receive an advance copy of the report last Sunday to ponder and prepare my thoughts (and by that I mean scramble to finish this in two witching hours).
What follows are my thoughts on the Verizon 2024 DBIR, attempting to make sense of the delectable data within and share this sensemaking with the community.
In some cases, I will cast a skeptical eye on their commentary (a meta commentary, if you will). When appropriate, I provoke the cybersecurity industry writ large. For all the ballyhooing we do about users being “irrational,” much of our security “strategy” seems quite irrational in light of this evidence…
Your threat model is still money crimes
Yet again, to my absolute lack of surprise (but somehow to the surprise of many in infosec), cybersecurity is largely about money crimes. Financial motives remain the driver of ~93% of all breaches. Yes, espionage is up from 5% last year to 7%, but highly concentrated in Public Administration (i.e. government) breaches. And, as the report mentions, “espionage” also includes things like sales bros downloading their customer contact information to take to their new employer.
Really think about this and internalize it: out of 5,632 breaches and a bias towards government victims driving this bigger sample size, they found that espionage is still only 7% of breaches.
You are not a spider caught in a sexy spy web. You do not need to spin up “war rooms” or “task forces” for geopolitical events. The real APTs – the ones motivated by geopolitical power – do not think about you at all1.
Anyway, end-user as “threat actor” rose from 11% to 26%, but mostly due to “misdelivery” errors – sending something to the wrong recipient – that are subject to mandatory disclosure (so don’t interpret this as an “insider threat” surge).
So, if you combine “employee did a whoopsie” with “organized crime,” you get 90% of your “threat actors.” The “Other” actor type is just under 10% (based on eyeballing the chart) and includes things like activists, auditors, competitors, customers, force majeures, acquaintances, and terrorists. And then, very much last and least prevalent, we have nation-state or state-affiliated threat actors.
I know that it sounds super duper cool to be important enough for an intelligence agency to target your organization, because how elegantly does that solve one’s existential woes2, our natural instinct to yearn for meaning in our life and work – a deus ex hostis, if you will – but you really do look very silly and, in light of this persistent evidence, irrational to your organization if you LARP as a counterintelligence professional.
MOVEit, pwn it, steal it, makes us poorer, sadder, weaker, bitter
They mention MOVEit precisely 25 times3 in the 2024 DBIR for good reason: they found MOVEit implicated in 1,567 breach notifications. What fascinates me is that this dwarfs Log4Shell’s impact last year; it was mentioned in only 0.4% of incidents (which are more numerous than breaches in the DBIR’s parlance).
Why? Why did MOVEit foment more real-world impact? Why did the industry clamor over Log4Shell more, with a longer tail of hype and FUD? I have theories.
The first theory returns to my heuristic for software engineering teams about whether to care about a vulnerability: the vulnerability must be scalable and easy-to-use, in the specific sense that it does not require too many steps for attackers to reach their goal.
MOVEit is quite literally designed to store sensitive data in a uniform manner, making it trivial to ransom. With Log4Shell, okay, you get a shell, but how do you transmogrify that into money? You must tailor your exploit to whatever system you’re targeting, then you land on a host with who knows what – possibly nothing of value – so now you must pivot, etc. etc.
With MOVEit, you can download all the data and documents within and hope that some will contain sensitive stuff that the victim does not want exposed. Thus, 0day in MOVEit checks the boxes on both scalability and ease of use in a way Log4Shell did not.
The second theory is that MOVEit is the IT department and Log4Shell is software engineering. In theory, software designed for end users (IT) should be easier to update than software designed as a library to use in other systems; this is the typical sOftWaRe SuPpLy ChAiN fuddery.
However, I believe this shows that engineering teams can actually update pretty quickly when it matters, especially compared to end users. They have automated tooling that suggests when something is out of date and informs them when the associated update fixes vulns. (I have more thoughts on this that are mostly summarized in our RFI response on OSS).
The third theory is really more of a list of contributing factors that drove MOVEit’s impact:
MOVEit’s customers are primarily in Education and Healthcare, sectors not known for their ability to effectively operationalize software
On-prem deployment model (SaaS makes it easier to update on behalf of customers)
MOVEit’s fundamental purpose is to handle sensitive data, so attackers can expect that every system in use contains some sensitive data
Uniformity of system deployment, which makes attacks on the software straightforward to automate
It’s internet-accessible, making it easy for attackers to discover potential victims and perform their spooky action at a distance
Compliance software doesn’t have to be good; compliance requirements force customers to buy it anyway
In which the DBIR is alarmed that people skim emails
On page 9, the 2024 Verizon DBIR states, “This leads to an alarming finding: The median time for users to fall for phishing emails is less than 60 seconds.”
I do not understand what’s alarming about this; it feels very unsurprising. Who is spending more than a minute scrutinizing emails?
Because I like my salt with a side of science, I looked for evidence and the answer is that few people are spending whole ass minutes on emails. In 2023, Litmus found humans spend 9 seconds on average looking at an email. 30% on average receive less than two seconds of attention, while only 29% receive more than eight seconds. Given Litmus is involved in delivering the type of email we could generously deem consensual spam, we could argue that people spend a little more time looking at corporate emails… but likely not by much.
Given this, what is alarming about humans spending a median of 21 seconds to click a malicious link after opening an email? Or even taking a median of 28 seconds for the victim to enter their data? Whether we have hot girl shit to do or corporate drone shit, our universal experience is that we want to finish email as efficiently as we can so we can move onto the stuff that gets us promoted and paid.
But this is cybersecurity with its rampant projection bias and intolerance for preferences other than enshrining security as the One True Goal. So, let’s appeal to rationality instead… because clicking the link really is rational.
My extremely lazy math I posted awhile ago on Mastodon is as follows:
If we assume the average corporate worker receives ~100 biz-related emails per day during the work week4, that’s approximately 26k per year. Let’s assume 50% have links.
If they click on 1 malicious email link in a year, that’s a ~0.008% “fail” rate to them. Even if they click on 100 malicious links, that’s only ~0.8%.
It’s entirely rational to click the links. As I memed many years ago, we have utterly failed as an industry if our hope rests on humans not clicking things on the thing clicking machine:
Spending even 1 minute scrutinizing each link-bearing email adds up to 217 hours per year, which neither employees nor their employers will appreciate. But also, the DBIR’s framing that 20 seconds and change to click the link is “alarming” suggests that even a minute might not be sufficient.
If we consider the time it would take to truly verify everything is legit in an email — especially for non-nerds — it’s probably more like 5 minutes. That results in over 1,000 hours per year scrutinizing emails on average (again, this is extremely lazy math).
Idk about you, dear reader, but spending 45 days of my year scrutinizing emails feels so uncivilized as to border on torture. “Careless”5 from the security normative perspective sounds a lot more like “rational” (or even “self respect”) from another.
How to solve this? Other than the security industry finally prioritizing UX (lol), what can we do?
Well, imagine if humans of corporate received only 10 emails in their inbox per day. Might they not naturally scrutinize them more? After all, their inbox would no longer resemble a bleak grave where, just as they carve through the soil to glimpse the gentle dawn light – perhaps even a bit of sparkly dew trembling on the satin petals of a nearby tulip – the shovel clinks against headstone and dumps another fetid clump on their exhausted form.
At 10 emails per day, it is reasonable to ask employees to spend a minute being extra sure about email. I see no practical path to whittling email into this covetable state of affairs (other than more aggressive filtering to just the VIE – very important email – which might bruise some egos) (Slack or Discord, and their peers (other than Teams), might also help).
But as long as inboxes fester like a summer picnic ruined by a swarm of wasps and mosquitoes, incessant buzzes vying for our attention, then we should not demand humans immolate even more of their lives to make up for our industry’s decades-long failure to invest in UX.
Attackers are not using GenAI
I don’t really have anything to add to the 2024 DBIR’s commentary on AI except to praise it for its Real Housewives-level shade throwing.
First, in terms of evidence, they found that Gen AI didn’t even merit 100 mentions in underground crime forums over the past two years combined. When Gen AI is mentioned, it’s for gross reasons:
Most of the mentions involved the selling of accounts to commercial GenAI offerings or tools for AI generation of non-consensual pornography.
That distressing factoid aside, there are two footnotes that spark joy that I wish to highlight for you all, too.
“Despite pressure from a vocal minority of the cybersecurity community^17” –&gt; “Footnote 17: Strange spelling for ‘unhinged marketing hype’”
and
“However, it is still a very timely topic and one that has been occupying the minds of technology and cybersecurity executives worldwide ^19” –&gt; “Footnote 19: Just like real impactful technologies such as blockchain and the metaverse.”
But there isn’t just snark here, they make an excellent point – one I’ve made as well – about how little need there is for attackers to even adopt GenAI in their operations:
One can argue, given our Social Engineering pattern numbers from the past few years, that Phishing or Pretexting attacks don’t need to be more sophisticated to be successful against their targets, as we have seen with the growth of BEC-like attacks (page 17)
Attackers do not let “good enough” be the enemy of perfect and care about results, not hype; perhaps defenders could learn from them.
Bring a bucket and a ROP for this WAP
The tl;dr here is that vulns are back babyyyyyy. Exploitation almost tripled (up 180%) from last year, but still pales in comparison “ways-in” wise (i.e. initial access) to credentials or phishing; it’s 10% of initial actions in breaches. Breaking that down a bit more, attackers predominantly exploited vulns in web apps, followed by desktop sharing software and VPNs.
Yet, the chart above suggests credential stuffing in a web app is still the primary way in for anything that isn’t a “whoopsie” kind of breach. Cred stuffing is certainly not as scintillating as vuln exploitation, but we shouldn’t sleep on it, either (it would be great to see the loss data associated with cred stuffing vs. exploitation, but alas). Either way, web apps remain a favorite for attackers.
On the topic of vulns more generally, they insist that “It’s much easier to harden a system than it is to harden an individual,” which suggests they are not familiar with Ao3 users.
The 2024 DBIR also veers into software supply chain territory and… well, here are my thoughts. It’s perhaps worthwhile to create a new designation for incidents and breaches involving third-party software libraries – even though I find “supply chain interconnection” odd phrasing – especially since it grew from 9% last year to 15% presence in breaches (68% growth year-over-year, so close!).
But their commentary around these “supply chain interconnections” leaves much to be desired. They say:
We welcome feedback and suggestions of alternative angles, and we believe the only way through it is to find ways to hold repeat offenders accountable and reward resilient software and services with our business.
Let’s unpack that. For one, this is hardly actionable. How are we to know what software is resilient? I quite literally wrote a book on software resilience and read academic research across every possible complex systems domain regarding resilience for fun and I can tell you that none of them know how to directly measure resilience yet. Not volcanic plumbing systems, not forest ecology, not food supply chains nor neurology. Sure, we have hunches like temporal autocorrelation or functional diversity, but without this turning into a whole diatribe, suffice to say that we can barely verify that our security controls work as expected, let alone delve into those more complicated endeavors.
Next, it’s missing all sorts of stuff that we can do to minimize the impact of software vulns that aren’t “just don’t write vulns, duh.” I mean this somewhat literally, for the report says elsewhere:
If their preference for file transfer platforms continues, this should serve as a caution for those vendors to check their code very closely for common vulnerabilities.
I don’t think they were joking? But in any case, yes there are numerous other ways to frame this problem and a dazzling variety of solutions around vuln exploitation that I covered with a co-conspirator at length in two RFI responses on Security by Design as well as OSS Security so please read those if you would like a deeper dive.
Briefly, I’ll offer a few provoking questions: What if the same vulns were present, but couldn’t be chained together, or got the attacker much less access? Would there still be ~1,500 breaches reported? I suspect not, and that leads to the following jumble of points:
punishing devs (including the horrible DX of many vuln scanners) is far from our holy grail
isolation (both spatial and temporal) – and modular architecture more generally – really should get hyped more to address both the “couldn’t be chained together” and “got the attacker much less access” goals… but alas, VCs would rather fund AI holograms that shame devs for mistakes and industry thought leaders can’t sell out by promoting it
we do not learn from other industries; the whole point of safety in other domains is that things still work even when there are flaws, like the plane still flying with a crack in its wing, because they focus on minimizing impact
look at Rust’s CVE list (to really lean into my crabbiness); Rust dispenses a CVE for anything that could possibly violate the safety of its abstract machine, even if misused. If we did that with C and C&#43;&#43;, we would see no end to CVEs; the design behavior of addition, subtraction, function calls, etc. is as vulnerable when misused (and they’re very easy to misuse); the point is: Rust tries to make certain classes of vulnerabilities impossible to write and considers it a security bug in Rust itself whenever it’s possible to write one without using unsafe (even if it’s academic and confined to sample code in a pedantic GitHub issue); C&#43;&#43;, in contrast, considers this the user’s problem
anyway, the point is, there are lots of ways to disrupt attackers, many of which we can borrow from the software engineering community (who often invented these clever things to minimize the impact of performance failures); I’ve written a ton about this elsewhere in books and blogs and papers, so let’s keep moving
My final gripe related to the software vuln commentary regards their section on malicious packages (page 45). Frankly, it feels misleading. Their implication is that malicious packages have an impact on breaches, but they do not cite evidence in favor of that interpretation. If anything, the data seems mixed in; the rest of the report (rightly) focuses on impact, whereas this section is about potential.
Without that hard evidence connecting potential to real impact, it’s disappointing to see the 2024 DBIR lament how easy it is to install libraries. The benefits of software should be accessible to all. It should be easy and not gatekept by tower-skulking mages with power they hoard from the masses.
Because that same ease of installation begets the ease of upgrading, which, as stated earlier (and last year), is a key contributing factor to why Log4Shell wasn’t as horrible as we feared.
Security teams Ransomware with corporate budgets
In the 2024 DBIR, the team combines Ransomware and Extortion breaches and I agree with that decision. As they note, it’s often the same attackers conducting these operations and choosing their own adventure based on the optimal monetization of whatever access they gain. I like to abbreviate ransomware &#43; extortion as REx, kind of like JLo6.
REx made up ~62% of “action varieties” in financially-motivated breaches, while pretexting (like business email compromise) was 24%. One especially interesting finding to me in the DBIR was that, when they removed Ransomware groups from the “system intrusion” dataset (the types of exploit pew pew compromises we glamorize), the actor split is 44% criminal and 40% state-affiliated actors. I interpret this as evidence that criminals overwhelmingly pivoted to REx – for exploit pew pew operations, at least – away from other monetization methods, like the now-uncool payment skimming.
The 2024 Verizon DBIR reports that the median loss from ransomware and extortion breaches is up to $46,000 with a loss range of $3 to $1,141,467 for 95% of cases. The $3 lower bound is up from $1 last year, which follows NYC pizza slice inflation pretty well.
They also leveraged data from a ransomware negotiator – which means there’s inherent bias in the data towards well-resourced organizations – and found that the median ratio of the initially requested ransom to company revenue was 1.34% (between 0.13% and 8.30% in 80% of cases); presumably the final median ratio was lower after they negotiated the payment down.
But, only 4% of ransomware &#43; extortion complaints had any actual loss, down from the already-meager 7% last year. Specifically, 96% of ransomware incidents resulted in no direct loss (there might be subsequent costs associated with investigation, etc., but the DBIR is not privy to those).
Okay, so, let’s do some more bad math: if we have a meager 4% probability of loss and multiply it by the 1.34% revenue figure, we get a preposterously approximate budget of 0.05% of revenue for ransomware protection (to break even on the loss).
For a company guzzling $100mm in revenue, the median spend on ransomware should therefore be ~$54,000. If you’re buying EDR, that gets you something like ~300 licenses at best, which is way too few for most organizations making that much money. More likely, these organizations have at least 1k endpoints, which would be more like $185k per year for EDR… and this ignores the cost of backups and other things that are very useful to minimize the impact of ransomware.
To be clear, I’m not saying we should “let the bad guys win” or all the other pejoratives security pros have thrown at me over the years when I talk about the economics of cyber stuff7. The reality is that there is a huge trust problem in many companies between the cybersecurity org and the rest of the org, and talking about how the sky is falling and how it’s going to cost us a blahajillion dollars will not help your cause.
It’s far better to present this kind of evidence with a range of options like (assuming an example revenue of $100mm):
We start with the assumption of 4% probability of any loss
We can assume either the 20% confidence interval, median, or 80% confidence interval
These options mean we either invest $5,200; $53,600; or $332,000 on ransomware protection
If we want to really cover tail risk, we can look at the 95% interval, which means we’d invest $999,600 on the problem
Then list out what mitigations you can get at those price points, starting with backup costs or implementing isolation / modularity, then moving to fancier things like EDR
Now the organization can make an informed choice! Probably even they will agree spending $5,200 is absurd, especially when we’d likely want backups for other use cases, anyway. And they’ll likely reject the nearly $1mm option, too. But in between is a lot of wiggle room and now you look like a well-informed leader rather than Don Quixote.
Conclusion
I unironically relish the Verizon DBIR, and 2024 is no exception. We are an industry starved for data and they throw their whole being into trying to untangle the mess of incident and breach reports into something informative and consumable by the community. You can read it here.
Yes, I quibble with some things as always, but this is really a great input for anyone – security teams or engineering teams alike – to inform their investments. If you’re a vendor who has information that could be useful, please consider contributing to the report8 as a way to level up everyone’s understanding of what the everliving fuck is actually going on in cybersecurity.
&lt;3 thanks AP
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
Unless you work in semiconductor fabrication or nuclear energy or other things that are war-making things. ↩︎
Speaking of existentialism, page 16 accidentally stumbles into the divine omnipotence problem. It includes God (‘acts of’) in “external entities” and says, “Typically, no trust or privilege is implied for external entities.” Big if true. ↩︎
At least, they claim in footnote 8 that they mention “MOVEit” 25 times, I did not verify this claim, lazy journalism, I know. ↩︎
As of 2019, HBR reports the average was 120 emails per day; I suspect it’s higher now post-pandemic. Possibly some of you read my estimate and thought, “only 100 emails per day?! how lucky!” We all deserve more than this. ↩︎
I’ve determined from this year’s DBIR that I really despise VERIS. ↩︎
Have we tried using olive oil to fix all our cybersecurity problems??? (extremely niche joke for the smallest venn diagram of skincare and security enthusiasts) ↩︎
Having people angry at me about estimating a price on things we’re not “supposed to” put a price on (but humans and their societies still do, tacitly) makes me a real economist. ↩︎
No, Verizon did not pay me to say this; I peg my “shill” fee to Chanel flap bag prices and those are rising faster than inflation. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/2024-dbir/dbir-kitty.png" alt="A picture of a confused looking fluffy kitty holding dollar bills while more dollar bills rain around them. Sitting alongside this adorable floof gleams a pile of silver coins that sound like they&amp;rsquo;d clink against the web app servers on which they stand. You can almost hear the web app server fans whirring and sighing, especially given they are somehow levitating in ethereal clouds. In the background of the scene, diamond threads rain down; perhaps they are glowing fiber cables descending like meteors around our fluffy fiend with the thousand yard stare."></p>
<p>Every year Verizon publishes the best hope we have of scouring real world evidence of attacks and their impacts in the <a href="https://www.verizon.com/business/resources/reports/dbir/">Verizon Data Breach Investigations Report (DBIR)</a>. I, the lucky daedric prince of chaos that I am, was privileged to receive an advance copy of the report last Sunday to ponder and prepare my thoughts (and by that I mean scramble to finish this in two witching hours).</p>
<p>What follows are my thoughts on the Verizon 2024 DBIR, attempting to make sense of the delectable data within and share this sensemaking with the community.</p>
<p>In some cases, I will cast a skeptical eye on their commentary (a meta commentary, if you will). When appropriate, I provoke the cybersecurity industry writ large. For all the ballyhooing we do about users being &ldquo;irrational,&rdquo; much of our security &ldquo;strategy&rdquo; seems quite irrational in light of this evidence&hellip;</p>
<h1 id="your-threat-model-is-still-money-crimes">Your threat model is still money crimes</h1>
<p>Yet again, to my absolute lack of surprise (but somehow to the surprise of many in infosec), cybersecurity is largely about money crimes. Financial motives remain the driver of ~93% of all breaches. Yes, espionage is up from 5% last year to 7%, but highly concentrated in Public Administration (i.e. government) breaches. And, as the report mentions, “espionage” also includes things like sales bros downloading their customer contact information to take to their new employer.</p>
<p><img src="/blog/img/2024-dbir/financial-motive.png" alt="A bar chart showing threat actor motives in breaches, with a sample size of 5,632."></p>
<p>Really think about this and internalize it: out of <em>5,632 breaches</em> and a bias towards <em>government victims</em> driving this bigger sample size, they found that espionage is <em>still</em> only 7% of breaches.</p>
<p>You are not a spider caught in a sexy spy web. You do not need to spin up “war rooms” or “task forces” for geopolitical events. The real APTs – the ones motivated by geopolitical power – do not think about you at all<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>Anyway, end-user as “threat actor” rose from 11% to 26%, but mostly due to &ldquo;misdelivery&rdquo; errors – sending something to the wrong recipient – that are subject to mandatory disclosure (so don&rsquo;t interpret this as an “insider threat” surge).</p>
<p>So, if you combine “employee did a whoopsie” with “organized crime,” you get 90% of your &ldquo;threat actors.&rdquo; The &ldquo;Other&rdquo; actor type is just under 10% (based on eyeballing the chart) and includes things like activists, auditors, competitors, customers, force majeures, acquaintances, and terrorists. And then, very much last and least prevalent, we have nation-state or state-affiliated threat actors.</p>
<p><img src="/blog/img/2024-dbir/threat-actor-varieties.png" alt="A bar chart showing threat actor varieties in breaches, with a sample size of 7,921."></p>
<p>I know that it sounds super duper cool to be important enough for an intelligence agency to target your organization, because how elegantly does that solve one&rsquo;s existential woes<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, our natural instinct to yearn for meaning in our life and work – a <em>deus ex hostis</em>, if you will – but you really do look very silly and, in light of this persistent evidence, irrational to your organization if you LARP as a counterintelligence professional.</p>
<h1 id="moveit-pwn-it-steal-it-makes-us-poorer-sadder-weaker-bitter">MOVEit, pwn it, steal it, makes us poorer, sadder, weaker, bitter</h1>
<p>They mention MOVEit precisely 25 times<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> in the 2024 DBIR for good reason: they found MOVEit implicated in 1,567 breach notifications. What fascinates me is that this <a href="https://kellyshortridge.com/blog/posts/kellys-kommentary-on-verizon-dbir-2023/">dwarfs Log4Shell&rsquo;s impact</a> last year; it was mentioned in only 0.4% of <em>incidents</em> (which are more numerous than breaches in the DBIR&rsquo;s parlance).</p>
<p>Why? Why did MOVEit foment more real-world impact? Why did the industry clamor over Log4Shell more, with a longer tail of hype and FUD? I have theories.</p>
<p>The first theory returns to my heuristic for software engineering teams about whether to care about a vulnerability: the vulnerability must be scalable and easy-to-use, in the specific sense that it does not require too many steps for attackers to reach their goal.</p>
<p>MOVEit is quite literally designed to store sensitive data in a uniform manner, making it trivial to ransom. With Log4Shell, okay, you get a shell, but how do you transmogrify that into money? You must tailor your exploit to whatever system you&rsquo;re targeting, then you land on a host with who knows what &ndash; possibly nothing of value &ndash; so now you must pivot, etc. etc.</p>
<p>With MOVEit, you can download all the data and documents within and hope that some will contain sensitive stuff that the victim does not want exposed. Thus, 0day in MOVEit checks the boxes on both scalability and ease of use in a way Log4Shell did not.</p>
<p>The second theory is that MOVEit is the IT department and Log4Shell is software engineering. In theory, software designed for end users (IT) should be easier to update than software designed as a library to use in other systems; this is the typical sOftWaRe SuPpLy ChAiN fuddery.</p>
<p>However, I believe this shows that engineering teams can actually update pretty quickly when it matters, especially compared to end users. They have automated tooling that suggests when something is out of date and informs them when the associated update fixes vulns. (I have more thoughts on this that are mostly summarized in <a href="https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/">our RFI response on OSS</a>).</p>
<p>The third theory is really more of a list of contributing factors that drove MOVEit&rsquo;s impact:</p>
<ul>
<li>MOVEit&rsquo;s customers are primarily in Education and Healthcare, sectors not known for their ability to effectively operationalize software</li>
<li>On-prem deployment model (<a href="https://kellyshortridge.com/blog/posts/rfi-secure-by-design-response/">SaaS makes it easier to update on behalf of customers</a>)</li>
<li>MOVEit&rsquo;s fundamental purpose is to handle sensitive data, so attackers can expect that every system in use contains some sensitive data</li>
<li>Uniformity of system deployment, which makes attacks on the software straightforward to automate</li>
<li>It&rsquo;s internet-accessible, making it easy for attackers to discover potential victims and perform their spooky action at a distance</li>
<li>Compliance software doesn&rsquo;t have to be good; compliance requirements force customers to buy it anyway</li>
</ul>
<p><img src="/blog/img/2024-dbir/MOVEit-industries.png" alt="A chart showing MOVEit&amp;rsquo;s victims by industry"></p>
<h1 id="in-which-the-dbir-is-alarmed-that-people-skim-emails">In which the DBIR is alarmed that people skim emails</h1>
<p>On page 9, the 2024 Verizon DBIR states, &ldquo;This leads to an alarming finding: The median time for users to fall for phishing emails is less than 60 seconds.&rdquo;</p>
<p>I do not understand what&rsquo;s alarming about this; it feels very unsurprising. Who is spending more than a minute scrutinizing emails?</p>
<p>Because I like my salt with a side of science, I looked for evidence and the answer is that few people are spending whole ass minutes on emails. In 2023, <a href="https://www.digitalinformationworld.com/2023/01/people-spend-33-less-time-reading.html">Litmus found</a> humans spend <em>9 seconds</em> on average looking at an email. 30% on average receive less than two seconds of attention, while only 29% receive more than eight seconds. Given Litmus is involved in delivering the type of email we could generously deem consensual spam, we could argue that people spend a little more time looking at corporate emails&hellip; but likely not by much.</p>
<p>Given this, what is alarming about humans spending a median of 21 seconds to click a malicious link after opening an email? Or even taking a median of 28 seconds for the victim to enter their data? Whether we have hot girl shit to do or corporate drone shit, our universal experience is that we want to finish email as efficiently as we can so we can move onto the stuff that gets us promoted and paid.</p>
<p>But this is cybersecurity with its rampant projection bias and intolerance for preferences other than enshrining security as the One True Goal. So, let&rsquo;s appeal to rationality instead&hellip; because clicking the link really is rational.</p>
<p>My extremely lazy math <a href="https://hachyderm.io/@shortridge/111920107189325231">I posted awhile ago on Mastodon</a> is as follows:</p>
<p>If we assume the average corporate worker receives ~100 biz-related emails per day during the work week<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, that’s approximately 26k per year. Let’s assume 50% have links.</p>
<p>If they click on 1 malicious email link in a year, that’s a ~0.008% “fail” rate to them. Even if they click on 100 malicious links, that’s only ~0.8%.</p>
<p>It’s entirely rational to click the links. As I <a href="https://twitter.com/swagitda_/status/1451203420673740800?lang=en">memed many years ago</a>, we have utterly failed as an industry if our hope rests on humans not clicking things on the thing clicking machine:</p>
<p><img src="/blog/img/2024-dbir/thing-clicking-machine.jpeg" alt="The horse sketch meme adapted by yours truly to illustrate the sad intellectual decline of the cybersecurity industry. The well-drawn end of the horse starts with the labels multilevel security, trusting trust, and formal methods. As the drawing gets progressively worse, the labels are firewalls, threat intelligence, and, once we reach the level of stick figure, the labels are machine learning for anomaly detection, and “prevent people from clicking things on the thing-clicking machine.&quot;"></p>
<p>Spending even 1 minute scrutinizing each link-bearing email adds up to 217 hours per year, which neither employees nor their employers will appreciate. But also, the DBIR&rsquo;s framing that 20 seconds and change to click the link is &ldquo;alarming&rdquo; suggests that even a minute might not be sufficient.</p>
<p>If we consider the time it would take to truly verify everything is legit in an email — especially for non-nerds — it’s probably more like 5 minutes. That results in over 1,000 hours per year scrutinizing emails on average (again, this is extremely lazy math).</p>
<p>Idk about you, dear reader, but spending 45 days of my year scrutinizing emails feels so uncivilized as to border on torture. “Careless”<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> from the security normative perspective sounds a lot more like “rational” (or even &ldquo;self respect&rdquo;) from another.</p>
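<p>(For the spreadsheet-averse, here&rsquo;s a minimal Python sketch of that lazy math; the email volume, link share, and scrutiny times are the same hand-wavy assumptions as above, not measurements.)</p>
<pre><code class="language-python"># Lazy phishing math: reproduces the ballpark figures above.
# Every input here is an assumption, not data.
EMAILS_PER_DAY = 100      # assumed corporate email volume
WORK_DAYS = 260           # ~52 weeks x 5 days
LINK_SHARE = 0.5          # assume half of emails carry links

emails_per_year = EMAILS_PER_DAY * WORK_DAYS        # ~26,000
emails_with_links = emails_per_year * LINK_SHARE    # ~13,000

# "Fail" rate if an employee clicks 1 (or even 100) malicious links a year
for clicks in (1, 100):
    print(f"{clicks} bad click(s) -> {clicks / emails_with_links:.3%} fail rate")

# Time cost of scrutinizing each link-bearing email for 1 or 5 minutes
for minutes in (1, 5):
    hours = emails_with_links * minutes / 60
    print(f"{minutes} min per linked email -> ~{hours:,.0f} hours (~{hours / 24:.0f} days) per year")
</code></pre>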
<p>How to solve this? Other than the security industry finally prioritizing UX (lol), what can we do?</p>
<p>Well, imagine if humans of corporate received only 10 emails in their inbox per day. Might they not naturally scrutinize them more? After all, their inbox would no longer resemble a bleak grave where, just as they carve through the soil to glimpse the gentle dawn light &ndash; perhaps even a bit of sparkly dew trembling on the satin petals of a nearby tulip &ndash; the shovel clinks against headstone and dumps another fetid clump on their exhausted form.</p>
<p>At 10 emails per day, it is reasonable to ask employees to spend a minute being extra sure about email. I see no practical path to whittling email into this covetable state of affairs (other than more aggressive filtering to just the VIE &ndash; very important email &ndash; which might bruise some egos) (Slack or Discord, and their peers (other than Teams), might also help).</p>
<p>But as long as inboxes fester like a summer picnic ruined by a swarm of wasps and mosquitoes, incessant buzzes vying for our attention, then we should not demand humans immolate even more of their lives to make up for our industry&rsquo;s decades-long failure to invest in UX.</p>
<h1 id="attackers-are-not-using-genai">Attackers are not using GenAI</h1>
<p>I don&rsquo;t really have anything to add to the 2024 DBIR&rsquo;s commentary on AI except to praise it for its Real Housewives-level shade throwing.</p>
<p>First, in terms of evidence, they found that Gen AI didn&rsquo;t even merit 100 mentions in underground crime forums over the past two years combined. When Gen AI <em>is</em> mentioned, it&rsquo;s for gross reasons:</p>
<blockquote>
<p>Most of the mentions involved the selling of accounts to commercial GenAI offerings or tools for AI generation of non-consensual pornography.</p>
</blockquote>
<p><img src="/blog/img/2024-dbir/ai-lol.png" alt="A line chart showing the cumulative sum of GenAI mentions on criminal forums, both general terms and in the context of attack types.&quot;"></p>
<p>That distressing factoid aside, there are two footnotes that spark joy that I wish to highlight for you all, too.</p>
<p>&ldquo;Despite pressure from a vocal minority of the cybersecurity community^17&rdquo; &ndash;&gt; &ldquo;Footnote 17: Strange spelling for &lsquo;unhinged marketing hype&rsquo;&rdquo;</p>
<p>and</p>
<p>&ldquo;However, it is still a very timely topic and one that has been occupying the minds of technology and cybersecurity executives worldwide ^19&rdquo; &ndash;&gt; &ldquo;Footnote 19: Just like real impactful technologies such as blockchain and the metaverse.&rdquo;</p>
<p>But there isn&rsquo;t just snark here, they make an excellent point &ndash; one I&rsquo;ve made as well &ndash; about how little need there is for attackers to even adopt GenAI in their operations:</p>
<blockquote>
<p>One can argue, given our Social Engineering pattern numbers from the past few years, that Phishing or Pretexting attacks don’t need to be more sophisticated to be successful against their targets, as we have seen with the growth of BEC-like attacks (page 17)</p>
</blockquote>
<p>Attackers do not let &ldquo;good enough&rdquo; be the enemy of perfect and care about results, not hype; perhaps defenders could learn from them.</p>
<h1 id="bring-a-bucket-and-a-rop-for-this-wap">Bring a bucket and a ROP for this WAP</h1>
<p>The tl;dr here is that vulns are back babyyyyyy. Exploitation almost tripled (up 180%) from last year, but still pales in comparison &ldquo;ways-in&rdquo; wise (i.e. initial access) to credentials or phishing; it&rsquo;s 10% of initial actions in breaches. Breaking that down a bit more, attackers predominantly exploited vulns in web apps, followed by desktop sharing software and VPNs.</p>
<p><img src="/blog/img/2024-dbir/ways-in.png" alt="A chart showing the various ways-in for breaches that didn&amp;rsquo;t involve whoopsie-daisies by well-meaning humans."></p>
<p>Yet, the chart above suggests credential stuffing in a web app is still the primary way in for anything that isn&rsquo;t a &ldquo;whoopsie&rdquo; kind of breach. Cred stuffing is certainly not as scintillating as vuln exploitation, but we shouldn&rsquo;t sleep on it, either (it would be great to see the loss data associated with cred stuffing vs. exploitation, but alas). Either way, web apps remain a favorite for attackers.</p>
<p>On the topic of vulns more generally, they insist that &ldquo;It&rsquo;s much easier to harden a system than it is to harden an individual,&rdquo; which suggests they are not familiar with Ao3 users.</p>
<p>The 2024 DBIR also veers into software supply chain territory and&hellip; well, here are my thoughts. It&rsquo;s perhaps worthwhile to create a new designation for incidents and breaches involving third-party software libraries &ndash; even though I find &ldquo;supply chain interconnection&rdquo; odd phrasing &ndash; especially since it grew from 9% last year to 15% presence in breaches (68% growth year-over-year, so close!).</p>
<p>But their commentary around these &ldquo;supply chain interconnections&rdquo; leaves much to be desired. They say:</p>
<blockquote>
<p>We welcome feedback and suggestions of alternative angles, and we believe the only way through it is to find ways to hold repeat offenders accountable and reward resilient software and services with our business.</p>
</blockquote>
<p>Let&rsquo;s unpack that. For one, this is hardly actionable. How are we to know what software is resilient? I quite literally wrote <a href="https://www.securitychaoseng.com/">a book on software resilience</a> and read academic research across every possible complex systems domain regarding resilience for fun and I can tell you that <em>none of them</em> know how to directly measure resilience yet. Not volcanic plumbing systems, not forest ecology, not food supply chains nor neurology. Sure, we have hunches like temporal autocorrelation or functional diversity, but without this turning into a whole diatribe, suffice to say that we can barely verify that our security controls work as expected, let alone delve into those more complicated endeavors.</p>
<p>Next, it&rsquo;s missing all sorts of stuff that we can do to minimize the impact of software vulns that aren&rsquo;t &ldquo;just don&rsquo;t write vulns, duh.&rdquo; I mean this somewhat literally, for the report says elsewhere:</p>
<blockquote>
<p>If their preference for file transfer platforms continues, this should serve as a caution for those vendors to check their code very closely for common vulnerabilities.</p>
</blockquote>
<p>I don&rsquo;t think they were joking? But in any case, yes there are numerous other ways to frame this problem and a dazzling variety of solutions around vuln exploitation that I covered with a co-conspirator at length in two RFI responses on <a href="https://kellyshortridge.com/blog/posts/rfi-secure-by-design-response/">Security by Design</a> as well as <a href="https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/">OSS Security</a> so please read those if you would like a deeper dive.</p>
<p>Briefly, I&rsquo;ll offer a few provoking questions: What if the same vulns were present, but couldn&rsquo;t be chained together, or got the attacker much less access? Would there still be ~1,500 breaches reported? I suspect not, and that leads to the following jumble of points:</p>
<ol>
<li>punishing devs (including the horrible DX of many vuln scanners) is far from our holy grail</li>
<li>isolation (both spatial and temporal) &ndash; and modular architecture more generally &ndash; really should get hyped more to address both the &ldquo;couldn&rsquo;t be chained together&rdquo; and &ldquo;got the attacker much less access&rdquo; goals&hellip; but alas, VCs would rather fund AI holograms that shame devs for mistakes and industry thought leaders can&rsquo;t sell out by promoting it</li>
<li>we do not learn from other industries; the whole point of safety in other domains is that things still work even when there are flaws, like the plane still flying with a crack in its wing, because they focus on <em>minimizing impact</em></li>
<li>look at Rust&rsquo;s CVE list (to really lean into my crabbiness); Rust dispenses a CVE for anything that could possibly violate the safety of its abstract machine, even if misused. If we did that with C and C++, we would see no end to CVEs; the design behavior of addition, subtraction, function calls, etc. is as vulnerable when misused (and they&rsquo;re very easy to misuse); the point is: Rust tries to make certain classes of vulnerabilities <em>impossible to write</em> and considers it a security bug in Rust itself whenever it&rsquo;s possible to write one without using unsafe (even if it&rsquo;s academic and confined to sample code in a pedantic GitHub issue); C++, in contrast, considers this the user&rsquo;s problem</li>
<li>anyway, the point is, there are lots of ways to disrupt attackers, many of which we can borrow from the software engineering community (who often invented these clever things to minimize the impact of performance failures); I&rsquo;ve written a ton about this elsewhere in books and blogs and papers, so let&rsquo;s keep moving</li>
</ol>
<p>My final gripe related to the software vuln commentary regards their section on malicious packages (page 45). Frankly, it feels misleading. Their implication is that malicious packages have an impact on breaches, but they do not cite evidence in favor of that interpretation. If anything, the data seems mixed in; the rest of the report (rightly) focuses on impact, whereas this section is about potential.</p>
<p>Without that hard evidence connecting potential to real impact, it&rsquo;s disappointing to see the 2024 DBIR lament how easy it is to install libraries. The benefits of software should be accessible to all. It <em>should</em> be easy and not gatekept by tower-skulking mages with power they hoard from the masses.</p>
<p>Because that same ease of installation begets the ease of upgrading, which, as stated earlier (and last year), is a key contributing factor to why Log4Shell wasn&rsquo;t as horrible as we feared.</p>
<h1 id="security-teams-ransomware-with-corporate-budgets">Security teams Ransomware with corporate budgets</h1>
<p>In the 2024 DBIR, the team combines Ransomware and Extortion breaches and I agree with that decision. As they note, it&rsquo;s often the same attackers conducting these operations and choosing their own adventure based on the optimal monetization of whatever access they gain. I like to abbreviate ransomware + extortion as REx, kind of like JLo<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>.</p>
<p>REx made up ~62% of &ldquo;action varieties&rdquo; in financially-motivated breaches, while pretexting (like business email compromise) was 24%. One especially interesting finding to me in the DBIR was that, when they removed Ransomware groups from the &ldquo;system intrusion&rdquo; dataset (the types of exploit pew pew compromises we glamorize), the actor split is 44% criminal and 40% state-affiliated actors. I interpret this as evidence that criminals overwhelmingly pivoted to REx &ndash; for exploit pew pew operations, at least &ndash; away from other monetization methods, like the now-uncool payment skimming.</p>
<p>The 2024 Verizon DBIR reports that the median loss from ransomware and extortion breaches is up to $46,000 with a loss range of $3 to $1,141,467 for 95% of cases. The $3 lower bound is up from $1 last year, which follows NYC pizza slice inflation pretty well.</p>
<p><img src="/blog/img/2024-dbir/ransomware-cost.png" alt="A chart showing adjusted incident cost for Ransomware"></p>
<p>They also leveraged data from a ransomware negotiator &ndash; which means there&rsquo;s inherent bias in the data towards well-resourced organizations &ndash; and found that the median ratio of the initially requested ransom to company revenue was 1.34% (between 0.13% and 8.30% in 80% of cases); presumably the <em>final</em> median ratio was lower after they negotiated the payment down.</p>
<p>But, <strong>only 4% of ransomware + extortion complaints had any actual loss</strong>, down from the already-meager 7% last year. Specifically, <strong>96% of ransomware incidents resulted in no direct loss</strong> (there might be subsequent costs associated with investigation, etc., but the DBIR is not privy to those).</p>
<p>Okay, so, let&rsquo;s do some more bad math: if we have a meager 4% probability of loss and multiply it by the 1.34% revenue figure, we get a preposterously approximate budget of 0.05% of revenue for ransomware protection (to break even on the loss).</p>
<p>For a company guzzling $100mm in revenue, the median spend on ransomware should therefore be ~$54,000. If you&rsquo;re buying EDR, that gets you something like ~300 licenses at best, which is way too few for most organizations making that much money. More likely, these organizations have at least 1k endpoints, which would be more like $185k per year for EDR&hellip; and this ignores the cost of backups and other things that are very useful to minimize the impact of ransomware.</p>
<p>To be clear, I&rsquo;m not saying we should &ldquo;let the bad guys win&rdquo; or any of the other pejoratives security pros have thrown at me over the years when I talk about the economics of cyber stuff<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>. The reality is that there is a huge trust problem in many companies between the cybersecurity org and the rest of the org, and insisting that the sky is falling and will cost us a blahajillion dollars will not help your cause.</p>
<p>It&rsquo;s far better to present this kind of evidence as a range of options, like the following (assuming an example revenue of $100mm; a quick arithmetic sketch follows the list):</p>
<ul>
<li>We start with the assumption of 4% probability of any loss</li>
<li>We can assume either the 20% confidence interval, median, or 80% confidence interval</li>
<li>These options mean we either invest $5,200; $53,600; or $332,000 on ransomware protection</li>
<li>If we want to really cover tail risk, we can look at the 95% interval, which means we&rsquo;d invest $999,600 on the problem</li>
<li>Then list out what mitigations you can get at those price points, starting with backup costs or implementing isolation / modularity, then moving to fancier things like EDR</li>
<li>Now the organization can make an informed choice! They will probably agree that spending $5,200 is absurd, especially since we&rsquo;d likely want backups for other use cases anyway. And they&rsquo;ll likely reject the nearly $1mm option, too. But in between is a lot of wiggle room, and now you look like a well-informed leader rather than Don Quixote.</li>
</ul>
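<p>To make that arithmetic concrete, here&rsquo;s a minimal sketch (in Rust, to keep with the crab theme) of the break-even math; the 4% loss probability and the ransom-to-revenue ratios come from the DBIR figures above, the ~25% tail-risk ratio is just back-derived from the ~$999,600 figure, and the $100mm revenue remains the illustrative assumption:</p>
<pre><code class="language-rust">// A minimal sketch of the break-even arithmetic above, not an official DBIR model.
// Assumptions: the 4% probability of any loss and the 0.13% / 1.34% / 8.30% ransom-to-
// revenue ratios come from the DBIR figures cited in this post; the tail-risk ratio is
// back-derived from the ~$999,600 figure above; the $100mm revenue is illustrative.
fn main() {
    let revenue: f64 = 100_000_000.0; // example company revenue (assumption)
    let p_loss: f64 = 0.04;           // only 4% of REx complaints reported any actual loss

    // Initially requested ransom as a share of revenue, per the intervals in the list above.
    let ransom_ratios = [
        ("20% interval", 0.0013),
        ("median", 0.0134),
        ("80% interval", 0.0830),
        ("95% interval (tail risk)", 0.2499), // back-derived, not a stated DBIR figure
    ];

    for (label, ratio) in ransom_ratios {
        // Expected loss = probability of any loss x ransom demand, treated here as a
        // rough break-even budget for ransomware protection at that revenue level.
        let budget = p_loss * ratio * revenue;
        println!("{label}: break-even budget of roughly ${budget:.0}");
    }
}
</code></pre>
<p>Swap in your own revenue and the mitigations actually available at each price point, and you have the skeleton of the conversation sketched in the list above.</p>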
<p><img src="/blog/img/2024-dbir/ransomware-revenue.png" alt="A chart showing adjusted incident cost for Ransomware"></p>
<h1 id="conclusion">Conclusion</h1>
<p>I unironically relish the Verizon DBIR, and 2024 is no exception. We are an industry starved for data and they throw their whole being into trying to untangle the mess of incident and breach reports into something informative and consumable by the community. You can <a href="https://www.verizon.com/business/resources/reports/dbir/">read it here</a>.</p>
<p>Yes, I quibble with some things as always, but this is really a great input for anyone &ndash; security teams or engineering teams alike &ndash; to inform their investments. If you&rsquo;re a vendor who has information that could be useful, please consider contributing to the report<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> as a way to level up everyone&rsquo;s understanding of what the everliving fuck is actually going on in cybersecurity.</p>
<p>&lt;3 thanks AP</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Unless you work in semiconductor fabrication or nuclear energy or other war-making things.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Speaking of existentialism, page 16 accidentally stumbles into <a href="https://plato.stanford.edu/entries/evil/">the divine omnipotence problem</a>. It includes God (&lsquo;acts of&rsquo;) in &ldquo;external entities&rdquo; and says, “Typically, no trust or privilege is implied for external entities.” Big if true.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>At least, they claim in footnote 8 that they mention &ldquo;MOVEit&rdquo; 25 times; I did not verify this claim. Lazy journalism, I know.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>As of 2019, <a href="https://hbr.org/2019/01/how-to-spend-way-less-time-on-email-every-day">HBR reports</a> the average was 120 emails per day; I suspect it&rsquo;s higher now post-pandemic. Possibly some of you read my estimate and thought, &ldquo;<em>only</em> 100 emails per day?! how lucky!&rdquo; We all deserve more than this.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>I&rsquo;ve determined from this year&rsquo;s DBIR that I really despise VERIS.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Have we tried using olive oil to fix all our cybersecurity problems??? (extremely niche joke for the smallest venn diagram of skincare and security enthusiasts)&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Having people angry at me about estimating a price on things we&rsquo;re not &ldquo;supposed to&rdquo; put a price on (but humans and their societies still do, tacitly) makes me a real economist.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>No, Verizon did not pay me to say this; I peg my &ldquo;shill&rdquo; fee to Chanel flap bag prices and those are rising faster than inflation.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Secure by Design RFI Response from Shortridge Sensemaking LLC</title>
            <link>https://kellyshortridge.com/blog/posts/rfi-secure-by-design-response/</link>
            <pubDate>Wed, 21 Feb 2024 08:00:24 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/rfi-secure-by-design-response/</guid>
            <description>My frequent co-conspirator, Ryan Petrich, and I submitted a response to the U.S. government’s Request for Information on Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software (CISA-2023-0027).
Secure by Design is a strategy we believe can nurture a future where technology is safe, secure, and resilient. But Secure by Design is a zeitgeisty topic that could be distorted into security theater or captured by crusty vendors to benefit themselves.
Indeed, we fear the whitepaper in question is overly focused on “forcing” organizations to prioritize security equally to business success rather than understanding what principles and practices would lead to the outcomes we need as a global software community. Many of the recommendations incentivize lip service, for software manufacturers to “prove” their commitment to security through words rather than actions.
We believe this is woefully misguided. We believe Secure by Design can align with business priorities like software velocity, developer productivity, and reliability in production. We believe in outcomes rather than outputs. So, our response enumerates principles and practices that software engineering teams can adopt without getting fired for ignoring business goals.
Similar to our prior RFI response on open source security, our commentary is exhaustive since we wanted to not only offer our expertise but enumerate as many potential opportunities for organizations to apply secure by design in practice as we could in light of only finding out about the RFI a week before it was due. In that vein, for those of you looking for, “How should my software engineering team(s) start investing in secure by design?” we suggest you read Section 1.2.1 in our response.
Our response begins with overall commentary on CISA’s whitepaper, both where we agree and, more often, where we disagree – but proposing ample alternatives along the way. After that, we address multiple question areas from the RFI ranging from economic incentives and dynamics; threat modeling; education; and more.
We are publishing our response in the spirit of transparency; you can read it at the following link: https://kellyshortridge.com/papers/CISA-2023-0027-Shortridge-Sensemaking.pdf
In the spirit of shepherding the Secure by Design movement towards the resilient future we envision, we feel privileged to submit our recommendations for CISA to consider as they navigate how to nourish Secure by Design in practice.
Note that we are submitting as Shortridge Sensemaking LLC. The views expressed in our response are not necessarily the views of our employers or any of their affiliates. The information contained herein is not intended to provide, and should not be relied upon for, investment advice (which we would hope is obvious).
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
</description>
            <atom:content type="html"><![CDATA[<p>My frequent co-conspirator, <a href="https://rpetrich.com/blog/">Ryan Petrich</a>, and I <a href="https://kellyshortridge.com/papers/CISA-2023-0027-Shortridge-Sensemaking.pdf">submitted a response</a> to the U.S. government&rsquo;s <a href="https://www.federalregister.gov/documents/2023/12/20/2023-27948/request-for-information-on-shifting-the-balance-of-cybersecurity-risk-principles-and-approaches-for">Request for Information</a> on Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software (CISA-2023-0027).</p>
<p>Secure by Design is a strategy we believe can nurture a future where technology is safe, secure, and resilient. But Secure by Design is a zeitgeisty topic that could be distorted into security theater or captured by crusty vendors to benefit themselves.</p>
<p>Indeed, we fear the whitepaper in question is overly focused on &ldquo;forcing&rdquo; organizations to prioritize security equally to business success rather than understanding what principles and practices would lead to the outcomes we need as a global software community. Many of the recommendations incentivize lip service, for software manufacturers to &ldquo;prove&rdquo; their commitment to security through words rather than actions.</p>
<p>We believe this is woefully misguided. We believe Secure by Design can align with business priorities like software velocity, developer productivity, and reliability in production. We believe in outcomes rather than outputs. So, our response enumerates principles and practices that software engineering teams can adopt without getting fired for ignoring business goals.</p>
<p>Similar to our prior RFI response on open source security, our commentary is exhaustive since we wanted to not only offer our expertise but enumerate as many potential opportunities for organizations to apply secure by design in practice as we could in light of only finding out about the RFI a week before it was due. In that vein, for those of you looking for, &ldquo;How should my software engineering team(s) start investing in secure by design?&rdquo; we suggest you read Section 1.2.1 in our response.</p>
<p>Our response begins with overall commentary on CISA&rsquo;s whitepaper, both where we agree and, more often, where we disagree &ndash; but proposing ample alternatives along the way. After that, we address multiple question areas from the RFI ranging from economic incentives and dynamics; threat modeling; education; and more.</p>
<p>We are publishing our response in the spirit of transparency; you can read it at the following link: <a href="https://kellyshortridge.com/papers/CISA-2023-0027-Shortridge-Sensemaking.pdf">https://kellyshortridge.com/papers/CISA-2023-0027-Shortridge-Sensemaking.pdf</a></p>
<p>In the spirit of shepherding the Secure by Design movement towards the resilient future we envision, we feel privileged to submit our recommendations for CISA to consider as they navigate how to nourish Secure by Design in practice.</p>
<p>Note that we are submitting as Shortridge Sensemaking LLC. The views expressed in our response are not necessarily the views of our employers or any of their affiliates. The information contained herein is not intended to provide, and should not be relied upon for, investment advice (which we would hope is obvious).</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
]]></atom:content>
        </item>
        
        <item>
            <title>The Basics of Software Resilience and Security Chaos Engineering</title>
            <link>https://kellyshortridge.com/blog/posts/security-chaos-engineering-sustaining-software-systems-resilience-cliff-notes/</link>
            <pubDate>Thu, 04 Jan 2024 10:00:49 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/security-chaos-engineering-sustaining-software-systems-resilience-cliff-notes/</guid>
            <description>The software resilience transformation I pioneered with my book — coalescing1, defining, and innovating the principles, practices, and patterns needed to pursue a comprehensive resilience strategy — is gathering momentum in mindshare. This means humans, in their general propensity towards least effort, want the cliff notes2 version of what this “software resilience” and “security chaos engineering” stuff means. This is a reasonable request, which I am happy to oblige in this post.
Below are the chapter takeaways from my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems. My hope is these summaries can serve as a cliff notes study guide on the mosaic of concepts within the tome3 — at least the basics — and make the software resilience approach more accessible to curious beings wanting to do software security better.
As a sneaky bonus, this post serves as a suitable citation for those of you who are only allowed to cite public / non-commercial sources (which my book is not):
Shortridge, Kelly. “The Basics of Software Resilience and Security Chaos Engineering.” Sensemaking by Shortridge (blog). January 4, 2024. https://kellyshortridge.com/blog/posts/security-chaos-engineering-sustaining-software-systems-resilience-cliff-notes/
tl;dr — My standard definition of Security Chaos Engineering is “a socio-technical transformation that enables the organizational ability to gracefully respond to failure and adapt to evolving conditions.” This applies to my concept of Platform Resilience Engineering, too. Really, it’s all about sustaining resilience in practice.
Resilience in Software and Systems Takeaways from Chapter 1
All of our software systems are complex. Complex systems are filled with variety, are adaptive, and are holistic in nature. Failure is when systems — or components within systems — do not operate as intended. In complex systems, failure is inevitable and happening all the time. What matters is how we prepare for it. Failure is never the result of one factor; there are multiple influencing factors working in concert. Acute and chronic stressors are factors, as are computer and human surprises. Resilience is the ability for a system to gracefully adapt its functioning in response to changing conditions so it can continue thriving. Resilience is the foundation of security chaos engineering. Security Chaos Engineering (SCE) is a set of principles and practices that help you design, build, and operate complex systems that are more resilient to attack (and other types of failures, too). The five ingredients of the “resilience potion” include understanding a system’s critical functionality; understanding its safety boundaries; observing interactions between its components across space and time; fostering feedback loops and a learning culture; and maintaining flexibility and openness to change. Resilience is a verb. Security, as a subset of resilience, is something a system does, not something a system has. SCE recognizes that a resilient system is one that performs as needed under a variety of conditions and can respond appropriately both to disturbances — like threats — as well as opportunities. Security programs are meant to help the organization anticipate new types of hazards as well as opportunities to innovate to be even better prepared for the future (whether new incidents, market conditions, or more). There are many myths about resilience, four of which we covered: resilience is conflated with robustness, the ability the “bounce back” to normal after an attack; the belief that we can and should prevent failure (which is impossible); the myth that the security of each component adds up to the security of the whole system (it does not); and that creating a “security culture” fixes the “human error” problem (it never does). SCE embraces the idea that failure is inevitable and uses it as a learning opportunity. Rather than preventing failure, we must prioritize handling failure gracefully — which better aligns with organizational goals, too. Systems-Oriented Security Takeaways from Chapter 2
If we want to protect complex systems, we can’t think in terms of components. We must infuse systems thinking in our security programs. No matter our role, we maintain some sort of “mental model” about our systems — assumptions about how a system behaves. Because our systems and their surrounding context constantly evolve, our mental models will be incomplete. Attackers take advantage of our incomplete mental models. They search for our “this will always be true” assumptions and hunt for loopholes and alternative explanations (much like lawyers). We can proactively find loopholes in our own mental models through resilience stress testing. That way, we can refine our mental models before attackers can take advantage of inaccuracies in them. Resilience stress testing is a cross-discipline practice of identifying the confluence of conditions in which failure is possible; financial markets, healthcare, ecology, biology, urban planning, disaster recovery, and many other disciplines recognize its value in achieving better responses to failure (versus risk-based testing). In software, we call resilience stress testing “chaos experimentation.” It involves injecting adverse conditions into a system to observe how the system responds and adapts. The E&amp;E Approach is a repeatable, standardized means to incrementally transform toward resilience. It involves two tiers of assessment: evaluation and experimentation. The evaluation tier is a readiness assessment that solidifies the first three resilience potion ingredients: understanding critical functions, mapping system flows to those functions, and identifying failure thresholds. The experimentation tier harnesses learning and flexibility: conducting chaos experiments to expose real system behavior in response to adverse conditions, which informs changes to improve system resilience. The “fail-safe” mindset is anchored to prevention and component-based thinking. The “safe-to-fail” mindset nurtures preparation and systems-based thinking. Fail-safe tries to stop failure from ever happening (impossible) while safe-to-fail proactively learns from failure for continuous improvement. The fail-safe mindset is a driver of the status quo cybersecurity industry’s lack of systems thinking, its fragmentation, and its futile obsession with prediction. Security Chaos Engineering (SCE) helps organizations migrate away from the security theater that abounds in traditional cybersecurity programs. Security theater is performative; it focuses on outputs rather than outcomes. Security theater punishes “bad apples” and stifles the organization’s capacity to learn; it is manual, inefficient, and siloed. Instead, SCE prioritizes measurable success outcomes, nurtures curiosity and adaptability, and supports a decentralized model for security programs. RAV Engineering (or RAVE) reflects a set of principles—repeatability, accessibility, and variability — that support resilience across the software delivery lifecycle. When an activity is repeatable, it minimizes mistakes and is easier to mentally model. Accessible security means stakeholders don’t have to be experts to achieve our goal security outcomes. Supporting variability means sustaining our capacity to respond and adapt gracefully to stressors and surprises in a reality defined by variability. Architecting and Designing for Software Resilience Takeaways from Chapter 3
Our systems are always “becoming,” an active process of change. What started as a simple system we could mental-model with ease will become complex as it grows and the context around it evolves. When architecting and designing a system, your responsibility is not unlike that of Mother Nature: to nurture your system so it may recover from incidents, adapt to surprises, and evolve to succeed over time. We — as individuals, teams, and organizations — only possess finite effort and must prioritize how we expend it. The Effort Investment Portfolio concept captures the need to balance our “effort capital” across activities to best achieve our objectives. When we allocate our Effort Investment Portfolio during design and architecture, we must consider the local context of the entire sociotechnical system and preserve possibilities for both software and humans within it to adapt and evolve over time. There are four macro failure modes for complex systems that can inform how we allocate effort when architecting and designing systems. We especially want to avoid the Danger Zone quadrant—where tight coupling and interactive complexity combine — because this is where surprising and hard-to-control failures, like cascading failures, manifest. We can invest in looser coupling to stay out of the Danger Zone. In this chapter, I covered numerous opportunities to architect and design for looser coupling; the best opportunities depend on your local context. Tight coupling is sneaky and may only be revealed during an incident; systems often inadvertently become more tightly coupled as changes are made and we excise perceived “excess.” We can use chaos experiments to expose coupling proactively and refine our design accordingly. We can also invest in linearity to stay out of the Danger Zone. We described many opportunities to architect and design for linearity, including isolation, choosing “boring” technology, and functional diversity. The right opportunities depend, again, on your local context. Scaling the sociotechnical system is where coupling and complexity especially matter. When immersed in the labyrinthine nest of teams and software interactions in larger organizations, we must tame tight coupling (by investing in looser coupling) and find opportunities to introduce linearity — or else find our forward progress crushed. Experiments can generate evidence of how our systems behave in reality so we can refine our mental models during design and architecture. If we do our jobs well, our systems will grow and therefore become impossible to mentally model on our own. We can leverage experimentation to regain confidence in our understanding of system behavior. Building and Delivering for Software Resilience Takeaways from Chapter 4
When we build and deliver software, we are implementing intentions described during design, and our mental models almost certainly differ between the two phases. This is also the phase where we possess many opportunities to adapt as our organization, business model, market, or any other pertinent context changes. Who owns application security (and resilience)? The transformation of database administration serves as a template for the shift in security needs; it migrated from a centralized, siloed gatekeeper to a decentralized paradigm where engineering teams adopt more ownership. We can similarly transform security. There are four key opportunities to support critical functionality when building and delivering software: defining system goals and guidelines (prioritizing with the “airlock” approach); performing thoughtful code reviews; choosing “boring” technology to implement a design; and standardizing “raw materials” in software (like memory safe languages). We can expand safety boundaries during this phase with a few opportunities: anticipating scale during development; automating security checks via CI/CD; standardizing patterns and tools; and performing dependency analysis and vulnerability prioritization (the latter in a quite contrary approach to status quo cybersecurity). There are four opportunities for us to observe system interactions across spacetime and make them more linear when building and delivering software and systems: adopting Configuration as Code; performing fault injection during development; crafting a thoughtful test strategy (prioritizing integration tests over unit tests to avoid “test theater”); and being especially cautious about the abstractions we create. To foster feedback loops and learning during this phase, we can implement test automation; treat documentation as an imperative (not a nice-to-have), capturing both why and when; implement distributed tracing and logging; and refine how humans interact with our processes during this phase (keeping realistic behavioral constraints in mind). To sustain resilience, we must adapt. During this phase, we can support this flexibility and willingness to change through five key opportunities: iteration to mimic evolution; modularity, a tool wielded by humanity over millennia for resilience; feature flags and dark launches for flexible change; preserving possibilities for refactoring through (programming language) typing; and pursuing the strangler fig pattern for incremental, elegant transformation. Operating and Observing for Software Resilience Takeaways from Chapter 5
Operating and observing the system is the phase where we can witness system behavior as it runs in production, which can reveal where our mental models are inaccurate. It is when we can glean valuable insights about our systems and incorporate this data into our feedback loops. Security is woven into all three key aspects of reliability that reflect user expectations: availability, performance, and correctness. Site reliability engineering (SRE) goals and security goals overlap to a significant degree, making those teams natural allies in solving reliability and resilience challenges. A key difference is that SRE understands that moving quickly is correlated with reducing the impact of failure; security must adopt this mindset too. Attackers can directly measure success and immediately receive feedback, giving them an asymmetric advantage. We must strive to replicate this for our goals too. To measure operational success, we can borrow established metrics like the DORA metrics and craft thoughtful SLOs that help us learn more about the system. Success is an active process, not a one-time achievement. We must support graceful extensibility: the capability to anticipate bottlenecks and “crunches,” learn about evolving conditions, and adapt responses to stressors and surprises as they change. We want to mimic the interactive, overlapping, and decentralized sensitivities of biological systems in our observability strategy. In particular, we want to observe system interactions across space and time. We must maintain the ability to reflect on three key questions: How well is the system adapted to its environment? What is the system adapted to? What is changing in the system’s environment? Tracking when a system is repeatedly stretching toward its limit (“thresholding”) helps us uncover the system’s boundaries of safe operation. Increasingly “laggy” recovery from disturbances in both the socio and technical parts of the system can indicate erosion of adaptive capacity. Attack observability refers to collecting information about the interaction between attackers and systems. It involves tracing attacker behavior to reveal how it looks in reality versus our mental models. Deception environments can facilitate attacker tracing, fuel a feedback loop for resilient design, and serve as an experimentation platform. A scalable system is a safer system. System signals used to measure scalability can be used as indicators of attack too; we discussed many, including autoscaling replica count, heartbeat response time, and resource consumption. Being a gatekeeper to growth is not an effective way to achieve security outcomes. Scalability forces high-friction processes and procedures to adapt to growth, which is healthy for sustaining resilience. We should apply the concept of toil, from SRE, to security. For any task that a computer can perform better than a human — like those requiring accurate repetition — we should automate it. Doing so frees up effort capital that we can expend on higher-value activities that leverage human strengths like creativity and adaptability. Responding and Recovering for Software Resilience Takeaways from Chapter 6
Incidents are like a pop quiz. To prepare for them and ensure we can respond with grace, we must practice incident response activities—and can do so through chaos experimentation. The Effort Investment Portfolio applies to incident response too. Effort expended earlier in the software delivery lifecycle will reduce the effort required when responding to incidents (this does not mean “shift left,” at least in its popularized / monetized form). Humans often feel an impulse toward action (action bias), which can reduce effectiveness during incident response. Practicing “watchful waiting” can curtail knee-jerk reactions. There is no “best practice” for all incidents. The best we can do is practice incident response activities to nurture human responders’ adaptive capabilities. Repeated practice of response activities through chaos experimentation can turn incidents from stressful, scary situations into confidence-building, problem-solving scenarios. Recovering from incidents requires adaptation, and learning is a prerequisite for this adaptation. Learning from incidents to develop memory of failure is about community, so if we blame community members for the incident, we will struggle to learn. A blameless culture helps organizations stay in a learning mindset — uncovering problems early and gaining clarity around incidents — rather than play the “blame game.” It encourages people to speak up about issues without fear of being punished for doing so. There are two contributing factors always worth discussing during incident review: relevant production pressures and system properties. Humans at the “sharp end,” who interact directly with the system, are often blamed for incidents by humans at the “blunt end,” who influence the system but interact indirectly (like administrators, policy prescribers, or system designers). The disconnect between the two can be summarized as the delta between “work-as-practiced” and “work-as-imagined.” The cybersecurity industry often (unproductively) blames users for causing failures, as evidenced by the acronym PEBKAC: problem exists between keyboard and chair. A more useful heuristic is PEBRAMM: problem exists between reality and mental model. An error represents a starting point for investigation; it is a symptom that indicates we should reevaluate design, policy, incentives, constraints, or other system properties. There are numerous biases that tempt us to blame human error during incidents, which hinders our capacity to constructively learn from and adapt to failure. With hindsight bias, we allow our present knowledge to taint our perception of past events (the “I knew it all along” effect). With outcome bias, we judge the quality of a decision based on its eventual outcomes. The just-world hypothesis refers to our preference for believing the world is an orderly, just, and consequential place. All of these biases warp our perception of reality. During incident review, use neutral practitioner questions to stay curious and intellectually honest. Neutral practitioner questions re-create the context surrounding an event and ask practitioners what actions they would take given this context. It helps sketch a portrait of local rationality: the reasonable course of action in the presence of contextual trade-offs and constrained information-processing capabilities. Platform Resilience Engineering Takeaways from Chapter 7
At the “meta-design” level, we can sustain resilience through organizational structure and practices—transforming from a siloed security program into a platform engineering model (“platform resilience engineering”). We must be aware of production pressures and how they tip sociotechnical systems toward failure. Production pressures involve the incentivization of less expensive and more efficient work, with quality (and security as its subset) as the typical sacrifice. A platform engineering approach to resilience treats security as a product with end users, as something created through a process that provides benefits to a market (with internal teams as our customers). Platform Engineering teams identify real problems, iterate on a solution, and prioritize usability to promote adoption. Resilience is a natural fit for their purview. Any product requires a long-term vision — a unifying theme for all your projects toward a defined end. The vision tells a story of what is being built and why. Treating resilience — including security — as a product starts with identifying the right user problems to tackle. To accurately define user problems, we must understand their local context. We must understand how our users make tradeoffs under pressure, maintain curiosity about the workarounds they create, and respect the limitations of their brains’ computational capacity (“cognitive load”). Security solutions become less reliable as their dependence on human behavior increases. The Ice Cream Cone Hierarchy of Safety Solutions helps us prioritize how we design security solutions, from best to least effective. Starting from the top of the cone, we can eliminate hazards by design; substitute less hazardous methods or materials; incorporate safety devices and guards; provide warning and awareness systems; and, last and least effective, apply administrative controls (like guidelines and training). There are two possible paths we can pursue when solving user problems: the control strategy or the resilience strategy. The control strategy designs security programs based on what security humans think other humans should do; it is convenient for the Security team at the expense of others’ convenience. The resilience strategy promotes and designs security based on how humans actually behave; success is when our solutions align with the reality of work-as-done. The control strategy makes users responsible for security while the resilience approach makes those designing security programs and solutions responsible for it. We should build minimum viable products (MVPs) and pursue an iterative change model informed by user feedback. We should gain consensus about our plans for solving resilience problems — from vision through to implementation of a specific solution — and ensure stakeholders understand the why behind our solutions. Success is solving a real problem in a way that delivers consistent value. To facilitate solution adoption, we must plan for migration and pave the road for our customers to adopt what we’ve created for them (hence the strategy of creating “paved roads”). We should never force solutions on other humans; if that is the only way to drive adoption, then it is a failure of our design, strategy, and communication. Measuring product success is necessary for our feedback loops, but can be tricky. If we design solutions for use by engineering teams, the SPACE framework offers numerous success criteria we can measure. 
In general, we should be curious about the factors contributing to success and failure for our internal customers. Any metrics related to how “secure” or “risky” something is, like percentage of “risk coverage,” are busywork based on measuring the (highly subjective) unmeasurable. We need to measure our program’s success—and any solutions we design as part of it—based on tangible, realistic goals. Security Chaos Experiments Takeaways from Chapter 8
Experimentation is a cycle of discovery and learning, which is what drives scientific progress. Resilience stress tests (aka security chaos experiments) are like applying the scientific method to software and systems security. Early adopters of security chaos experimentation learned three key lessons: first, it’s fine to start in nonproduction environments because you can still learn a lot; second, use past incidents as inspiration for experiments and to leverage organizational memory; third, make sure to publish and evangelize your experimental findings because expanding adoption will become your hardest challenge (the technical work is comparatively easy). To set chaos experiments up for success, especially the first time, we need to socialize the experiment with relevant stakeholders. Investing in the right messaging and framing at the beginning will reduce friction later. The next step is designing an experimental hypothesis. Hypotheses typically take the form of: “In the event of the following X condition, we are confident that our system will respond with Y.” Once we have a hypothesis, we can design our experiment so we uncover the behavior about which we want to learn. There are numerous considerations: where we conduct the experiment, how we measure success, potential impacts, fallback procedures, and more. Documenting a precise, exact experiment design specification (“spec”) is critical. Our goal with the spec is for our organization to gain a luculent understanding of why we’re conducting this experiment, when and where we’re conducting it, what it will involve, and how it will unfold. Launching an experiment is not unlike a feature release. Our preparation in socializing the experiment, designing the hypothesis, and defining the experiment specifications makes this one of the easier phases. What evidence we collect when conducting an experiment is defined by the spec; we should already know what we’re monitoring and what evidence we expect. The first step after we’ve collected evidence is confirming we collected the evidence we sought from the experiment. The second step is to analyze the data with regard to the hypothesis. Our goal is to compare our observations with our predictions — to verify and refine our mental models of the system, which informs what actions we can take to sustain its resilience to adversity. We should communicate our experimental findings through release notes. Most stakeholders don’t need lots of detail; we should synthesize and summarize our experimental insights, highlighting any action items. Once those action items are performed, we can rerun the experiment. After your first experiment, or after you run an experiment the first time, you can automate it for continuous use. Because our systems — and the reality around them — are constantly changing, we must continuously generate evidence lest it grow stale. Game days, a more manual form of conducting a security chaos experiment, can help more hesitant organizations ease into chaos experimentation. There is no end to the kinds of security chaos experiments you can conduct in your systems. In this chapter, I enumerated many applicable to production infrastructure, build pipelines, service-oriented environments, and Windows environments. Case Studies Takeaways from Chapter 9
It’s hard to capture the cliff notes for the case studies Aaron Rinehart compiled from UnitedHealth Group, Verizon, OpenDoor, Cardinal Health, Accenture Global, and Capital One. Plus, it’s the only chapter I didn’t write on my own in the book, so I feel I wouldn’t do it justice.
But, my personal takeaways from the case studies are:
Collaboration is key; to succeed in security, we must not only establish healthy communication with other teams but be open to them teaching us, too — especially platform engineering and SRE teams, who possess a wealth of experience we can leverage in our resilience journey.
The resilience / SCE transformation is not exclusive to a certain type of organization. Fortune 10, highly-regulated organizations can pursue it. So can smaller, scrappy startups. And this speaks to something I really tried to emphasize in the book: the resilience approach isn’t about revolutionizing X, Y, Z overnight and doing them perfectly; it’s about iteratively changing how you do things towards more resilient outcomes. It’s about experimentation at basically all levels, whether experimenting with new modes of collaboration with software engineering teams, conducting resilience stress tests, or trying any of the zillion strategies I described in the book. For some organizations, it’ll be easier to iteratively migrate to memory safe languages; for others, it’ll be easier to migrate to running workloads on immutable infrastructure or implementing integration testing or so many other things that can make a difference outcome-wise.
A learning culture is critical. This doesn’t mean we defenestrate all caution so it flails on the restless winter winds. It means we conduct small experiments to generate evidence and inform what we do next. It means we assume by default that most humans are just trying to do their jobs, and that means we don’t automatically blame them when something goes wrong — nor do we treat our colleagues like our adversaries (as it turns out, cyber criminals are our real adversaries4, who knew!). We look critically at how our systems and processes are designed, then brainstorm how to iteratively improve them over time. If a system is confusing or cumbersome to use, then that’s a design problem, not a “human error” problem. Nearly every case study in this chapter highlights the importance of psychological safety in the transformation towards resilience for a reason.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
The book has like nine million citations, so in no way do I think I am solely responsible for software resilience being a thing. However, to my knowledge, I am the only person to have synthesized disparate research across dozens of disciplines; filled in gaps both philosophically and practically; extended all of that with tons of original contributions from concepts to specific activities; and packaged it into an end-to-end strategy for organizations of all kinds to adopt. While I’ve often been the Agitator in the Agitator-Innovator-Orchestrator model of change, the book is my attempt at being the Innovator for the movement – conceptualizing and communicating, at great length, potential solutions to the problems wrought by traditional cybersecurity and poor software quality. ↩︎
I am purposefully using the misnomer “cliff notes” to avoid getting sued, jk hopefully ↩︎
Presenting the takeaways as digestible bullet points also makes it easy for you to copy and paste into your next LinkedIn Thought Leadership post (with attribution :) so you can dazzle your followers with your newfound knowledge as they doomscroll. ↩︎
Yes, I am aware nation state actors exist. But per the Verizon Data Breach Investigations Report year after year, nation states are like less than 5% of incidents, while money-driven cyber crimes are like 95%. Your threat model should start with cyber criminals before you assume you’re gonna get a comex- / PinkiePie-style chain of 0day thrown your way. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>The software resilience transformation I pioneered with my book — coalescing<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, defining, and innovating the principles, practices, and patterns needed to pursue a comprehensive resilience strategy — is gathering momentum in mindshare. This means humans, in their general propensity towards least effort, want the cliff notes<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> version of what this &ldquo;software resilience&rdquo; and &ldquo;security chaos engineering&rdquo; stuff means. This is a reasonable request, which I am happy to oblige in this post.</p>
<p>Below are the chapter takeaways from <a href="https://www.securitychaoseng.com/">my book</a>, <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>. My hope is these summaries can serve as a cliff notes study guide on the mosaic of concepts within the tome<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> — at least the basics — and make the software resilience approach more accessible to curious beings wanting to do software security better.</p>
<p>As a sneaky bonus, this post serves as a suitable citation for those of you who are only allowed to cite public / non-commercial sources (which my book is not):</p>
<blockquote>
<p>Shortridge, Kelly. &ldquo;The Basics of Software Resilience and Security Chaos Engineering.&rdquo; <em>Sensemaking by Shortridge</em> (blog). January 4, 2024. <a href="https://kellyshortridge.com/blog/posts/security-chaos-engineering-sustaining-software-systems-resilience-cliff-notes/">https://kellyshortridge.com/blog/posts/security-chaos-engineering-sustaining-software-systems-resilience-cliff-notes/</a></p>
</blockquote>
<hr>
<p><em>tl;dr — My standard definition of Security Chaos Engineering is &ldquo;a socio-technical transformation that enables the organizational ability to gracefully respond to failure and adapt to evolving conditions.&rdquo; This applies to my concept of Platform Resilience Engineering, too. Really, it&rsquo;s all about sustaining resilience in practice.</em></p>
<hr>
<h2 id="resilience-in-software-and-systems">Resilience in Software and Systems</h2>
<p><em>Takeaways from Chapter 1</em></p>
<ul>
<li>All of our software systems are complex. Complex systems are filled with variety, are adaptive, and are holistic in nature.</li>
<li>Failure is when systems — or components within systems — do not operate as intended. In complex systems, failure is inevitable and happening all the time. What matters is how we prepare for it.</li>
<li>Failure is never the result of one factor; there are multiple influencing factors working in concert. Acute and chronic stressors are factors, as are computer and human surprises.</li>
<li>Resilience is the ability for a system to gracefully adapt its functioning in response to changing conditions so it can continue thriving.</li>
<li>Resilience is the foundation of security chaos engineering. Security Chaos Engineering (SCE) is a set of principles and practices that help you design, build, and operate complex systems that are more resilient to attack (and other types of failures, too).</li>
<li>The five ingredients of the “resilience potion” include understanding a system’s critical functionality; understanding its safety boundaries; observing interactions between its components across space and time; fostering feedback loops and a learning culture; and maintaining flexibility and openness to change.</li>
<li>Resilience is a verb. <a href="https://kellyshortridge.com/blog/posts/what-does-the-word-security-mean/">Security</a>, as a subset of resilience, is something a system <em>does</em>, not something a system <em>has</em>.</li>
<li>SCE recognizes that a resilient system is one that performs as needed under a variety of conditions and can respond appropriately both to disturbances — like threats — as well as opportunities. Security programs are meant to help the organization anticipate new types of hazards as well as opportunities to innovate to be even better prepared for the future (whether new incidents, market conditions, or more).</li>
<li>There are many myths about resilience, four of which we covered: resilience is conflated with robustness, the ability to “bounce back” to normal after an attack; the belief that we can and should prevent failure (which is impossible); the myth that the security of each component adds up to the security of the whole system (it does not); and that creating a “security culture” fixes the “human error” problem (it never does).</li>
<li>SCE embraces the idea that failure is inevitable and uses it as a learning opportunity. Rather than preventing failure, we must prioritize handling failure gracefully — which better aligns with organizational goals, too.</li>
</ul>
<h2 id="systems-oriented-security">Systems-Oriented Security</h2>
<p><em>Takeaways from Chapter 2</em></p>
<ul>
<li>If we want to protect complex systems, we can’t think in terms of components. We must infuse systems thinking in our security programs.</li>
<li>No matter our role, we maintain some sort of “mental model” about our systems — assumptions about how a system behaves. Because our systems and their surrounding context constantly evolve, our mental models will be incomplete.</li>
<li>Attackers take advantage of our incomplete mental models. They search for our “this will always be true” assumptions and hunt for loopholes and alternative explanations (much like lawyers).</li>
<li>We can proactively find loopholes in our own mental models through resilience stress testing. That way, we can refine our mental models before attackers can take advantage of inaccuracies in them.</li>
<li>Resilience stress testing is a cross-discipline practice of identifying the confluence of conditions in which failure is possible; financial markets, healthcare, ecology, biology, urban planning, disaster recovery, and many other disciplines recognize its value in achieving better responses to failure (versus risk-based testing). In software, we call resilience stress testing &ldquo;chaos experimentation.&rdquo; It involves injecting adverse conditions into a system to observe how the system responds and adapts.</li>
<li>The E&amp;E Approach is a repeatable, standardized means to incrementally transform toward resilience. It involves two tiers of assessment: evaluation and experimentation. The evaluation tier is a readiness assessment that solidifies the first three resilience potion ingredients: understanding critical functions, mapping system flows to those functions, and identifying failure thresholds. The experimentation tier harnesses learning and flexibility: conducting chaos experiments to expose real system behavior in response to adverse conditions, which informs changes to improve system resilience.</li>
<li>The “fail-safe” mindset is anchored to prevention and component-based thinking. The “safe-to-fail” mindset nurtures preparation and systems-based thinking. Fail-safe tries to stop failure from ever happening (impossible) while safe-to-fail proactively learns from failure for continuous improvement. The fail-safe mindset is a driver of the status quo cybersecurity industry’s lack of systems thinking, its fragmentation, and its futile obsession with prediction.</li>
<li>Security Chaos Engineering (SCE) helps organizations migrate away from the security theater that abounds in traditional cybersecurity programs. Security theater is performative; it focuses on outputs rather than outcomes. Security theater punishes “bad apples” and stifles the organization’s capacity to learn; it is manual, inefficient, and siloed. Instead, SCE prioritizes measurable success outcomes, nurtures curiosity and adaptability, and supports a decentralized model for security programs.</li>
<li>RAV Engineering (or RAVE) reflects a set of principles—repeatability, accessibility, and variability — that support resilience across the software delivery lifecycle. When an activity is repeatable, it minimizes mistakes and is easier to mentally model. Accessible security means stakeholders don’t have to be experts to achieve our goal security outcomes. Supporting variability means sustaining our capacity to respond and adapt gracefully to stressors and surprises in a reality defined by variability.</li>
</ul>
<h2 id="architecting-and-designing-for-software-resilience">Architecting and Designing for Software Resilience</h2>
<p><em>Takeaways from Chapter 3</em></p>
<ul>
<li>Our systems are always “becoming,” an active process of change. What started as a simple system we could mental-model with ease will become complex as it grows and the context around it evolves.</li>
<li>When architecting and designing a system, your responsibility is not unlike that of Mother Nature: to nurture your system so it may recover from incidents, adapt to surprises, and evolve to succeed over time.</li>
<li>We — as individuals, teams, and organizations — only possess finite effort and must prioritize how we expend it. The Effort Investment Portfolio concept captures the need to balance our “effort capital” across activities to best achieve our objectives.</li>
<li>When we allocate our Effort Investment Portfolio during design and architecture, we must consider the local context of the entire sociotechnical system and preserve possibilities for both software and humans within it to adapt and evolve over time.</li>
<li>There are four macro failure modes for complex systems that can inform how we allocate effort when architecting and designing systems. We especially want to avoid the Danger Zone quadrant—where tight coupling and interactive complexity combine — because this is where surprising and hard-to-control failures, like cascading failures, manifest.</li>
<li>We can invest in looser coupling to stay out of the Danger Zone. In this chapter, I covered numerous opportunities to architect and design for looser coupling; the best opportunities depend on your local context.</li>
<li>Tight coupling is sneaky and may only be revealed during an incident; systems often inadvertently become more tightly coupled as changes are made and we excise perceived “excess.” We can use chaos experiments to expose coupling proactively and refine our design accordingly.</li>
<li>We can also invest in linearity to stay out of the Danger Zone. We described many opportunities to architect and design for linearity, including isolation, choosing “boring” technology, and functional diversity. The right opportunities depend, again, on your local context.</li>
<li>Scaling the sociotechnical system is where coupling and complexity especially matter. When immersed in the labyrinthine nest of teams and software interactions in larger organizations, we must tame tight coupling (by investing in looser coupling) and find opportunities to introduce linearity — or else find our forward progress crushed.</li>
<li>Experiments can generate evidence of how our systems behave in reality so we can refine our mental models during design and architecture. If we do our jobs well, our systems will grow and therefore become impossible to mentally model on our own. We can leverage experimentation to regain confidence in our understanding of system behavior.</li>
</ul>
<h2 id="building-and-delivering-for-software-resilience">Building and Delivering for Software Resilience</h2>
<p><em>Takeaways from Chapter 4</em></p>
<ul>
<li>When we build and deliver software, we are implementing intentions described during design, and our mental models almost certainly differ between the two phases. This is also the phase where we possess many opportunities to adapt as our organization, business model, market, or any other pertinent context changes.</li>
<li>Who owns application security (and resilience)? The transformation of database administration serves as a template for the shift security needs to make: database administration migrated from a centralized, siloed gatekeeping function to a decentralized paradigm where engineering teams adopt more ownership. We can similarly transform security.</li>
<li>There are four key opportunities to support critical functionality when building and delivering software: defining system goals and guidelines (prioritizing with the “airlock” approach); performing thoughtful code reviews; choosing “boring” technology to implement a design; and standardizing “raw materials” in software (like <a href="https://kellyshortridge.com/blog/posts/the-sux-rule-for-safer-code/">memory safe languages</a>).</li>
<li>We can expand safety boundaries during this phase with a few opportunities: anticipating scale during development; automating security checks <a href="https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/">via CI/CD</a>; standardizing patterns and tools; and performing dependency analysis and vulnerability prioritization (the latter in a quite contrary approach to status quo cybersecurity).</li>
<li>There are four opportunities for us to observe system interactions across spacetime and make them more linear when building and delivering software and systems: adopting Configuration as Code; performing fault injection during development; crafting a thoughtful test strategy (prioritizing integration tests over unit tests to avoid “test theater”); and being especially cautious about the abstractions we create.</li>
<li>To foster feedback loops and learning during this phase, we can implement test automation; treat documentation as an imperative (not a nice-to-have), capturing both <em>why</em> and <em>when</em>; implement distributed tracing and logging; and refine how humans interact with our processes during this phase (keeping realistic behavioral constraints in mind).</li>
<li>To sustain resilience, we must adapt. During this phase, we can support this flexibility and willingness to change through five key opportunities: iteration to mimic evolution; modularity, a tool wielded by humanity over millennia for resilience; feature flags and dark launches for flexible change (see the sketch after this list); preserving possibilities for refactoring through (programming language) typing; and pursuing the strangler fig pattern for incremental, elegant transformation.</li>
</ul>
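<p>To make the feature flag and dark launch idea concrete, here is a minimal sketch in Python (purely illustrative; the flag name, rollout logic, and parser stubs are hypothetical rather than from the book) of gating a new code path behind a percentage rollout so it can be switched off with a config change instead of a redeploy:</p>
<pre><code class="language-python">import hashlib

# Hypothetical flag configuration; in practice this would live in a config
# service or a Configuration as Code repo rather than a hard-coded dict.
FLAGS = {
    "new_payment_parser": {"enabled": True, "rollout_percent": 5},
}

def flag_is_on(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is enabled for this user.

    A stable hash of the user ID buckets users into 0-99, so each user
    consistently sees the same variant as the rollout percentage grows.
    """
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket in range(flag["rollout_percent"])

def legacy_parse(payload: bytes) -> dict:
    return {"parser": "legacy", "size": len(payload)}

def new_parse(payload: bytes) -> dict:
    return {"parser": "new", "size": len(payload)}

def parse_payment(payload: bytes, user_id: str) -> dict:
    # Dark launch: the new code path runs only for a small, stable slice of
    # users; everyone else stays on the proven path. Rolling back is a
    # config change, not a redeploy.
    if flag_is_on("new_payment_parser", user_id):
        return new_parse(payload)
    return legacy_parse(payload)

print(parse_payment(b"order-123", user_id="user-42"))
</code></pre>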
<h2 id="operating-and-observing-for-software-resilience">Operating and Observing for Software Resilience</h2>
<p><em>Takeaways from Chapter 5</em></p>
<ul>
<li>Operating and observing the system is the phase where we can witness system behavior as it runs in production, which can reveal where our mental models are inaccurate. It is when we can glean valuable insights about our systems and incorporate this data into our feedback loops.</li>
<li>Security is woven into all three key aspects of reliability that reflect user expectations: availability, performance, and correctness.</li>
<li>Site reliability engineering (SRE) goals and security goals overlap to a significant degree, making those teams natural allies in solving reliability and resilience challenges. A key difference is that SRE understands that moving quickly is correlated with reducing the impact of failure; security must adopt this mindset too.</li>
<li>Attackers can directly measure success and immediately receive feedback, giving them an asymmetric advantage. We must strive to replicate this for our goals too.</li>
<li>To measure operational success, we can borrow established metrics like the DORA metrics and craft thoughtful SLOs that help us learn more about the system (a toy error-budget calculation follows this list).</li>
<li>Success is an active process, not a one-time achievement. We must support <em>graceful extensibility</em>: the capability to anticipate bottlenecks and “crunches,” learn about evolving conditions, and adapt responses to stressors and surprises as they change.</li>
<li>We want to mimic the interactive, overlapping, and decentralized sensitivities of biological systems in our observability strategy. In particular, we want to observe system interactions across space and time. We must maintain the ability to reflect on three key questions: How well is the system adapted to its environment? What is the system adapted to? What is changing in the system’s environment?</li>
<li>Tracking when a system is repeatedly stretching toward its limit (“thresholding”) helps us uncover the system’s boundaries of safe operation. Increasingly “laggy” recovery from disturbances in both the <em>socio</em> and <em>technical</em> parts of the system can indicate erosion of adaptive capacity.</li>
<li><em>Attack observability</em> refers to collecting information about the interaction between attackers and systems. It involves tracing attacker behavior to reveal how it looks in reality versus our mental models. <em>Deception environments</em> can facilitate attacker tracing, fuel a feedback loop for resilient design, and serve as an experimentation platform.</li>
<li>A scalable system is a safer system. System signals used to measure scalability can be used as indicators of attack too; we discussed many, including autoscaling replica count, heartbeat response time, and resource consumption.</li>
<li>Being a gatekeeper to growth is not an effective way to achieve security outcomes. Scalability forces high-friction processes and procedures to adapt to growth, which is healthy for sustaining resilience.</li>
<li>We should apply the concept of <em>toil</em>, from SRE, to security. For any task that a computer can perform better than a human — like those requiring accurate repetition — we should automate it. Doing so frees up effort capital that we can expend on higher-value activities that leverage human strengths like creativity and adaptability.</li>
</ul>
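<p>As a toy illustration of the SLO point above (my own sketch; the numbers and function are hypothetical, not prescribed by DORA or the book), here is how an availability error-budget calculation might look:</p>
<pre><code class="language-python">def error_budget_report(slo_target: float, good_events: int, total_events: int) -> dict:
    """Summarize how much of an SLO's error budget has been consumed.

    slo_target is the fraction of events that must be good, e.g. 0.999.
    Real SLOs would come from your telemetry pipeline over a defined
    rolling window; this is just the arithmetic.
    """
    if total_events == 0:
        return {"availability": None, "budget_consumed": 0.0}
    availability = good_events / total_events
    allowed_bad = (1 - slo_target) * total_events   # error budget, in events
    actual_bad = total_events - good_events
    budget_consumed = actual_bad / allowed_bad if allowed_bad else float("inf")
    return {
        "availability": availability,
        "budget_consumed": budget_consumed,  # values above 1.0 mean the SLO is blown
    }

# Example: a 99.9% availability SLO over 1,000,000 requests with 750 failures
# yields 99.925% availability and 75% of the error budget consumed.
print(error_budget_report(0.999, 1_000_000 - 750, 1_000_000))
</code></pre>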
<h2 id="responding-and-recovering-for-software-resilience">Responding and Recovering for Software Resilience</h2>
<p><em>Takeaways from Chapter 6</em></p>
<ul>
<li>Incidents are like a pop quiz. To prepare for them and ensure we can respond with grace, we must practice incident response activities—and can do so through chaos experimentation.</li>
<li>The Effort Investment Portfolio applies to incident response too. Effort expended earlier in the software delivery lifecycle will reduce the effort required when responding to incidents (this does not mean “shift left,” at least in its popularized / monetized form).</li>
<li>Humans often feel an impulse toward action (action bias), which can reduce effectiveness during incident response. Practicing “watchful waiting” can curtail knee-jerk reactions.</li>
<li>There is no “best practice” for all incidents. The best we can do is practice incident response activities to nurture human responders’ adaptive capabilities.</li>
<li>Repeated practice of response activities through chaos experimentation can turn incidents from stressful, scary situations into confidence-building, problem-solving scenarios.</li>
<li>Recovering from incidents requires adaptation, and learning is a prerequisite for this adaptation. Learning from incidents to develop memory of failure is about community, so if we blame community members for the incident, we will struggle to learn.</li>
<li>A blameless culture helps organizations stay in a learning mindset — uncovering problems early and gaining clarity around incidents — rather than play the “blame game.” It encourages people to speak up about issues without fear of being punished for doing so.</li>
<li>There are two contributing factors always worth discussing during incident review: relevant production pressures and system properties.</li>
<li>Humans at the “sharp end,” who interact directly with the system, are often blamed for incidents by humans at the “blunt end,” who influence the system but interact indirectly (like administrators, policy prescribers, or system designers). The disconnect between the two can be summarized as the delta between “work-as-practiced” and “work-as-imagined.”</li>
<li>The cybersecurity industry often (unproductively) blames users for causing failures, as evidenced by the acronym PEBKAC: problem exists between keyboard and chair. A more useful heuristic is PEBRAMM: problem exists between reality and mental model. An error represents a starting point for investigation; it is a symptom that indicates we should reevaluate design, policy, incentives, constraints, or other system properties.</li>
<li>There are numerous biases that tempt us to blame human error during incidents, which hinders our capacity to constructively learn from and adapt to failure. With hindsight bias, we allow our present knowledge to taint our perception of past events (the “I knew it all along” effect). With outcome bias, we judge the quality of a decision based on its eventual outcomes. The just-world hypothesis refers to our preference for believing the world is an orderly, just, and consequential place. All of these biases warp our perception of reality.</li>
<li>During incident review, use neutral practitioner questions to stay curious and intellectually honest. Neutral practitioner questions re-create the context surrounding an event and ask practitioners what actions they would take given this context. This helps sketch a portrait of local rationality: the reasonable course of action in the presence of contextual trade-offs and constrained information-processing capabilities.</li>
</ul>
<h2 id="platform-resilience-engineering">Platform Resilience Engineering</h2>
<p><em>Takeaways from Chapter 7</em></p>
<ul>
<li>At the “meta-design” level, we can sustain resilience through organizational structure and practices—transforming from a siloed security program into a platform engineering model (“platform resilience engineering”).</li>
<li>We must be aware of production pressures and how they tip sociotechnical systems toward failure. Production pressures involve the incentivization of less expensive and more efficient work, with quality (and security as its subset) as the typical sacrifice.</li>
<li>A platform engineering approach to resilience treats security as a product with end users, as something created through a process that provides benefits to a market (with internal teams as our customers). Platform Engineering teams identify real problems, iterate on a solution, and prioritize usability to promote adoption. Resilience is a natural fit for their purview.</li>
<li>Any product requires a long-term vision — a unifying theme for all your projects toward a defined end. The vision tells a story of what is being built and why.</li>
<li>Treating resilience — including security — as a product starts with identifying the right user problems to tackle. To accurately define user problems, we must understand their local context. We must understand how our users make tradeoffs under pressure, maintain curiosity about the workarounds they create, and respect the limitations of their brains’ computational capacity (“cognitive load”).</li>
<li>Security solutions become less reliable as their dependence on human behavior increases. The Ice Cream Cone Hierarchy of Safety Solutions helps us prioritize how we design security solutions, from most to least effective. Starting from the top of the cone, we can eliminate hazards by design; substitute less hazardous methods or materials; incorporate safety devices and guards; provide warning and awareness systems; and, last and least effective, apply administrative controls (like guidelines and training).</li>
<li>There are two possible paths we can pursue when solving user problems: <a href="https://kellyshortridge.com/blog/posts/control-vs-resilience-cybersecurity-strategy/">the control strategy or the resilience strategy</a>. The control strategy designs security programs based on what security humans think other humans <em>should</em> do; it is convenient for the Security team at the expense of others’ convenience. The resilience strategy promotes and designs security based on how humans <em>actually</em> behave; success is when our solutions align with the reality of work-as-done. The control strategy makes users responsible for security while the resilience approach makes those designing security programs and solutions responsible for it.</li>
<li>We should build minimum viable products (MVPs) and pursue an iterative change model informed by user feedback.</li>
<li>We should gain consensus about our plans for solving resilience problems — from vision through to implementation of a specific solution — and ensure stakeholders understand the <em>why</em> behind our solutions. Success is solving a real problem in a way that delivers consistent value.</li>
<li>To facilitate solution adoption, we must plan for migration and pave the road for our customers to adopt what we’ve created for them (hence the strategy of creating &ldquo;paved roads&rdquo;). We should never force solutions on other humans; if that is the only way to drive adoption, then it is a failure of our design, strategy, and communication.</li>
<li>Measuring product success is necessary for our feedback loops, but can be tricky. If we design solutions for use by engineering teams, the <a href="https://queue.acm.org/detail.cfm?id=3454124">SPACE framework</a> offers numerous success criteria we can measure. In general, we should be curious about the factors contributing to success and failure for our internal customers.</li>
<li>Any metrics related to how “secure” or “risky” something is, like percentage of “risk coverage,” are busywork based on measuring the (highly subjective) unmeasurable. We need to measure our program’s success—and any solutions we design as part of it—based on tangible, realistic goals.</li>
</ul>
<h2 id="security-chaos-experiments">Security Chaos Experiments</h2>
<p><em>Takeaways from Chapter 8</em></p>
<ul>
<li>Experimentation is a cycle of discovery and learning, which is what drives scientific progress. Resilience stress tests (aka security chaos experiments) are like applying the scientific method to software and systems security.</li>
<li>Early adopters of security chaos experimentation learned three key lessons: first, it’s fine to start in nonproduction environments because you can still learn a lot; second, use past incidents as inspiration for experiments and to leverage organizational memory; third, make sure to publish and evangelize your experimental findings because expanding adoption will become your hardest challenge (the technical work is comparatively easy).</li>
<li>To set chaos experiments up for success, especially the first time, we need to socialize the experiment with relevant stakeholders. Investing in the right messaging and framing at the beginning will reduce friction later.</li>
<li>The next step is designing an experimental hypothesis. Hypotheses typically take the form of: “In the event of the following X condition, we are confident that our system will respond with Y.”</li>
<li>Once we have a hypothesis, we can design our experiment so we uncover the behavior about which we want to learn. There are numerous considerations: where we conduct the experiment, how we measure success, potential impacts, fallback procedures, and more.</li>
<li>Documenting a precise experiment design specification (“spec”) is critical; a minimal illustrative spec follows this list. Our goal with the spec is for our organization to gain a luculent understanding of why we’re conducting this experiment, when and where we’re conducting it, what it will involve, and how it will unfold.</li>
<li>Launching an experiment is not unlike a feature release. Our preparation in socializing the experiment, designing the hypothesis, and defining the experiment specifications makes this one of the easier phases.</li>
<li>What evidence we collect when conducting an experiment is defined by the spec; we should already know what we’re monitoring and what evidence we expect.</li>
<li>The first step after we’ve collected evidence is confirming we collected the evidence we sought from the experiment. The second step is to analyze the data with regard to the hypothesis. Our goal is to compare our observations with our predictions — to verify and refine our mental models of the system, which informs what actions we can take to sustain its resilience to adversity.</li>
<li>We should communicate our experimental findings through release notes. Most stakeholders don’t need lots of detail; we should synthesize and summarize our experimental insights, highlighting any action items. Once those action items are performed, we can rerun the experiment.</li>
<li>After your first experiment (or the first manual run of any new experiment), you can automate it for continuous use. Because our systems — and the reality around them — are constantly changing, we must continuously generate evidence lest it grow stale.</li>
<li>Game days, a more manual form of conducting a security chaos experiment, can help more hesitant organizations ease into chaos experimentation.</li>
<li>There is no end to the kinds of security chaos experiments you can conduct in your systems. In this chapter, I enumerated many applicable to production infrastructure, build pipelines, service-oriented environments, and Windows environments.</li>
</ul>
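<p>For flavor, here is a minimal sketch of what an experiment spec might look like as a data structure (my own illustration; the field names and the expired-certificate scenario are hypothetical, not a prescribed format from the book):</p>
<pre><code class="language-python">from dataclasses import dataclass, field

@dataclass
class ExperimentSpec:
    """A minimal chaos experiment spec; all field names are illustrative."""
    name: str
    hypothesis: str      # "In the event of X, we are confident the system will respond with Y"
    environment: str     # where we run it (e.g. staging first)
    method: str          # how we inject the condition
    signals: list = field(default_factory=list)            # evidence we expect to collect
    abort_conditions: list = field(default_factory=list)   # when to stop early
    fallback: str = ""   # how we restore normal operation

spec = ExperimentSpec(
    name="expired-tls-cert-on-internal-service",
    hypothesis=("In the event of an expired TLS certificate on an internal "
                "service, we are confident alerts fire within 5 minutes and "
                "callers fail closed."),
    environment="staging",
    method="swap in a certificate that expired yesterday",
    signals=["alert latency", "caller error codes", "time to on-call page"],
    abort_conditions=["customer-facing error rate rises above baseline"],
    fallback="restore the valid certificate from the secrets store",
)
print(spec.hypothesis)
</code></pre>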
<h2 id="case-studies">Case Studies</h2>
<p><em>Takeaways from Chapter 9</em></p>
<p>It&rsquo;s hard to capture the cliff notes for the case studies Aaron Rinehart compiled from UnitedHealth Group, Verizon, OpenDoor, Cardinal Health, Accenture Global, and Capital One. Plus, it&rsquo;s the only chapter I didn&rsquo;t write on my own in the book, so I feel I wouldn&rsquo;t do it justice.</p>
<p>But, my personal takeaways from the case studies are:</p>
<ol>
<li>
<p>Collaboration is key; to succeed in security, we must not only establish healthy communication with other teams but be open to them teaching us, too — especially platform engineering and SRE teams, who possess a wealth of experience we can leverage in our resilience journey.</p>
</li>
<li>
<p>The resilience / SCE transformation is not exclusive to a certain type of organization. Fortune 10, highly regulated organizations can pursue it. So can smaller, scrappy startups. And this speaks to something I really tried to emphasize in the book: the resilience approach isn&rsquo;t about revolutionizing X, Y, Z overnight and doing them perfectly; it&rsquo;s about iteratively changing how you do things towards more resilient outcomes. It&rsquo;s about experimentation at basically all levels, whether experimenting with new modes of collaboration with software engineering teams, conducting resilience stress tests, or trying any of the zillion strategies I described in the book. For some organizations, it&rsquo;ll be easier to iteratively <a href="https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/">migrate to memory safe languages</a>; for others, it&rsquo;ll be easier to migrate workloads to immutable infrastructure, implement integration testing, or pursue any of the many other changes that can make a difference outcome-wise.</p>
</li>
<li>
<p>A learning culture is critical. This doesn&rsquo;t mean we defenestrate all caution so it flails on the restless winter winds. It means we conduct small experiments to generate evidence and inform what we do next. It means we assume by default that most humans are just trying to do their jobs, and that means we don&rsquo;t automatically blame them when something goes wrong — nor do we treat our colleagues like our adversaries (as it turns out, cyber criminals are our real adversaries<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, who knew!). We look critically at how our systems and processes are designed, then brainstorm how to iteratively improve them over time. If a system is confusing or cumbersome to use, then that&rsquo;s a design problem, not a &ldquo;human error&rdquo; problem. Nearly every case study in this chapter highlights the importance of psychological safety in the transformation towards resilience for a reason.</p>
</li>
</ol>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The book has like nine million citations, so in no way do I think I am solely responsible for software resilience being a thing. However, to my knowledge, I am the only person to have synthesized disparate research across dozens of disciplines; filled in gaps both philosophically and practically; extended all of that with tons of original contributions from concepts to specific activities; and packaged it into an end-to-end strategy for organizations of all kinds to adopt. While I&rsquo;ve often been the Agitator in the <a href="https://ssir.org/articles/entry/should_you_agitate_innovate_or_orchestrate">Agitator-Innovator-Orchestrator model of change</a>, the book is my attempt at being the Innovator for the movement &ndash; conceptualizing and communicating, at great length, potential solutions to the problems wrought by traditional cybersecurity and poor software quality.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I am purposefully using the misnomer &ldquo;cliff notes&rdquo; to avoid getting sued, jk hopefully&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Presenting the takeaways as digestible bullet points also makes it easy for you to copy and paste into your next LinkedIn Thought Leadership post (with attribution :) so you can dazzle your followers with your newfound knowledge as they doomscroll.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Yes, I am aware nation state actors exist. But per <a href="https://kellyshortridge.com/blog/posts/kellys-kommentary-on-verizon-dbir-2023/">the Verizon Data Breach Investigations Report</a> year after year, nation states are like less than 5% of incidents, while money-driven cyber crimes are like 95%. Your threat model should start with cyber criminals before you assume you&rsquo;re gonna get a <a href="https://www.forbes.com/sites/andygreenberg/2011/08/01/meet-comex-the-iphone-uber-hacker-who-keeps-outsmarting-apple/">comex-</a> / <a href="https://www.wired.com/2012/03/zero-days-for-chrome/">PinkiePie</a>-style chain of 0day thrown your way.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Cybersecurity Isn&#39;t Special</title>
            <link>https://kellyshortridge.com/blog/posts/cybersecurity-isnt-special/</link>
            <pubDate>Wed, 13 Dec 2023 08:00:02 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/cybersecurity-isnt-special/</guid>
            <description>Cybersecurity programs are infamous as gatekeepers, power tripping in their virtual CAT machines to dump roadblocks and jackhammer potholes on software delivery. This draws ire from nearly every other software delivery stakeholder but is often justified due to the “fact” that cybersecurity reflects a uniquely complex set of challenges.
This is not, in fact, a fact. It’s closer to propaganda. Cybersecurity isn’t that special. And cybersecurity shouldn’t be that special if we want to minimize the damage of cyberattacks in our software systems.
In this post, I’ll excoriate special snowflake security programs then offer opportunities for how we can make our programs more constructive rather than constrictive.
Is cybersecurity special? For the sake of honest discourse, let’s consider some of the reasons cyber thought leaders often cite why cybersecurity is a special snowflake – and debunk them:
1. It’s hard to prove our value because it’s based on counterfactuals.
Site reliability engineering (SRE) teams face the same challenge, but mewl much less about it.
2. There are so many software changes across so many teams and how could we possibly make improvements to those activities?
This is literally the mission of platform engineering teams, and many can even provide evidence for the value of their improvements.
3. Attackers are especially clever humans whose purpose in life is to harm us.
Attackers can be clever, but often aren’t – and also the reliability failures we see from “especially clever” developers/teams can foment far more harm than attackers could ever dream. Interns were destroying data long before ransomware gangs were.
4. We’re seen as a cost center!
Again, talk to your SREs or platform engineers or, if you want to hear what it’s really like to be continually dismissed, your D&amp;I colleagues.
5. Software systems are so complex, how could we ever hope to understand them enough to secure them?
Ask basically anyone in your engineering org whether software complexity makes their job harder, especially anyone who mumbles about Lamport clocks during architecture reviews. You think having a nation state as an adversary is bad? They’re fighting against the fundamental laws of physics.
With all due respect
In sum, cybersecurity really isn’t as esoteric and arcane a problem as we believe. Yet, I’m often met with disbelief by cyberppl that things are really that hard for their platform/infra/devops/SRE colleagues. Sometimes it feels like cybersecurity leaders and engineers think that all these teams were automagically bequeathed respect – autonomy, budget, authority – by the business.
And, in that vein, I’ve heard CISO described as “the hardest job in corporate America” and maybe it is if you’re scrambling to cover up felonies, but nearly every problem I hear security leaders complain about is mirrored on the infra and platform and SRE side of things1.
This respect must be earned. These other teams earn it by solving reliability and developer productivity challenges in clever ways. They do the hard work of thinky thinky and buildy buildy rather than foisting cumbersome policies and tools on software engineers in what I call the SSB model (for “sink or swim, bitch”)2. They don’t carve 100 security commandments into Confluence; they build patterns, frameworks, and tooling that encode the right requirements to make the better way the easier, faster way for software engineers.
If cybersecurity wants to earn similar respect, it can’t keep roadblocking and gatekeeping software. It can’t pretend like security failure is so distinct in importance and impact that it requires completely separate workflows, stacks, reviews, tooling, design, and basically everything else. Attackers accessing our systems without our consent is one type of failure, but not the only kind. Reliability failures are arguably both more frequent and more damaging when they occur; developer productivity failures can mean the difference between successful market differentiation and losing market share.
Resilience, not roadblocks
This is precisely why I emphasize software resilience, because it encompasses our reliability and cybersecurity concerns. It’s about our goal outcome: we want systems that can adapt to failures and opportunities alike in an ever-changing world. Those failures can be borne by cyberattackers or by performance bugs or a broken developer experience (DX) – the difference really doesn’t matter as much as we think.
The common “enemy” is unintended behavior. Indeed, it is our ancient archnemesis, the eternal foe that formal methods could not vanquish. Having separate pipelines, observability stacks, or review processes for every contributing factor to unintended behavior would be an operational disaster, and yet cybersecurity insists on precisely this for itself.
It really doesn’t make sense for cybersecurity to have its own special snowflake process for things. It does not make sense operationally, philosophically, or socially. It does not make sense to sustain systems resilience. And it even does not make sense for software security.
If we want to sustain resilience at scale, then we should brainstorm strategies to improve system resilience across failures. In a practical sense, it doesn’t matter whether a service goes down due to a performance bug or an attacker exploiting a security bug; the outcome to the business is lost revenue either way.
For example, a queue or message broker could help us restore service health in either scenario3. Should we ignore this design-based solution in favor of the security team needing to review and approve every single software release because it potentially has exploitable bugs and now the backlog is at least six months long which leads to larger batch sizes by software engineers which leads to higher failure rates and by the Eight Divines4, why does anyone actually think this is an okay state of affairs? The empire building may feel good, but it’s bad for everyone.
By now, there are some security leaders5 fuming that I’m telling them to rip out their precious pet process. This is about the time when, in an IRL convo, they usually retort, “Well, what are you saying, that we shouldn’t care about cybersecurity at all?” That is never what I’m saying. We should care about cybersecurity but we should not silo it or treat its concerns as separate because it actually worsens the outcomes we purportedly care about long-term.
So, what do we do instead of roadblocking? There are a ton of opportunities cybersecurity and platform engineering teams can pursue to achieve the end goal we seek – and far more effectively than heavy-handed cyber design / code reviews. Let’s go through some.
What to do instead of roadblocking
1. Self-certification to guidelines. Let product engineering teams self-certify based on (relatively) complete guidelines we’ve written beforehand. Some cybersecurity teams claim they already do this, but software engineers often find these guidelines too vague and will thus work around them. Typically, this is because the cybersecurity team operates with an “I know it when I see it” mentality on how “secure” looks. And because the cybersecurity team does not have software engineering expertise, it’s often divorced from how software delivery actually works.
2. Follow Platform/SRE’s lead. Integrate security design reviews into the general design/architecture review process (and same with code reviews). Collaborate with site reliability engineering (SRE) or platform engineering teams on this; learn how they ensure their reliability requirements are considered during design review and… do exactly that, basically. Ultimately, both stakeholders (reliability and cybersecurity) want similar outcomes, so the more we can find design opportunities that eliminate or reduce hazards in the system – towards resilience and security by design – the safer and more reliable our code will be (i.e. higher quality).
3. Build standardized patterns. Build standard solutions for “cross-cutting concerns” that apply to all services and software (also called “paved roads” or “patterns”). These are typically built by platform engineering teams for cross-cutting concerns related to performance and reliability. Where platform engineering teams have built them for cybersecurity concerns, it is because they grew frustrated by the cybersecurity team’s inertia and inefficacy, and thus built the standard solutions themselves.
There are a few cybersecurity teams who already build these standard solutions in practice (like at Netflix, Block, and others who I wish would publicize their efforts, hint hint). But I also still hear CISOs who claim doing so is “impossible” (it is objectively not).
For instance, we can create architectural or coding patterns that make it easier to write safer code (or, conversely, make it harder to introduce certain classes of bugs). We can provide standardized libraries for middleware that’s fraught with hazards, like authentication. Product engineering teams thereby save time by not having to implement authN themselves and we gain confidence that there aren’t a bunch of potentially jank, ad-hoc authN disasters lurking.
4. Abandon the perimeter model. Abandon the perimeter, moat-and-castle model that requires all software to be “secure” always and forever to uphold the security properties we want – because guess what, that assumption will inevitably fail. Many organizations already have dev, test, or staging environments that allow us to evaluate our assumptions about how our software works. These environments also serve as a pattern for creating isolated environments that can contain failure impact (see #7).
If your organization doesn’t already have a test or staging environment, that’s a great starter project for your cybersecurity team to make an impact; while you’re at it, advocate for integration testing, too.
5. Advise, don’t dictate. Provide an advisory service where software engineers can ask us for assistance or guidance on security-related matters. It’s better if designers and engineers can balance security tradeoffs with the others they face such as reliability, maintainability, and time-to-market instead of the security team dictating design requirements. This is a key way to align the cybersecurity program with the business.
6. Ask platform teams to integrate security. Work proactively with infrastructure and platform teams to integrate security use cases into their designs, so that the wider product engineering community can delegate security concerns to the libraries and templates they’re using.
7. Provide isolation patterns. Provide software engineering teams with mechanisms to isolate COTS software. The goal is to, at a minimum, make it compliant and reduce incident impact by design. That way, our engineering teams don’t have to beg vendors to implement our organization’s special snowflake security requirements (what the rest of the world would consider “bizarre”) or fork open-source software (OSS) to implement those requirements ourselves.
8. Conduct user research.6 Make the more secure / safer way the more delightful way, too: faster, easier, simpler. By conducting user research – taking the time to understand our users’ goals, constraints, workflows, and emotional journeys in a given activity – we can tailor the solutions we build and implement to help them achieve what they want in a safe way.
As one relatively common example, rolling out SSO can result in a delightful experience for engineering teams because all their apps are now in one place; the user just needs to SSO into their apps for access rather than maintaining N separate accounts (reducing the number of steps)7. It means we’ve improved security while boosting speed and ease of use. Our business will be dead chuffed.
Towards better cybering
I mentioned that I’ve heard people sincerely describe the CISO role as one of the hardest jobs in corporate America. They should be delighted to hear that the CISOs I know who follow at least some of the above seem to enjoy their jobs much more than those who are still attempting to use carriage whips to steer their devs.
In the model described above – and indeed in the Platform Resilience Engineering discipline I describe in my book – we get to build things and solve real problems for real humans in an empathetic way. We can behold tangible improvements, not vague successes like “improved risk coverage by X%.”8
We don’t have to feel so alone in the cybersecurity struggle. This approach gives us opportunities to learn from our platform engineering and SRE colleagues – to develop a collaborative vibe rather than a combative one. These teams want to help us solve security problems; they don’t want to create or implement roadblocks, but we shouldn’t either. We should want sustained resilience – because that means we can sustain our organization’s success.
tl;dr Paved roads, not roadblocks.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
Thanks to Lita Cho and Gregory Poirier for feedback.
The exception is when the cybersecurity program also covers content moderation, including screening for sexual and child abuse; that is truly one of the most difficult jobs in corporate America. ↩︎
Instead, we want the SSB of “Science &amp; Sensemaking, Bitch [affectionate].” ↩︎
What especially troubles me is when I give this example, many security leaders do not know what queues or message brokers are… ↩︎
I know the Talos worshippers will be offended but like, really, he deserves a place alongside of Akatosh, a time lord dragon?? To be clear, I’m vehemently not on Team Thalmor and whatever they want to do with the Towers, but I’m also not really down with monarchy and how effective was Tiber’s unification of Tamriel in the long run, anyway? He’s basically a war lord that fled a climate crisis only to completely reshape an entire ecosystem because his people hated jungles and fuck the people already living there, right? Imperial and Nord stans, come at me. ↩︎
There are other security leaders who are excited by this prospect but still filled with disbelief. And a select few who are like “hells yeah, show me the way.” That tends to make the project (or conversation) more fun. ↩︎
There are eight opportunities described, which mirrors the eight divines quip from earlier because I’m a ho for foreshadowing. ↩︎
I refer to this goal of reducing the number of steps our users must perform as “What Would Gilbreth Do?”, which I introduced in my book. ↩︎
There are still some wayward engineering leaders who believe code coverage is a good metric, but way way more cybersecurity leaders who only recently discovered it as a metric and love it. This is why I will soon flee to the woods to live in a cottage and forsake my role as cybersecurity Cassandra. This also makes the eighth footnote, which, per footnote 6, is yet again a reprise of the motif. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Cybersecurity programs are infamous as gatekeepers, power tripping in their virtual CAT machines to dump roadblocks and jackhammer potholes on software delivery. This draws ire from nearly every other software delivery stakeholder but is often justified due to the “fact” that cybersecurity reflects a uniquely complex set of challenges.</p>
<p>This is not, in fact, a fact. It’s closer to propaganda. Cybersecurity isn’t that special. And cybersecurity <em>shouldn’t</em> be that special if we want to minimize the damage of cyberattacks in our software systems.</p>
<p>In this post, I’ll excoriate special snowflake security programs then offer opportunities for how we can make our programs more constructive rather than constrictive.</p>
<h2 id="is-cybersecurity-special">Is cybersecurity special?</h2>
<p>For the sake of honest discourse, let’s consider some of the reasons cyber thought leaders often cite why cybersecurity is a special snowflake – and debunk them:</p>
<p><strong>1. It’s hard to prove our value because it’s based on counterfactuals.</strong></p>
<p>Site reliability engineering (SRE) teams face the same challenge, but mewl much less about it.</p>
<p><strong>2. There are so many software changes across so many teams and how could we possibly make improvements to those activities?</strong></p>
<p>This is literally the mission of platform engineering teams, and many can even provide evidence for the value of their improvements.</p>
<p><strong>3. Attackers are especially clever humans whose purpose in life is to harm us.</strong></p>
<p>Attackers <em>can</em> be clever, but often aren’t – and also the reliability failures we see from “especially clever” developers/teams can foment far more harm than attackers could ever dream. Interns were destroying data long before ransomware gangs were.</p>
<p><strong>4. We’re seen as a cost center!</strong></p>
<p>Again, talk to your SREs or platform engineers or, if you want to hear what it’s really like to be continually dismissed, your D&amp;I colleagues.</p>
<p><strong>5. Software systems are so complex, how could we ever hope to understand them enough to secure them?</strong></p>
<p>Ask basically anyone in your engineering org whether software complexity makes their job harder, especially anyone who mumbles about Lamport clocks during architecture reviews. You think having a nation state as an adversary is bad? They’re fighting against the fundamental laws of physics.</p>
<h2 id="with-all-due-respect">With all due respect</h2>
<p>In sum, cybersecurity really isn’t as esoteric and arcane a problem as we believe. Yet, I’m often met with disbelief by cyberppl that things are really that hard for their platform/infra/devops/SRE colleagues. Sometimes it feels like cybersecurity leaders and engineers think that all these teams were automagically bequeathed respect – autonomy, budget, authority – by the business.</p>
<p>And, in that vein, I’ve heard CISO described as “the hardest job in corporate America” and maybe it is if you’re scrambling to <a href="https://www.justice.gov/usao-ndca/pr/former-chief-security-officer-uber-sentenced-three-years-probation-covering-data">cover up felonies</a>, but nearly every problem I hear security leaders complain about is mirrored on the infra and platform and SRE side of things<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>This respect must be earned. These other teams earn it by solving reliability and developer productivity challenges in clever ways. <em>They</em> do the hard work of thinky thinky and buildy buildy rather than foisting cumbersome policies and tools on software engineers in what I call the SSB model (for “sink or swim, bitch”)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. They don’t carve 100 security commandments into Confluence; they build patterns, frameworks, and tooling that <em>encode</em> the right requirements to make the better way the easier, faster way for software engineers.</p>
<p>If cybersecurity wants to earn similar respect, it can’t keep roadblocking and gatekeeping software. It can’t pretend like security failure is so distinct in importance and impact that it requires completely separate workflows, stacks, reviews, tooling, design, and basically everything else. Attackers accessing our systems without our consent is one type of failure, but not the only kind. Reliability failures are arguably both more frequent and more damaging when they occur; developer productivity failures can mean the difference between successful market differentiation and losing market share.</p>
<h2 id="resilience-not-roadblocks">Resilience, not roadblocks</h2>
<p>This is precisely why I emphasize software <em>resilience</em>, because it encompasses our reliability and cybersecurity concerns. It’s about our goal outcome: we want systems that can adapt to failures and opportunities alike in an ever-changing world. Those failures can be borne by cyberattackers or by performance bugs or a broken developer experience (DX) – the difference really doesn’t matter as much as we think.</p>
<p>The common “enemy” is unintended behavior. Indeed, it is our ancient archnemesis, the eternal foe that formal methods could not vanquish. Having separate pipelines, observability stacks, or review processes for every contributing factor to unintended behavior would be an operational disaster, and yet cybersecurity insists on precisely this for itself.</p>
<p>It really doesn’t make sense for cybersecurity to have its own special snowflake process for things. It does not make sense operationally, philosophically, or socially. It does not make sense to sustain systems resilience. And it even does not make sense for software security.</p>
<p>If we want to sustain resilience at scale, then we should brainstorm strategies to improve system resilience <em>across</em> failures. In a practical sense, it doesn’t matter whether a service goes down due to a performance bug or an attacker exploiting a security bug; the outcome to the business is lost revenue either way.</p>
<p>For example, a queue or message broker could help us restore service health in either scenario<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. Should we ignore this design-based solution in favor of the security team needing to review and approve every single software release because it potentially has exploitable bugs and now the backlog is at least six months long which leads to larger batch sizes by software engineers which leads to higher failure rates and by the Eight Divines<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, why does anyone actually think this is an okay state of affairs? The empire building may feel good, but it’s bad for everyone.</p>
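<p>To sketch what I mean (a toy, in-process stand-in for a real broker like RabbitMQ, SQS, or Kafka; the function names are hypothetical): the user-facing handler only enqueues work, so requests are not lost while the downstream service is degraded, whatever the cause, and the backlog drains once it recovers.</p>
<pre><code class="language-python">import queue
import time

# In-process stand-in for a real message broker; the decoupling pattern is
# the point, not this specific object.
work_queue: "queue.Queue[dict]" = queue.Queue()

def accept_order(order: dict) -> None:
    """The user-facing handler only enqueues, so it stays fast and available
    even while the downstream service is degraded or pulled offline."""
    work_queue.put(order)

def fulfill(order: dict) -> None:
    print(f"fulfilled order {order['id']}")

def process_orders(downstream_healthy) -> None:
    """A worker drains the backlog whenever the downstream service is healthy,
    whether it was down due to a performance bug or an exploited security bug."""
    while not work_queue.empty():
        if not downstream_healthy():
            time.sleep(1)   # back off and let the downstream recover
            continue
        fulfill(work_queue.get())

accept_order({"id": 1})
process_orders(downstream_healthy=lambda: True)
</code></pre>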
<p>By now, there are some security leaders<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> fuming that I’m telling them to rip out their precious pet process. This is about the time when, in an IRL convo, they usually retort, “Well, what are you saying, that we shouldn’t care about cybersecurity <em>at all</em>?” That is never what I’m saying. We <em>should</em> care about cybersecurity but we should not silo it or treat its concerns as separate because it actually worsens the outcomes we <a href="https://kellyshortridge.com/blog/posts/what-does-the-word-security-mean/">purportedly care about</a> long-term.</p>
<p>So, what do we do instead of roadblocking? There are a ton of opportunities cybersecurity and platform engineering teams can pursue to achieve the end goal we seek – and far more effectively than heavy-handed cyber design / code reviews. Let’s go through some.</p>
<h2 id="what-to-do-instead-of-roadblocking">What to do instead of roadblocking</h2>
<p><strong>1. Self-certification to guidelines.</strong> Let product engineering teams self-certify based on (relatively) complete guidelines we’ve written beforehand. Some cybersecurity teams claim they already do this, but software engineers often find these guidelines too vague and will thus work around them. Typically, this is because the cybersecurity team operates with an “I know it when I see it” mentality on how “secure” looks. And because the cybersecurity team does not have software engineering expertise, it’s often divorced from how software delivery actually works.</p>
<p><strong>2. Follow Platform/SRE’s lead.</strong> Integrate security design reviews into the general design/architecture review process (and same with code reviews). Collaborate with site reliability engineering (SRE) or platform engineering teams on this; learn how they ensure their reliability requirements are considered during design review and… do exactly that, basically. Ultimately, both stakeholders (reliability and cybersecurity) want similar outcomes, so the more we can find design opportunities that eliminate or reduce hazards in the system – towards resilience and security by design – the safer and more reliable our code will be (i.e. higher quality).</p>
<p><strong>3. Build standardized patterns.</strong> Build standard solutions for “cross-cutting concerns” that apply to all services and software (also called “paved roads” or “patterns”). These are typically built by platform engineering teams for cross-cutting concerns related to performance and reliability. Where platform engineering teams have built them for cybersecurity concerns, it is because they grew frustrated by the cybersecurity team&rsquo;s inertia and inefficacy, and thus built the standard solutions themselves.</p>
<p>There are a few cybersecurity teams who already build these standard solutions in practice (like at <a href="https://netflixtechblog.com/the-show-must-go-on-securing-netflix-studios-at-scale-19b801c86479">Netflix</a>, <a href="https://developer.squareup.com/blog/connecting-block-business-units-with-aws-api-gateway/">Block</a>, and others who I wish would publicize their efforts, hint hint). But I also still hear CISOs who claim doing so is “impossible” (it is objectively not).</p>
<p>For instance, we can create architectural or coding patterns that make it easier to write safer code (or, conversely, make it harder to introduce certain classes of bugs). We can provide standardized libraries for middleware that’s fraught with hazards, like authentication. Product engineering teams thereby save time by not having to implement authN themselves and we gain confidence that there aren’t a bunch of potentially jank, ad-hoc authN disasters lurking.</p>
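<p>As a rough sketch of what such a standardized library could look like (mine, not any particular team&rsquo;s paved road; the token scheme, names, and handler are all hypothetical), product teams would apply a decorator instead of hand-rolling authentication checks:</p>
<pre><code class="language-python">import functools
import hashlib
import hmac

# Sketch of a "paved road" authentication helper a platform team might ship
# as an internal library; the token scheme and names are purely illustrative.
_SHARED_SECRET = b"replace-with-a-secret-from-your-secrets-manager"

def _expected_token(user: str) -> str:
    return hmac.new(_SHARED_SECRET, user.encode(), hashlib.sha256).hexdigest()

def require_auth(handler):
    """Decorator product teams apply to request handlers instead of
    implementing their own ad-hoc authentication checks."""
    @functools.wraps(handler)
    def wrapper(request: dict):
        user = request.get("user", "")
        token = request.get("headers", {}).get("Authorization", "")
        # Constant-time comparison avoids the timing side channel a naive
        # equality check would introduce.
        if not hmac.compare_digest(token, _expected_token(user)):
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapper

@require_auth
def get_profile(request: dict):
    return {"status": 200, "body": f"profile for {request['user']}"}

# A caller presenting a valid token for "kelly" gets a 200; anyone else, a 401.
print(get_profile({
    "user": "kelly",
    "headers": {"Authorization": _expected_token("kelly")},
}))
</code></pre>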
<p><strong>4. Abandon the perimeter model.</strong> Abandon the perimeter, moat-and-castle model that requires all software to be “secure” always and forever to uphold the security properties we want – because guess what, that assumption will inevitably fail. Many organizations already have dev, test, or staging environments that allow us to evaluate our assumptions about how our software works. These environments also serve as a pattern for creating isolated environments that can contain failure impact (see #7).</p>
<p>If your organization doesn’t already have a test or staging environment, that’s a great starter project for your cybersecurity team to make an impact; while you&rsquo;re at it, advocate for <a href="https://youtu.be/ZMbqbXxRthE?t=915">integration testing</a>, too.</p>
<p><strong>5. Advise, don’t dictate.</strong> Provide an advisory service where software engineers can ask us for assistance or guidance on security-related matters. It’s better if designers and engineers can balance security tradeoffs with the others they face such as reliability, maintainability, and time-to-market instead of the security team dictating design requirements. This is a key way to align the cybersecurity program with the business.</p>
<p><strong>6. Ask platform teams to integrate security.</strong> Work proactively with infrastructure and platform teams to integrate security use cases into their designs, so that the wider product engineering community can delegate security concerns to the libraries and templates they’re using.</p>
<p><strong>7. Provide isolation patterns.</strong> Provide software engineering teams with mechanisms to isolate COTS software. The goal is to, at a minimum, make it compliant and reduce incident impact by design. That way, our engineering teams don’t have to beg vendors to implement our organization’s special snowflake security requirements (what the rest of the world would consider “bizarre”) or fork open-source software (OSS) to implement those requirements ourselves.</p>
<p><strong>8. Conduct user research.</strong><sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup> Make the more secure / safer way the more delightful way, too: faster, easier, simpler. By conducting user research – taking the time to understand our users&rsquo; goals, constraints, workflows, and emotional journeys in a given activity – we can tailor the solutions we build and implement to help them achieve what they want in a safe way.</p>
<p>As one relatively common example, rolling out SSO can result in a delightful experience for engineering teams because all their apps are now in one place; the user just needs to SSO into their apps for access rather than maintaining N separate accounts (reducing the number of steps)<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>. It means we&rsquo;ve improved security while boosting speed and ease of use. Our business will be dead chuffed.</p>
<h2 id="towards-better-cybering">Towards better cybering</h2>
<p>I mentioned that I’ve heard people sincerely describe the CISO role as one of the hardest jobs in corporate America. They should be delighted to hear that the CISOs I know who follow at least some of the above seem to enjoy their jobs much more than those who are still attempting to use carriage whips to steer their devs.</p>
<p>In the model described above – and indeed in the Platform Resilience Engineering discipline I describe <a href="https://www.securitychaoseng.com/">in my book</a> – we get to <em>build</em> things and solve real problems for real humans in an empathetic way. We can behold tangible improvements, not vague successes like “improved risk coverage by X%.”<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup></p>
<p>We don’t have to feel so alone in the cybersecurity struggle. This approach gives us opportunities to learn from our platform engineering and SRE colleagues – to develop a collaborative vibe rather than a combative one. These teams want to help us solve security problems; they don’t want to create or implement roadblocks, but we shouldn’t either. We should want sustained resilience – because that means we can sustain our organization’s success.</p>
<p>tl;dr Paved roads, not roadblocks.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<hr>
<p>Thanks to Lita Cho and Gregory Poirier for feedback.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The exception is when the cybersecurity program also covers content moderation, including screening for sexual and child abuse; that is truly one of the most difficult jobs in corporate America.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Instead, we want the SSB of “Science &amp; Sensemaking, Bitch [affectionate].”&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>What especially troubles me is when I give this example, many security leaders do not know what queues or message brokers are…&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>I know the Talos worshippers will be offended but like, really, he deserves a place alongside of Akatosh, a <em>time lord dragon</em>?? To be clear, I’m vehemently not on Team Thalmor and whatever they want to do with the Towers, but I’m also not really down with monarchy and how effective was Tiber’s unification of Tamriel in the long run, anyway? He’s basically a war lord that fled a climate crisis only to completely reshape an entire ecosystem because his people hated jungles and fuck the people already living there, right? Imperial and Nord stans, come at me.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>There are other security leaders who are excited by this prospect but still filled with disbelief. And a select few who are like &ldquo;hells yeah, show me the way.&rdquo; That tends to make the project (or conversation) more fun.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>There are eight opportunities described, which mirrors the eight divines quip from earlier because I&rsquo;m a ho for foreshadowing.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>I refer to this goal of reducing the number of steps our users must perform as &ldquo;What Would Gilbreth Do?&rdquo;, which I introduced in my book.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>There are still some wayward engineering leaders who believe code coverage is a good metric, but way, way more cybersecurity leaders who only recently discovered it as a metric and love it. This is why I will soon flee to the woods to live in a cottage and forsake my role as cybersecurity Cassandra. This also makes the eighth footnote, which, per footnote 6, is yet again a reprisal of the motif.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Open-Source Software Security RFI Response from Shortridge Sensemaking LLC</title>
            <link>https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/</link>
            <pubDate>Thu, 02 Nov 2023 08:12:52 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/rfi-open-source-security-response/</guid>
            <description>My frequent co-conspirator, Ryan Petrich, and I submitted a response to the U.S. government’s Request for Information on Open-Source Software Security: Areas of Long-Term Focus and Prioritization (ONCD-2023-0002). This moment in spacetime is a critical juncture in software, not just open-source software (OSS), and we feel privileged to submit our recommendations for the requesting agencies – ONCD, CISA, NSF, DARPA, and OMB – to consider as they traverse these challenges.
It is admittedly exhaustive since we wanted to offer our expertise across as many problems and areas of focus as were relevant. Our response begins by describing multiple “Gordian Knots” we believe will offer the requesting agencies alternative perspectives on the problem at hand. The rest of the response is structured with recommendations in the areas and subareas where our expertise is relevant, in the same order as presented in the RFI. Additionally, we identify and recommend multiple new subareas of focus for prioritization, including isolation, modular design, automation (CI/CD), resilience stress testing, and others; many of these are suffused with the spirit of Gordian Knots.
We are publishing our response in the spirit of transparency; you can read it at the following link: https://kellyshortridge.com/papers/ONCD-2023-0002-Shortridge-Sensemaking.pdf
Note that we are submitting as Shortridge Sensemaking LLC. The views expressed in our response are not necessarily the views of our employers or any of their affiliates. The information contained herein is not intended to provide, and should not be relied upon for, investment advice (which we would hope is obvious).
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
</description>
            <atom:content type="html"><![CDATA[<p>My frequent co-conspirator, Ryan Petrich, and I <a href="https://kellyshortridge.com/papers/ONCD-2023-0002-Shortridge-Sensemaking.pdf">submitted a response</a> to the U.S. government&rsquo;s <a href="https://www.federalregister.gov/documents/2023/08/10/2023-17239/request-for-information-on-open-source-software-security-areas-of-long-term-focus-and-prioritization">Request for Information</a> on Open-Source Software Security: Areas of Long-Term Focus and Prioritization (ONCD-2023-0002). This moment in spacetime is a critical juncture in software, not just open-source software (OSS), and we feel privileged to submit our recommendations for the requesting agencies – ONCD, CISA, NSF, DARPA, and OMB – to consider as they traverse these challenges.</p>
<p>It is admittedly exhaustive since we wanted to offer our expertise across as many problems and areas of focus as were relevant. Our response begins by describing multiple &ldquo;Gordian Knots&rdquo; we believe will offer the requesting agencies alternative perspectives on the problem at hand. The rest of the response is structured with recommendations in the areas and subareas where our expertise is relevant, in the same order as presented in the RFI. Additionally, we identify and recommend multiple new subareas of focus for prioritization, including isolation, modular design, automation (CI/CD), resilience stress testing, and others; many of these are suffused with the spirit of Gordian Knots.</p>
<p>We are publishing our response in the spirit of transparency; you can read it at the following link: <a href="https://kellyshortridge.com/papers/ONCD-2023-0002-Shortridge-Sensemaking.pdf">https://kellyshortridge.com/papers/ONCD-2023-0002-Shortridge-Sensemaking.pdf</a></p>
<p>Note that we are submitting as Shortridge Sensemaking LLC. The views expressed in our response are not necessarily the views of our employers or any of their affiliates. The information contained herein is not intended to provide, and should not be relied upon for, investment advice (which we would hope is obvious).</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
]]></atom:content>
        </item>
        
        <item>
            <title>When we say &#34;security&#34;, what do we mean?</title>
            <link>https://kellyshortridge.com/blog/posts/what-does-the-word-security-mean/</link>
            <pubDate>Thu, 26 Oct 2023 08:00:02 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/what-does-the-word-security-mean/</guid>
            <description>
Introduction
We say the word “security” a lot in tech. Whether we refer to “cybersecurity” or “information security” (or “infosec”), how often do we pause to question what we mean when we say the word security itself?
In general, arguing that words should mean things in infosec is like fighting against the gravity of a supermassive black hole1. Unfortunately for me, I will die on this hill until my inevitable spaghettification. From what I understand from cybersecurity journalism, this persistence makes me a “sophisticated” attacker, perhaps even one of those fabled advanced persistent threats (APTs). My cyberweapon of choice is words. My action on target? Destabilizing the industry’s dereliction of meaning. My APT group name will be SOCRATIC KITTEN.
So, true to the spirit of being advanced and persistent and threatening, I write this to challenge and, with any luck2, overthrow incumbent notions of the security concept while nurturing new notions that inspire and uplift3.
The security concept, like other words-as-concepts (happiness, courage, justice), is an idea, per Plato, perceivable only by the eyes of the mind4. To borrow from Hannah Arendt5, the word “security” is “something like a frozen thought that thinking must unfreeze whenever it wants to find out the original meaning.” To thaw it, we must meditate upon it, steep ourselves in it, let the currents of concept cleanse our preconceptions.
“But Kelly,” you sigh, “shouldn’t you be posting about something practical?” Like, what, the growing ATTACK SURFACE due to HUMAN ERROR that will surely be solved with SECURITY AWARENESS? Much like the spherical cow, such metaphors6 simplify our understanding of the world so it feels comforting and calculable as escapism from the real world, which is very messy. The security people, in no shortage of irony, choose convenience in this trade-off. Humans will interact with systems and do very natural human things and the security people will clutch their pearls and gasp, “But why would they do such a thing?!” Maybe spherical SBOMs will solve security so we can all finally stop being aware of it.
Because requiring awareness is part of the problem. We have a word for when humans are excellent at being aware of threats in their environment: hypervigilance. It is not good when humans are hypervigilant! It means the human is likely traumatized and their nervous system is dysregulated. Unfortunately, the security people want us all to be hypervigilant because nothing says accountability for a problem like telling the potential victims they’re responsible for it.
Imagine, if you will, a parallel SKYSECURITY AWARENESS MONTH where we tell people to be careful whenever walking outside because a piano might fall on their head or that they should be scrutinizing the clouds – their trajectory, color, fullness, and other patterns – to figure out whether they are safe or not. In real life, we have meteorologists and can open an app that tells us whether we probably need an umbrella or sunglasses or to just stay inside to stay safe. Sometimes people will still go outside because that hurricane isn’t going to Instagram itself but there have been and will always be fools and our strategy in a problem domain should not be focused on the minority of fools who will not be persuaded by facts or logic and will gladly jump over guardrails while wondering why they were there in the first place.
My point is that the security people have collapsed upon a meaning of “security” as a concept that is not serving them or users or organizations or society particularly well. The cybersecurity industry’s meaning of “security” is a distortion, in many cases the exact opposite, of what the word means and has meant throughout its long, storied history. That history has much to teach us, which is why it is, in fact, entirely practical and pertinent to explore it on our upcoming semantic safari.
Thus, this essay will illuminate why our current notion of (cyber)security, the concept, is worth re-evaluating through the lens of what “security” has meant over time. True to Socratic tradition7, these essays will not provide a definitive answer. Our path will be circuitous, but we will perhaps absorb a superior sense of what this ineffable concept of “security” is through ouroboric osmosis by the end of our journey8.
We may not produce a definition of “security” by the end (although we will try) but, having pondered the meaning of “security,” we might be able to make our own attempts at it better.
To begin our journey, we must time travel.
The Curious Nature of Securus
It’s a few hundred years before the common era in Rome. You’re chilling in a thermae with your bae admiring the intricate stone mosaic of a rather fetching deity beneath your feet as you feel your pores cleansing in the luscious steam.
Your beloved anaticula9 looks at you and smiles, “If only all our days together could be securus like this,” they say. You smile back and nod in blissful agreement, watching them rest their eyes with a satisfied sigh.
For the securus life is one without care. Securus starts with sē, the Latin prefix for “without,” which combines with cūra, the noun for care, concern, thought, trouble, solicitude, anxiety, grief, and sorrow.
Hence, securus is to enjoy peace of mind (securo animo esse10). Securus is the absence of concern, the absence of a troubled mind. The opposite of securus was sollicitus — the restlessness arising from being filled with fear, apprehension, anxiety, alarm.
Hurtling forward in time to 2023 CE, we can observe that the typical traditionalist infosec program is closer to sollicitus than securus. Fear, uncertainty, and doubt (FUD) pervade – and perhaps define – the industry. FUD are the foundational emotions industry vendors, journalists, and less scrupulous thought leaders exploit for fortune and fame.
Our world is increasingly software and internet but there is a powerful industry that tells us that we should be scared to use software and internet, that it is desirable for us to be uncertain at all times when using software and internet, that we should doubt our perceptions at all times because what if the 13,371,337th link you click or line of code you write in your lifetime causes CYBERGEDDON. All of this anti-securus rhetoric is supposedly in our best interests.
FUD pervades cybersecurity to such an extent that we take for granted that these emotions need not define the security we seek to cultivate. Could FUD not instead be seen as the explicit enemy of security?
Thus, a worthy thought experiment is: how might cybersecurity programs look if they actually pursued the state of being securus? How would an information security program designed to ensure the organization is “without care or concern or anxiety” appear? How would cybersecurity strategy differ if the goal outcome was for users – whether end consumers, software engineers, or employees – to feel care-free and untroubled?
We will explore those questions as we continue our journey. Our next stop is even further back in history, inspecting the inspiration for the word securus in Ancient Greece.
A Platonic Dialogue
Persons of the Dialogue11
SECRATES
THEOXORUS
Theoxorus: I am feeling secure in my knowledge today, Secrates, yet have no doubt you shall shortly annoy me with indistinct inquiries into something simple that we should enjoy simply for simplicity’s sake.
Secrates: Secure! O, my dear Theoxorus, do my ears truly witness you bringing a conversation to me on a shining platter?
Theoxorus: What do you mean, Secrates?
Secrates: I mean that you used the word “secure.” And what does “secure” mean? We should always be on the lookout for such answerless words. I know you do not wish to examine it, but now we must!
Theoxorus: Secrates, no –
Secrates: Do you truly object? Is it not your own lips which revealed the relevance of this word to your very life?
Theoxorus: I… I cannot object.
Secrates: And neither can I. We must proceed and, another time, we can discuss what your hesitance for exploration means. I observe that you grow weary of dissecting words and essences of late. And yet with what else do you fill your days? Is it not your own shame you are unwilling to confront? Is it not this regular discourse that exposes the inner self you wish to –
Theoxorus: Secrates! Let us examine “secure” now and my own soul later.
Secrates: As you wish, my dear Theoxorus, I will now proceed with this inquiry, for which I owe you many thanks. There are two words from our current civilization that serve as the inspiration for securus: ataraksia and asphaleia. Let us proceed first with ataraksia.
Theoxorus: What tongue is the word “securus”?
Secrates: Latin.
Theoxorus: “Latin”?
Secrates: Yes. I have seen into the future during a bathing ritual.
Theoxorus: Ah, Secrates, you indulge in the pleasures of the oracles!
Secrates: Believe what you wish. But let us now proceed, as you insisted. What defines ataraksia but what it is not? It is the negation of taraksia, from tarrassein, which means to trouble the mind, to agitate, to disturb, to stir.
Theoxorus: Just as your incessant inquiries do to me.
Secrates: Precisely. And if ataraksia is the negation of these verbs, may it not be said to reflect calmness, equanimity, tranquility? It is as Pyrrho said, a form of freedom from distress and concern, and as the public says in their less formal dialogue, it is the mental state soldiers must cultivate before battle. Is it a goal, a kind of goodness, that a person must pursue in their lives? The Pyrrhonists, Epicureans, and Stoics would agree with this, each for different reasons reflecting their different philosophical foundations.
Theoxorus: What do you think, Secrates?
Secrates: I know nothing, as you know well, Theoxorus. What matters for our conversation is the essence of ataraksia: a freedom from disturbances, especially of the mental variety. And, then, as I have seen in my bathtub in a very distant future, what matters is that the verbs ataraksia is meant to negate – to disturb, to agitate, to trouble, to stir – are the verbs most associated with traditional cybersecurity. Does this not suggest security then means its very opposite?
Theoxorus: To be sure.
Secrates: And does this not trouble the mind in itself?
Theoxorus: Certainly. But how can you know such contradiction abounds?
Secrates: This future world seems designed by contradiction. Their “security awareness training” exercises, such as those meant to phish humans as one lures a fish with a decoy worm, have the explicit goal of “troubling the mind” to keep persons vigilant for danger. In this future, application security tools are infamous for how they disturb software development and delivery practices – and does that not trouble the minds of software engineers? The list of security rules and policies are unending, often arbitrary – and have they not found a most effective means to agitate the subjects under their dominion?
Theoxorus: They have.
Secrates: And do we believe that such activities result in greater defenses?
Theoxorus: Certainly not, unless one believes that defense is impossible through design. This reminds me of our prior dialogue on beauty, Secrates, as what you describe of this “infosec industry” must make it beauty’s enemy.
Secrates: Are you surprised, Theoxorus, that infosec makes enemies when its goal is to disrupt tranquility? And how could cybersecurity achieve beauty when it sees ugliness and danger in all things outside itself?
Theoxorus: Of course. But surely some interpretation by other schools of thought justifies this perversion?
Secrates: They will not. Ataraksia is seen as a strict requirement to attain the true, full happiness referred to as eudaimonia. It may surprise you, my dear Theoxorus, that the word ataraksia is associated with Epicurean philosophy.
Theoxorus: But calmness seems harmonious with Epicureanism.
Secrates: Did you wish to ask me a question, my friend?
Theoxorus: Your social skills are as crude as unfired amphorae, Secrates. So, then, what is shocking about this?
Secrates: It is shocking because it is hard to imagine a philosophy more opposed to cybersecurity than Epicureanism, which argues that the goal of a sentient life form is to maximize pleasure and minimize pain12. Epicurus is specific in defining pleasure as the absence of pain, and therefore “ethical hedonism” is the pursuit of avoiding pain, including avoiding what imparts pleasure near-term but pain longer-term. Without distracting ourselves by examining Epicureanism in more detail, we can say that the goal they espouse is to foster a life of tranquility. Does this cybersecurity community foster a life of such tranquility?
Theoxorus: They do not.
Secrates: I agree, my friend. Cybersecurity is not known for avoiding pain, regardless of temporal outlook. The cybersecurity community inflicts pain on others, whether by stoking fear or by making lives harder. Is it not fair to argue that infosec even inflicts pain on itself?13 Is it not cruel to cultivate an obsession with vulnerabilities that kindles fear, uncertainty, and doubt when your stated aim is to eliminate them? Do we believe this fetishization of vulnerabilities and lascivious focus on blaming what they call “human error” can be called “ethical hedonism”? Or is it a societal mechanism to stifle introspection and to instead reenact shame? I regret that these questions reflect a topic for another time in the realm of psychology14, which has yet to be invented.
Theoxorus: You tease me, Secrates.
Secrates: Yes, but there is another thing, Theoxorus: what of asphaleia?
Theoxorus: You are unfailing in your pursuit, Secrates.
Secrates: Well, I suspect you might find its origin amusing. Asphaleia originates from wrestling and reflects the capacity to prevent being overthrown, being immovable and steadfast, like the throne of the gods, or like me in the presence of your lamentations and tantrums about our discourse. By roughly a century before our time15, asphaleia also came to mean the stability of the city state, to prevent being overthrown, and, if I permit myself to indulge in speculation, it could be extended to describe the stability of an organization (a kind of social entity I beheld in my bath). And, as some great scientists will prove thousands of years from our day, speed and stability of work harmonize and impart greater value together than apart. And, well, now I can put the matter as: is this infosec, that which slows down work, an enemy of asphaleia?
Theoxorus: Yes, certainly.
Secrates: Very good; and can you tell me how this might be despite asphaleia serving as the seed of the “security” concept’s own existence?
Theoxorus: I must confess, Secrates, that this “security” society of the future seems very lost.
Secrates: I dare say, my friend, that you spend too much time with me if you think it is an uncommon human desire to seek power and control, even at the expense of integrity. And can we truly argue that such desires are always conscious to the subject?
Theoxorus: Alas, they are not.
Secrates: If what you say is true, I ask you, then: what is this cybersecurity society not most of all?
When Secrates had asked his question, for a considerable time there was silence; Theoxorus furrowed his brow while meditating on this question; only Secrates made a sound when softly blowing on the delicate seedheads of a dandelion.
Theoxorus: For what did you wish as you blew, Secrates?
Secrates looked up at Theoxorus and said, with a smile: For you to answer my question.
Theoxorus: I will tell you. My feeling is that this cybersecurity society lacks curiosity.
Secrates: Exactly. The traditional cybersecurity society is kin of Argus Panoptes; the role of enforcer grants them relevancy but not wisdom. Alas, my friend, if only they would follow the path of Daedalus instead. They feel ignorance as a sting and slight, as if ignorance were not the default condition of being alive! But there is more: how are they like the sophist?
Theoxorus: They both are paid to question without truth as their aim.
Secrates: And do they not both gain fortunes from this?
Theoxorus: They do.
Secrates: And are they not both hunters after a living prey, servants of the powerful, cousins of opportunists exploiting emotion for control?
Theoxorus: They are.
Secrates: But where the cybersecurity society differs is they seek the impossible void – the not-being of weakness – and they are willing to destroy whatever being stands in the way of this pyrrhic quest.
The Dawn of “Security” as a Noun: Securitas
The ancient association of securus with Epicureanism was not to last. Epicureanism was outlawed once the Roman Empire entered its rebellious Christianity phase because, as a philosophy, it’s quite incompatible with the idea that souls must be “saved” and that God is relevant to everyday life (Epicurus literally argued that the gods do not gaf about human affairs and do not punish or reward human behavior).
Why is this relevant to cybersecurity? Because – as a collection of entities across vendors, consultants, thought leaders, and practitioners incentivized to increase influence on the world – information security has sanctified itself as a secular authority who can deem worthiness from on high and reward or punish according to behavior16.
Most security advice roughly goes, “if you’re interacting with a computer and what you’re doing feels convenient, you are actually doing something BAD.” We’re supposed to report when we’ve done something wrong, like a Catholic at confessional. We can gain an “exception” from the security authority like the medieval Catholic Church granting indulgences17 to partially reduce the punishment of the sin.
Naturally, only the ordained can read and interpret the sacred texts. The unwashed masses may only receive the good word. The divine wisdom is so complex, so arcane, far too difficult for anyone else to transform into action. Does this not imply that the non-security “normies” cannot be secure without the blessing of the security establishment? The human “users” must suffer in this life for their sins, for turning away from the path of security18.
In the eyes of the Cybersec Church, users are weak sheep who must be told what to do and guided with a strong hand in the ways of natural security law, lest they drift wayward into wickedness. We must practice chastity in all manners digital and resist the temptation of clicking on things unless we want the whole network to drown in depravity19.
For the Security Spirit is always watching. It knows when you allow incoming connections from cloud provider IPs even though attackers also use those IPs. It knows when you copy and paste something from Stack Overflow even though it could be backdoored. It knows when you don’t VPN on the hotel WiFi, where anyone, including a big, sexy scary APT could connect to it. Wicked, wicked user! A thousand years smoldering in hellfire and pestilence for your sins! Try clicking things now once the maggots have feasted upon your flesh!
We will return to security in the context of authority later, but now we must march onward to examine how securus, the adjective, evolved its noun form: securitas. In the Roman period, securitas specifically corresponded to intense emotions. And, it’s worth noting, the freedom from care represented by securitas does not require justification based on reality.
Securitas refers to a group of emotions (the things security and software “rationalists” alike pretend they don’t have) which relate to the absence of fear and include emotions like trust and confidence20. In fact, even the more modern notion of “job security” aligns to this older meaning; it is a feeling, specifically that you don’t have to worry about losing employment. Threats to it aren’t the point, the feeling is the point.
Now, my dear mortals, can we imagine a cybersecurity program designed to ensure the organization is fearless and free from care, an infosec program that is quiet, easy, and composed? Cybersecurity as a discipline would be concerned with ensuring the organization could remain cheerful, tranquil, and serene. Servers would frolic in a field like fecund fawns. Software engineers would release code with confidence, trusting the safety designed into their languages, tools, platforms, and environments. If employees felt fear, uncertainty, or doubt when using technical systems, the security program would be curious and design solutions to alleviate their concerns.
Imagine a cybersecurity program with the goal of relieving the rest of the organization from anxiety about security… cybersecurity that promotes convenience and puts in the hard work of crafting design-based mitigations. Status quo infosec – manifesting as SecObs, Security Theatre, etc. – seeks quite the opposite. Traditional cybersecurity programs openly admit aiming to increase anxiety among the rest of the organization to ensure they are vigilant to threats and always looking over their proverbial shoulders for potential peril. The security people decry convenience and shame users for seeking it while simultaneously indulging in it like Scrooge McDuck in his pool of gold by relying on enforcement, behavioral control, and blame as cheap “mitigations.”
I often read security advice or policies or other prescriptions and have the sense that the authors are trying very hard to pretend that local context is irrelevant and that generalized control is possible. Convenience is often framed as the enemy. The question is: convenience for whom?
Sure, convenience is clicking on every link or adding a third-party library without a second thought. But convenience is also requiring a security tool that you will never have to use, without performing any user research with the humans who will use it in their workflows. Convenience is tapping the 10^10 Security Commandments when someone makes a mistake and blaming them in front of Congress. Are we shocked that a framework of “convenience for me, but not for thee” doesn’t seem to produce the bundle of positive emotions securitas represented?
Are there words less associated with cybersecurity than “cheerful,” “bright,” “serene,” “composed,” “quiet,” or “easy”?21 The whole business seems antithetical to those traits. Traditional infosec is all cura and no se – better deemed “cybercurity” than cyber_se_curity: a discipline of increasing concern, thought, trouble, anxiety, and grief in the organization regarding “cyber” matters. Offensive security is especially nonsensical through this etymological lens because it then means “offensive tranquility.”
Or maybe it isn’t that strange. After all, don’t overpriced healing crystals and cyber wares have quite a bit in common?
The Multifaceted Meaning of &#34;Security&#34; as ‘Securitas’ in the Roman Era
While a sense of dignity is not the vibe most of us feel when clicking through mandatory cybersecurity awareness training courses, dignity and security were seen as closely coupled concepts in the Roman era.
Cicero, living during the last century B.C., noted that whoever has tranquillitate animi (a tranquil mind) and securitas will have dignitas (dignity)22. Cicero’s meaning of securitas here involves the absence of care or fear as well – and he saw this tranquility of mind as a prerequisite for an individual’s personal happiness and prestige in society. (We will talk about respect and dignity in the context of security more once we time travel to more modern definitions of the word).
A century later, Seneca framed securitas as a mindset23, a lovely extension of the existing notion of security as a bundle of emotions. Inspired by Socrates, Seneca viewed securitas – and the absence of the fear emotion24 – as how the wise can come closer to god because only a god has no reason to fear death25.
Securitas as this nearly-divine mindset quickly morphed into an association with divinity itself during the reign of Nero (the 1st century AD), specifically reflecting the divinity of the Emperor26 on coinage. It also started to reflect an environmental vibe rather than emotions or mindset; the surrounding world, the genesis of a subject’s freedom from care, could also be securitas, possessing a peaceful and tranquil atmosphere.
But Seneca also laid the groundwork for coupling the security of the state with security of individuals, in the specific sense of public securitas contributing to the capacity to live according to virtue. In this framing, securitas was explicitly based on mutual trust between the ruler and the ruled.
This reflects yet another semantic deviation with cybersecurity, which is generally mistrustful of any parties outside infosec, including those who should be allies (like software engineers). How else should we characterize the common refrain that any employee is a potential “insider threat”? And the cybersecurity status quo certainly does not seem to foster mutual trust by helping potential allies live well, offering minimal proof that they have the best interests of the collective in mind; if anything, they often prove the opposite.
In fact, Seneca highlights that it is a mistake to think that “a ruler is [only] safe when nothing is safe from the ruler.”27 The pox of “shadow” assets – whether shadow IT, shadow SaaS, shadow containers, shadow APIs – shows that the infosec establishment succumbs to this mistake readily. Indeed, cybersecurity’s general fetishization of control – vitalized by vendors – is a continuous realization of this mistake.
As anyone familiar with the motifs of history could predict, the subsequent rulers did not listen to Seneca’s admonition, which eventually led to an explicit rejection of hereditary rulership in the Nerva-Antonine dynasty. (Take that for what you will in the context of our current cybersecurity rulership). This led Tacitus to express the new securitas publica (public security) as the confidence of the citizens that the state will no longer threaten them28. That mutual trust was the core ingredient of securitas during that phase and reflected a check on authority.
It is interesting to ponder the notion of securitas publica in the organizational context; an organization’s citizens would be confident that the enforcers of security policy can no longer disrupt their way of life or erode their pursuit of fulfillment. How many cybersecurity programs might be characterized as such today? How many programs instead feel disruptive, corrosive to productivity, and fostering anything but a “peaceful and tranquil atmosphere”?
Securitas’ confident spirit evolved into the meaning of “assurance of faith” (as opposed to doubt) during Roman Antiquity, as espoused by Tertullian and, later, Saint Augustine. This “opposition to doubt” again is at odds with one of the letters of the acronym which defines traditional cybersecurity: F.U.D. (fear, uncertainty, and doubt). As we’ve seen time and again throughout this essay (and we’re only into the 2nd - 4th century A.D. here!), the earlier and variegated meanings of securitas fly in the face of traditional infosec. Traditional infosec wants to doubt everything. It takes pride in doubting everything. Assurance of faith is seen as a security sin.29
Speaking of faith, early Roman Antiquity also saw the creation of Securitas as a deity. While the mythos surrounding Securitas, the goddess, is lamentably shallow, it’s worth noting that she was the goddess of security and stability30. Given the evidence from the DORA research and metrics that stability and speed work in tandem and are complementary, this suggests that if we worship at the altar of security, then we must also worship at the altar of speed.
The Roman god of speed, Mercurius (aka Mercury), was also the god of shopkeepers, merchants, travelers, transporters of goods, thieves, and tricksters. It’s worth noting that the coupling of commerce and prosperity with security is quite common throughout its history (more on this once we get to the 16th century). Traditional cybersecurity, in contrast, often pretends like prosperity is not the primary goal or, worse, views prosperity as a foe to security.
“Why don’t companies prioritize security? Don’t they know THE THINGS can be HACKED??” Well, dear security people, what do you think allows the companies to pay for your six or seven figure salary? It is because they prioritize money that they can afford to spend it on security endeavors that do not remunerate them and often cannot even be tied to tangible success outcomes beyond “we saw these malware samples or known bad IPs this month” spoonfed from vendor dashboards in symbiotic self-perpetuation.
The infosec industry forgets that security, even in its more modern meaning, is not just about protecting against threats; it’s about protecting something against threats. In the business context, it’s about protecting prosperity against threats. Through this lens, is it not a victory if a security program waters the seeds of revenue growth? And is the security program not a tragic failure if it chokes and cages this material growth because of a “risk” that exists only as an incorporeal counterfactual?
Between the profligate spending on ineffectual security tools and the obstructionism imposed by security programs, it’s quite possible that the threat to enterprise prosperity by traditionalist cybersecurity rivals that posed by actual attackers.
This distinction is also emphasized in the term “national security,” even as we mean it today: national security is about defending against threats to what? Liberty, prosperity, the pursuit of happiness… and we rightly dislike security measures that get in the way of these goals (often labeling them as “Security Theater”31).
Thus, we must ask, cybersecurity defends against threats to what? Largely the same things, but in businessy and computery contexts. If liberty or prosperity or the pursuit of happiness is choked out by security measures, then security is the threat in itself and the subjects are left in need of security against security32. Indeed, this is where we find ourselves with cybersecurity today.
But we are not yet done with this era. A few centuries after the deification of securitas, its meaning as “carefree” was twisted by religious leaders into an undesirable form: the state of being careless, reckless, heedless, and negligent33.
This notion of security is perhaps closest to the status quo in infosec today, which is quite careless with human (user, developer, colleague) time and attention, reckless with organizational budget, and negligent of design-based security solutions that are more reliable than attempting to control human behavior. The cybersecurity industry is heedless with its FUD-fueled zealotry, fretting about irreleventia while pretending nothing can be done about the grey rhinos charging into our systems.
Securitas was also relevant in the context of “Roman security” and specifically meant the Roman Empire’s peaceful and orderly domination of the world. Would we characterize traditional infosec programs as peaceful and orderly today? Even diehard zealots of the cybersecurity status quo readily admit that much of infosec in practice is firefighting and disorder. A worthy question is: who benefits from this paradigm?
Alas, the Roman Empire declined, as did securitas, whose meanings were largely stolen by the word certitudo. Thus, we must go to the provincial stables of the Middle Ages to continue our semantic safari. The two meanings of securitas not consumed by certitudo included pax (peace) and religious indifference.
The latter meaning persisted (albeit without nearly as much popularity as before) through to Martin Luther in the 16th century, who labeled “die Sicheren” as the people he was fighting against – people who did not truly trust the Holy Spirit and substituted religious rituals and conspicuous, performative acts for true faith. In his time, spiritual unity was preserved “through coercion and violence… dissent from orthodoxy was outlawed, heresy was rooted out and punished by fire and sword.” Luther was excommunicated for his “errors” about the Holy Spirit, including the “error” of believing the Christian god wouldn’t want heretics burned alive.
In our era, the traditionalist Security People put quite a bit of trust in their folk wisdom and rituals, despite their unclear success. It is still counterculture to suggest that humans shouldn’t be punished for security “errors.”34 And does it not benefit the vendors and research analysts to continue spoon feeding this advice to security leaders?
Just as Martin Luther felt centuries past about religious belief, is it wrong to want to reconstruct our entire approach to cybersecurity? Just because power structures are in place, incumbents entrenched, money flowing, does not mean something new, bold, and based on real acts of security rather than displays of it – on outcomes vs. outputs – could not supplant the status quo. Fatalism is not true to our nature as humans and certainly not true to the spirit of the “security” concept as we have seen.
But there is more for us to see and for that, we must venture onward into the pre-Enlightenment period and beyond…
The Evolving Meaning of Security as ‘Securitas’ in the Early Modern Era
Centuries passed and the relevance of the word securitas faded. Thomas Hobbes, one of the founders of modern political philosophy in the 17th century, was really the hype man for securitas to keep it from dissolving into disuse35.
Hobbes depicts the goal of securitas as the genesis and maintenance of peace, which, as we’ve already discussed, is quite unlike the cybersecurity status quo. Securitas is cultivated through alliances to make it dangerous for the remaining “all” to attack. Samuel, baron von Pufendorf36, emphasized the need for allies with a less cynical angle, arguing that an individual human needs companions to aid them in order to realize securitas (which perhaps foreshadows the concept of “social security”).
Are cybersecurity professionals today known for gathering allies? Quite the opposite. For instance, the relationship between developers and security pros seems to only be getting worse37. Traditional infosec strategy does not enforce security policy through cooperation, but through coercion.
To keep a long journey into Hobbes’ rather paranoid – and exceptionally cynical – perspective short, he ultimately proposes that a sovereign should be the one to guarantee securitas by doling out punishments for violating agreements, which requires subjugation of the ruled by the ruler.
Punishing humans who step out of line and requiring obedience to their rules – for the ruled to subjugate their other wants as secondary to the needs of the sovereign… is this not the playbook of traditional cybersecurity? It is the easiest option to pursue because eliminating or reducing hazards by design requires far more effort than demanding obedience. And if there’s one thing Homo sapiens love above all else, it is cognitive efficiency.
It is quite interesting that securitas was used as imperial propaganda during the Roman era to insist that the state was necessary and by Hobbes to insist that the state must subjugate its citizens. Does this tell us something about status quo cybersecurity? Or should we instead deem it “security imperialism”?
Security, Welfare, Dignity, and the Early Modern Era
Around the same time Hobbes was slandering humanity’s nature and proposing the need for a strong-armed state (the 17th century), securitas also started to absorb a financial meaning: something pledged as a guarantee that an obligation would be fulfilled – that the creditor has no need to worry because something has been pledged against the debt.
In this colloquial meaning (which persisted for centuries), securitas is rooted in a feeling – that the lender doesn’t need to worry. And, similarly, we see a theme throughout the Enlightenment that the state should assure citizens that they do not have to fear violence, not just ensure that they are free from violence in their everyday lives. Basically, that the state has a duty to consider the feelings of citizens, not just protect them.
It is in this era and through the Industrial era that security starts to be seen as a human right, as an essential requirement for humans to enjoy all of the other rights. After all, if you’re the victim of violence (particularly a violent death) – or in a perpetual state of worry about it – it’s pretty hard to pursue liberty or prosperity.
Thus, over time, security evolved to mean a guarantee or assurance that certain things would be accessible to an entity – like “water security” reflecting the assurance that a human individual will have access to clean water on an ongoing basis38.
The temporal implication of this meaning is important: it is not just about having access to a thing (whether a physical good or an intangible value) now, but about the guarantee that you will have access to it in the future, too. Not just that you do not have to fear a violent death now, but that you do not have to fear a violent death in the multitude of possible futures on the horizon, either.
We can trace this notion through to the more recent “social security.” The term was coined on a whim because “pension” carried too much baggage to be palatable to a wide audience. So, they defined social security as a “type of security which would… promote the welfare of society as a whole.”39 (emphasis mine)
Thus, the purpose of security is to promote the welfare of a particular entity. Extending this, the purpose of cyber security is to promote the welfare of cyber things (i.e. all things digital). While that may sound silly, there’s something important here: promoting welfare is not just about stopping threats.
What else is embedded in this purpose of promoting welfare? As we explored, dignity was tightly coupled with security during the Roman period and this association resurged with the concept of “human security,” which arose from the rejection of Hobbesian state-centric security.
While the term’s precise meaning is still subject to ample debate40, a foundational facet of “human security” is respect: that a critical part of ensuring a human is secure is ensuring their humanity is respected. Because dehumanizing certain populations and stripping them of dignity is one of the ways authoritarianism cultivates power; it is how a society slips into fascism.
What, then, should we make of the fact that the infosec industry sneakily strips users – whether the accountant clicking on a link to wire money, the marketing professional who downloads a PDF, the developer who makes a mistake when writing code – of their dignity?
The disrespectful sneer is palpable in the designation of “human error” as the cause of incidents. Security awareness training requires users to remember dozens of rules that ignore the realities of their work on thing-clicking machines and implies that it will be their fault if something bad happens. There is no respect for their time, attention, intelligence, or autonomy.
To quote the legendary James Mickens, “This is uncivilized and I demand more from life.”
But imagine a world in which cybersecurity programs prioritized respect as a core value of security! Respect for users’ private data; respect for users’ time; respect for users’ cognitive and emotional energy; respect for users’ pursuit of their priorities; respect for the organization’s pursuit of its priorities as a collection of users serving other users.
In fact, the term “users” may even be part of the problem. Users are abstract, faceless, behind a screen; they lack intrinsic worth and must be “good for” something.41 It makes it easier to disrespect them and resent them for not supporting our own goals. It makes it easier to not see them as people, but as exploitable resources that either we control or attackers do. It’s perhaps harder to blame a sleep-deprived caretaker of a lover or child or parent who, just trying to do their job well enough to keep their health insurance, clicks on something designed to look urgent and important.
Blaming a “user” for being so careless as to click on an obfuscated link and enter in their VPN credentials on the malicious site makes it a more antiseptic affair. It makes us feel like it’s a more just world rather than a chaotic one – like the problem is a user stepping out of line rather than complexities conspiring towards compromise. This dehumanization makes it easier to absolve the ruler and deride the ruled – these “users” – who are simply resources towards our ends, ever holy, ever noble.
What &#34;Security&#34; Means in the Information Society
We end our journey in the modern era, the information society42. The best way to summarize the concept of security in modern times is that it’s controversial af and dependent on context43. But Wolfers’ two-part definition of “security” from 1962 is widely cited44:
In an objective sense, security measures the absence of threats to acquired values.
In a subjective sense, security reflects the absence of fear that such values will be attacked.
The term “values”, here, of course, is ambiguous and open-ended. But let’s think about what this means for the cybersecurity context.
A realist would say security is achieved when “the dangers posed by manifold threats, challenges, vulnerabilities and risks” in the digital realm are “avoided, prevented, managed, coped with, mitigated and adapted to” by individuals, groups, or organizations45.
A social constructivist would say security is achieved “once the perception and fears of security ‘threats’, ‘challenges’, ‘vulnerabilities’ and ‘risks’ are allayed and overcome.” That is, objective security is not enough; the subjective will always wield considerable influence in the cybersecurity context.
In my experience, tech bros really do not like the idea that emotions or subjectivity come into play in tech stuff at all. They tend to describe their emotions as “logic” and subjective experience as “facts.” We don’t have enough time to unpack all of that in this post (and if only more of them would go to therapy). But it’s a very real problem that traditional cybersecurity folk wisdom was so often woven by people who think the objective is all that matters.
What’s worse about the cybersecurity status quo is that the subjective is dismissed but the objective isn’t even really measured. Again, objective security measures the absence of threats to acquired values. We do not have objective security in traditional infosec and, by that definition, it’s not even really what’s being pursued. Even when fleshing out the realist interpretation of objective security, traditional cybersecurity mostly focuses on the “avoided” or “prevented” part rather than the managed, coped with, adapted to part46.
From the perspective of modern scholars, security is meant to lead to more goal-oriented behavior while insecurity leads to threat-oriented behavior. As anyone who’s walked the RSAC vendor hall knows all too well, basically everything in cybersecurity today is about THREATS. Everything is a potential THREAT: your API, your CI/CD pipelines, your laptop, your phone, your fridge, your colleagues, your loved ones, even your own BRAIN is a THREAT because what if you make a MISTAKE and become the very INSIDER THREAT you swore to destroy!?!?!
Everything about the infosec status quo today reflects threat-oriented behavior, therefore implying insecurity rather than security. Traditional cybersecurity isn’t about preserving and upholding values – like prosperity or productivity or an inclusive work environment. Traditional cybersecurity is about preventing and avoiding threats, aiming for the impossible standard of attacks never successfully happening.
The cybersecurity status quo forgets the whole point of stopping the threats is to preserve certain values.
This fetishization of threats and elimination of them as an aim in itself is how we end up with cybersecurity programs which cause so much grief and anxiety and friction for everyone else in the organization. If the infosec industry actually focused on preservation of values, then UX would probably be one of the most important skills in the discipline (but how would incumbent cybersecurity vendors milk that for cash?).
After all, what’s the point of protecting the cherished organizational value of productivity from potential attacks – which likely only happen sometimes rather than continuously, from an impact perspective – if you’re going to erode that value daily through security policies that seem divorced from real goals, constraints, and workflows?
What’s the point in protecting the organization against a potential financial loss due to attack when you’re not only spending its money on security (which could be spent elsewhere), but also slowing down its ability to grow revenue due to security procedures? For an organization with $100 million in revenue wanting to gain market share, shipping 20% fewer features per year due to friction created by the security program has more material impact short, medium, and long term than a ransomware operator demanding $1 million, $5 million, or even $10 million.
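To make that comparison concrete, here is a minimal back-of-envelope sketch; every figure and the assumption that feature throughput maps roughly linearly to revenue growth are mine, purely for illustration, not data from this post or any real organization:

    # Back-of-envelope sketch only; all numbers are illustrative assumptions.
    def cumulative_revenue(start: float, growth: float, years: int = 3) -> float:
        """Sum of annual revenue over `years`, compounding at `growth` per year."""
        total, revenue = 0.0, start
        for _ in range(years):
            revenue *= 1 + growth
            total += revenue
        return total

    START = 100_000_000                         # $100M in revenue today
    UNIMPEDED_GROWTH = 0.30                     # assumed growth target with no security friction
    FRICTION_GROWTH = UNIMPEDED_GROWTH * 0.80   # assume 20% fewer features => ~20% less growth

    foregone = cumulative_revenue(START, UNIMPEDED_GROWTH) - cumulative_revenue(START, FRICTION_GROWTH)
    print(f"Foregone revenue over 3 years: ${foregone:,.0f}")  # roughly $50M under these assumptions
    print("One-time ransom demand:        $10,000,000")

Even rounding generously in the ransom’s favor, the compounding drag of slower delivery dwarfs the one-time payment under these assumptions.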
Waldron’s quite recent definition of the word security summarizes this all nicely and is worth repeating here:
“… security now comprises protection against harm to one’s basic mode of life and economic values, as well as reasonable protection against fear and terror, and the presence of a positive assurance that these values will continue to be maintained in the future.”47
Cybersecurity as it stands today flunks this definition. It is impossible to provide assurance that basic and economic values will be maintained in the future if you do not know what they are – which the cybersecurity status quo does not know, because they do not care, because all of that is irrelevant to their noble need to sacrifice everyone’s time, energy, and money at the altar of the FUD gods to gain more budget, more headcount, more influence; they shroud this ritual in a lab coat of “rational” paranoia.
Before architecting a security program or allocating cybersecurity budget, we should understand the organization’s basic mode of life and economic values, including at the level of any teams who will be especially subject to security procedures (like software engineers). From there, we should aim to provide reasonable protection against fear and terror – that is, to provide subjective security, that ancient-school version of securitas which meant freedom from anxiety, fear, or care.
Our job as defenders should be to reduce the complexity of the security problem to such an extent that the rest of the organization is free from care about it (in fact, the systems theorist Niklas Luhmann argued that security efforts explicitly aim to reduce the complexity of the world48). And cybersecurity’s job should be to provide positive assurance that the organization’s values (like prosperity, productivity, inclusion, whatever) can be maintained going forward.
But all of the above requires user research and empathy and curiosity about things beyond cybersecurity’s viewing frustum. This modern definition of security means the organization must treat security as an interactive discipline, not a prescriptive one. The existence of a security program cannot be justified with “there is a risk here and it will never go away,”49 multiplied across all identified “risks,” which thereby implies a security organization that can only grow in scope and authority.
If those who provide security are the rulers and the users the ruled, what security really requires is the rulers respecting the ruled and the rulers earning the respect of the ruled rather than extracting it. This reflects a radical departure from traditional infosec and thus there is and will be resistance from the entrenched50.
Security, in practice, is supposed to reside at the beneficial balance between two evils: absolute fear and absolute security – and absolute security, per Kant, can only be found at the cemetery.51
Summarizing what we mean when we say “security”
Security is now one of those big words like justice and freedom and liberty which serve more as symbols with fuzzy flavors of feeling – that is, as concepts – rather than as words with straightforward definitions. As we’ve seen, asking the titular question, “When We Say Security, What Do We Mean?” is an exploratory exercise rather than an excavation. There is no ground truth we shall hit with enough sweat and shoveling.
We traversed a tapestry of meanings throughout this essay. We finish it with a rough sense of, like, “Security is about preserving chill vibes in the presence of threats to those vibes.”
But more usefully (yet less concretely), we have a better mouthfeel for what “security” means. The threats aren’t the point; the poignant part is the potential absence of a valuable good or state of being which we very much wish to preserve.
An absence of threats is only worthwhile if it guarantees the presence of serenity and prosperity. In the word “security” is also a promise – that you hold onto something of value and that this value might grow in the future.
Perhaps most of all, this semantic journey of ours today reveals how wayward traditional cybersecurity is from these notions; it resembles a nemesis of the security concept rather than its descendant. I hope you join me on the quest to finally realize the full potential of the security concept, to grow peaches rather than lemons52, to build a sweeter future for ourselves and all the other stakeholders in this strange system we call society.
If you liked this post and want to learn more about pursuing a better notion of cybersecurity, check out my new book Security Chaos Engineering: Sustaining Resilience in Software and Systems available at Amazon and other major retailers.
The parallels between black hole firewalls and the infosec kind must remain a discussion for another time (if time isn’t just an abstraction). ↩︎
I performed a secret, arcane ritual to win the favor of the eldritch ones towards my quest of making the word security mean something better. But the gods are capricious and so the ultimate fate of this endeavor remains unknown. ↩︎
Like any self-respecting former author of angsty teen poetry, I originally chose as my medium a “literary concept album” featuring six essays as “tracks,” all exploring the title’s provocative question: When We Say Security, What Do We Mean? But no one reads blog series so this is now a single post. ↩︎
As Hannah Arendt described of such words, “when we try to define them, they get slippery; when we talk about their meaning, nothing stays put anymore, everything begins to move.” (From The Life of the Mind) ↩︎
Arendt, H. (1981). The life of the mind: The groundbreaking investigation on how we think. HMH. (In the section “Thinking” / “The answer of Socrates”) ↩︎
“Surface” is a spatial metaphor. Yet again, there is much to unpack with the language we use to talk about cybersecurity but, to keep with the metaphor, time marches onward… ↩︎
“The truth is rather that I infect them also with the perplexity I feel myself.” – Socrates ↩︎
This may sound like a journey up one’s ass, but it’s better than being a cookie-cutter infosec ass, I assure you. ↩︎
A term of endearment in ancient Rome; its literal translation is “duckling.” ↩︎
Carl Meißner; Henry William Auden (1894) Latin Phrase-Book, London: Macmillan and Co. ↩︎
I think the closest we get to Platonic dialogues in modern times is Ao3 fanfiction #slowbuild #lightangst #friendship #humor #confessions #aroace #college #dom/sub #drama #alpha/beta/omegadynamics ↩︎
Rorty, Mary. “Lecture 10.1: Epicurus and Lucretius.” Stanford University. http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf ↩︎
Infosec as an entity truly exhibits a weird form of masochism that honestly becomes slightly uncomfortable to contemplate if we start untangling all the evidence in support of it. ↩︎
I am tempted to delve into the psychological concept of security and insecurity but I fear its revelations – despite being aimed at infosec as collective – would be interpreted as personal attacks. I will leave this one morsel for us to digest: the APA defines insecurity as a feeling of inadequacy, a lack of self-confidence, an inability to cope combined with general uncertainty about one’s goals, abilities, or relationships with others. To what degree does this notion of psychological insecurity accurately characterize the traditional infosec industry – its folk wisdom, zeitgeist, program priorities, prescribed procedures, policies, and so forth? ↩︎
“Our time” here is referring to the time of Socrates (the inspiration for “Secrates”), which was in the 4th century B.C.E. Therefore, the rise of asphaleia meaning the stability of the city state was around the 5th century B.C.E. ↩︎
In return, these stringent practices reinforce the status quo and uphold organizational power structures, which suits leadership just fine (and, besides, how would we expect them to know how security programs could look outside of the infosec status quo?). ↩︎
You watch the church leaders exchange influence for money, but instead of imparting the power of the Holy Spirit it’s your unfortunately unscrupulous CISO pushing some dogshit into your stack because their buddy invested in the startup and they do this back and forth and then blame the engineers or users who don’t want to interact with the dogshit for why everything is failing because nothing is your fault when you have the authority of something sacred, whether the Holy Spirit or the Security Spirit. ↩︎
I do find it interesting that when CISOs do not disclose a breach, instead laundering it through a bug bounty program, that is being “strategic” and showing security leadership, but when a software engineering team doesn’t fix a security bug immediately – no matter how contrived the exploit scenario – then they lack integrity. ↩︎
Perhaps we should be grateful there aren’t LinkedIn posts like “here’s the best way to run your sin response team #securitas #ciso #inquisition #SinSecOps.” ↩︎
Wonderly, M. (2019). On the Affect of Security. Philosophical Topics, 47(2), 165-182. ↩︎
Intriguingly – and rather self-servingly, although I did not expect it to be so when delving into this thought exercise – the original meanings of securus and securitas align nicely with the goals of Software Resilience (aka Security Chaos Engineering (SCE)). Composure is something for which resilience strives through the practice of repeated experimentation. Resilience wants security to be quiet, it seeks to foster organizational confidence, to grant organizations the freedom to not fret about potential incidents because they feel so well-practiced through experimentation, strong feedback loops, and resilient design that they feel fearless about the inevitable. In fact, we explicitly encourage defenders to have fun with chaos experiments, getting infosec closer to that original connotation that security involves a feeling of being cheerful and bright. ↩︎
From De Officiis passage 69 (nice): “Vacandum autem omni est animi perturbatione, cum cupiditate et metu, tum etiam aegritudine et voluptate nimia et iracundia, ut tranquillitas animi et securitas adsit, quae affert cum constantiam, tum etiam dignitatem.” This translates roughly to: “But it is necessary to be freed from all disturbance of the mind, with desire and fear, and also from sickness and excessive pleasure and anger, so that there may be peace of mind and security, which brings with it constancy, as well as dignity.” ↩︎
Pop-stoicism seems to be trending among security leaders lately but we do not have time to unpack why this is so, nor its troubling implications. ↩︎
From De constantia sapientis: Nullius ergo mouebitur contumelia; omnes enim inter se differant, sapiens quidem pares illos ob aequalem stultitiam omnis putat. Nam si semel se demiserit eo ut aut iniuria moueatur aut contumelia, non poterit umquam esse securus; securitas autem proprium bonum sapientis est. This translates roughly to: “Therefore no one will be moved by insults; for although they all differ from one another, the wise indeed think that they are equal because of their equal stupidity. For if he has once humbled himself to the point of being moved either by injury or insult, he will never be able to be secure; but security is the proper good of the wise.” ↩︎
This is another case where Software Resilience (aka Security Chaos Engineering) aligns with the historic meaning of securitas better than traditional security. Resilience accepts that failure is inevitable but gains confidence from preparation. It trusts that all our preparation will help us respond gracefully to failure. ↩︎
History is not without its ironies; Nero adopted the mantle of securitas – the divinity imparted by a mindset of fearlessness of death – and then ultimately died by suicide. ↩︎
Schrimm-Heins, A. (1991). Gewissheit und Sicherheit: Geschichte und Bedeutungswandel der Begriffe certitudo und securitas (Teil I). Archiv für Begriffsgeschichte, 34, 123-213. ↩︎
Tacitus, The Agricola. ↩︎
Of course, with a Resilience strategy, we want to foster this sort of assurance through repeated experimentation – cultivating confidence through empirical evidence affirming or denying our hypotheses about the resilience of our systems. ↩︎
Upon learning this, I immediately updated my brain dictionary lookup to display Security as a gorgeous transbian goddess whose favorite language, naturally, is Rust. I am hoping for a crossover episode in which our representative enby god, Loki, woos her by donning thigh highs made of the tendons of her enemies. ↩︎
Levenson, E. (2014). The TSA Is in the Business of ‘Security Theater,’ Not Security. The Atlantic. ↩︎
Quis custodiet ipsos custodes? ↩︎
It was Pope Gregory I who served as hypeman for this interpretation and, yet again, the parallels between traditional infosec and the authoritarianism of the Catholic Church are… intriguing to say the least. ↩︎
It’s been fun watching the industry catch up to me. ~6 - 7 years ago when I was dropping spicy takes about how bullshit “gotchya” security tests are (along with a bunch of other behavioral science-informed takes), I got a ton of pushback and usually vitriol. BuT ReAL aTTaCkErS dOn’T CaRe AbOuT fEeLinGs. Many of those same people now launder those takes and pretend like they were always on board. There’s probably a post in itself about the adoption cycle of hot takes where, at the beginning, people bristle because it’s new and bold and different but eventually it’s accepted enough that it’s worth changing your beliefs and evangelizing it to look “thought leadering.” Hopefully one day I’ll be similarly vindicated with my (still wildly unpopular) take that “DevSecOps” is an unnecessary and harmful term. ↩︎
Hobbes, with the benefit of hindsight and historical documentation, viewed the Peloponnesian War as a civil war among the Greek people. It seems at the time it was not perceived that way by Athens, its allies, or its enemies. The Persians were the starkest “other” throughout much of ancient Greek history, but by the time of the Peloponnesian War, the Persian “threat” was more like a distant, hazy shadow. Thus, the “other” from Athens’ perspective was other city-states, including its own allies who they feared would betray them (which they did, although “betray” perhaps is not the best characterization of the affair). ↩︎
I promise I did not make this name up. ↩︎
Bridging the Developer and Security Divide, VMWare, Forrester Research (2021) ↩︎
UN-Water, 2013. Water security and the global water agenda—A UN-Water analytical brief. Hamilton: United Nations University. See also: https://www.unwater.org/publications/water-security-infographic/ ↩︎
Social Security: Origin of the Term at https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/ ↩︎
Christie, R., &amp; Amitav, A. (2008). Human security research: progress, limitations and new directions (pp. 11-08). Working Paper. Centre for Governance and International Affairs. http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf ↩︎
Heidegger, M. (1977). The question concerning technology. New York, 214. https://www2.hawaii.edu/~freeman/courses/phil394/The%20Question%20Concerning%20Technology.pdf ↩︎
It also brings us to one of my favorite book quotes: “In the information society, nobody thinks. We expected to banish paper, but we actually banished thought.” (said by Ian Malcolm in Jurassic Park by Michael Crichton). ↩︎
“Security is ambiguous and elastic in its meaning.” – Art, 1993 ↩︎
Wolfers, A. (1962). Discord and collaboration: essays on international politics. Baltimore: Johns Hopkins Press. ↩︎
Brauch, H. G. (2011). Concepts of security threats, challenges, vulnerabilities and risks. In Coping with global environmental change, disasters and security (pp. 61-106). Springer, Berlin, Heidelberg. https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf ↩︎
Yet again, this is a dynamic Software Resilience (aka Security Chaos Engineering (SCE)) is seeking to change. ↩︎
Waldron, J. (2006). Safety and security. Neb. L. Rev., 85, 454. ↩︎
Luhmann, N. (2018). Trust and power. John Wiley &amp; Sons. ↩︎
I have much, much more to say on this topic (inspired by this paper: Power, M. (2009). The risk management of nothing. Accounting, organizations and society, 34(6-7), 849-855.) ↩︎
Surprise, surprise, Software Resilience (aka Security Chaos Engineering (SCE)) is aligned with the vibe of earning respect. ↩︎
Arenas, J. F. M. (2008). From Homer to Hobbes and Beyond—Aspects of ’security’ in the European Tradition. In Globalization and environmental challenges (pp. 263-277). Springer, Berlin, Heidelberg. ↩︎
Shortridge, Kelly (2022). From Lemons to Peaches: Improving Security ROI through Security Chaos Engineering. IEEE SecDev 2022. https://arxiv.org/abs/2307.03796 ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/sec-etymology/what-does-security-mean-hero.png" alt="A hero image with the blog title, When we say security, what do we mean? It is a philosophical essay by Kelly Shortridge. The image to the right of the text is an AI-generated image initialed by yours truly. It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<h2 id="introduction">Introduction</h2>
<p>We say the word &ldquo;security&rdquo; a lot in tech. Whether we refer to &ldquo;cybersecurity&rdquo; or &ldquo;information security&rdquo; (or &ldquo;infosec&rdquo;), how often do we pause to question what we mean when we say the word <em>security</em> itself?</p>
<p>In general, arguing that words should mean things in infosec is like fighting against the gravity of a supermassive black hole<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Unfortunately for me, I will die on this hill until my inevitable spaghettification. From what I understand from cybersecurity journalism, this persistence makes me a &ldquo;sophisticated&rdquo; attacker, perhaps even one of those fabled advanced persistent threats (APTs). My cyberweapon of choice is words. My action on target? Destabilizing the industry’s dereliction of meaning. My APT group name will be SOCRATIC KITTEN.</p>
<p>So, true to the spirit of being advanced and persistent and threatening, I write this to challenge and, with any luck<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, overthrow incumbent notions of the <em>security</em> concept while nurturing new notions that inspire and uplift<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
<p>The security concept, like other words-as-concepts (happiness, courage, justice) is an idea, per Plato, perceivable only by the eyes of the mind<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>. To borrow from Hannah Arendt<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>, the word &ldquo;security&rdquo; is &ldquo;something like a frozen thought that thinking must unfreeze whenever it wants to find out the original meaning.&rdquo; To thaw it, we must meditate upon it, seep ourselves in it, let the currents of concept cleanse our preconceptions.</p>
<p>&ldquo;But Kelly,&rdquo; you sigh, &ldquo;shouldn&rsquo;t you be posting about something practical?&rdquo; Like, what, the growing ATTACK SURFACE due to HUMAN ERROR that will surely be solved with SECURITY AWARENESS? Much like the spherical cow, such metaphors<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup> simplify our understanding of the world so it feels comforting and calculable as escapism from the real world, which is very messy. The security people, in no shortage of irony, choose convenience in this trade-off. Humans will interact with systems and do very natural human things and the security people will clutch their pearls and gasp, &ldquo;But why would they do such a thing?!&rdquo; Maybe spherical SBOMs will solve security so we can all finally stop being aware of it.</p>
<p>Because requiring awareness is part of the problem. We have a word for when humans are excellent at being aware of threats in their environment: <a href="https://en.wikipedia.org/wiki/Hypervigilance">hypervigilance</a>. It is not good when humans are hypervigilant! It means the human is likely traumatized and their nervous system is dysregulated. Unfortunately, the security people want us all to be hypervigilant because nothing says accountability for a problem like telling the potential victims they&rsquo;re responsible for it.</p>
<p>Imagine, if you will, a parallel SKYSECURITY AWARENESS MONTH where we tell people to be careful whenever walking outside because a piano might fall on their head or that they should be scrutinizing the clouds &ndash; their trajectory, color, fullness, and other patterns &ndash; to figure out whether they are safe or not. In real life, we have meteorologists and can open an app that tells us whether we probably need an umbrella or sunglasses or to just stay inside to stay safe. Sometimes people will still go outside because that hurricane isn&rsquo;t going to Instagram itself but there have been and will always be fools and our strategy in a problem domain should not be focused on the minority of fools who will not be persuaded by facts or logic and will gladly jump over guardrails while wondering why they were there in the first place.</p>
<p>My point is that the security people have collapsed upon a meaning of &ldquo;security&rdquo; as a concept that is not serving them or users or organizations or society particularly well. The cybersecurity industry&rsquo;s meaning of &ldquo;security&rdquo; is a distortion, in many cases the exact <em>opposite</em>, of what the word means and has meant throughout its long, storied history. That history has much to teach us, which is why it is, in fact, entirely practical and pertinent to explore it on our upcoming semantic safari.</p>
<p>Thus, this essay will illuminate why our current notion of (cyber)security, the concept, is worth re-evaluating through the lens of what &ldquo;security&rdquo; has meant over time. True to Socratic tradition<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>, this essay will not provide a definitive answer. Our path will be circuitous, but we will perhaps absorb a superior sense of what this ineffable concept of “security” is through ouroboric osmosis by the end of our journey<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup>.</p>
<p>We may not produce a definition of &ldquo;security&rdquo; by the end (although we will try) but, having pondered the meaning of &ldquo;security,&rdquo; we might be able to make our own attempts at it better.</p>
<p>To begin our journey, we must time travel.</p>
<h2 id="the-curious-nature-of-securus">The Curious Nature of Securus</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-roman-mosaic.png" alt="A mosaic of a Roman goddess gazing at a giant key."></p>
<p>It’s a few hundred years before the common era in Rome. You’re chilling in a thermae with your bae admiring the intricate stone mosaic of a rather fetching deity beneath your feet as you feel your pores cleansing in the luscious steam.</p>
<p>Your beloved <em>anaticula</em><sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup> looks at you and smiles, “If only all our days together could be <em>securus</em> like this,” they say. You smile back and nod in blissful agreement, watching them rest their eyes with a satisfied sigh.</p>
<p>For the <em>securus</em> life is one without care. <em>Securus</em> starts with <em>sē</em>, the Latin prefix for “without,” which combines with <em>cūra</em>, the noun for care, concern, thought, trouble, solicitude, anxiety, grief, and sorrow.</p>
<p>Hence, to be <em>securus</em> is to enjoy peace of mind (<em>securo animo esse</em><sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup>). Securus is the absence of concern, the absence of a troubled mind. The opposite of securus was <a href="http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0059:entry=sollicitus"><em>sollicitus</em></a> — the restlessness arising from being filled with fear, apprehension, anxiety, alarm.</p>
<p>Hurtling forward in time to 2023 CE, we can observe that the typical traditionalist infosec program is closer to <em>sollicitus</em> than <em>securus</em>. Fear, uncertainty, and doubt (FUD) pervade – and <a href="https://www.csoonline.com/article/3302849/why-security-pros-are-addicted-to-fud-and-what-you-can-do-about-it.html">perhaps define</a> – the industry. FUD are the foundational emotions industry vendors, journalists, and less scrupulous thought leaders exploit for fortune and fame.</p>
<p>Our world is increasingly software and internet but there is a powerful industry that tells us that we should be scared to use software and internet, that it is <em>desirable</em> for us to be uncertain at all times when using software and internet, that we should doubt our perceptions at all times because <em>what if</em> the 13,371,337th link you click or line of code you write in your lifetime causes CYBERGEDDON. All of this anti-<em>securus</em> rhetoric is supposedly in our best interests.</p>
<p>FUD pervades cybersecurity to such an extent that we take for granted that these emotions need not define the security we seek to cultivate. Could FUD not instead be seen as the explicit <em>enemy</em> of security?</p>
<p>Thus, a worthy thought experiment is: how might cybersecurity programs look if they actually pursued the state of being <em>securus</em>? How would an information security program designed to ensure the organization is “without care or concern or anxiety” appear? How would cybersecurity strategy differ if the goal outcome was for users &ndash; whether end consumers, software engineers, or employees &ndash; to feel care-free and untroubled?</p>
<p>We will explore those questions as we continue our journey. Our next stop is even further back in history, inspecting the inspiration for the word <em>securus</em> in Ancient Greece.</p>
<h2 id="a-platonic-dialogue">A Platonic Dialogue</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-socrates-cat.png" alt="A painting of a cat in Socratic robes in an ancient greek temple."></p>
<p><strong>Persons of the Dialogue<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup></strong></p>
<p>SECRATES</p>
<p>THEOXORUS</p>
<hr>
<p>Theoxorus: I am feeling secure in my knowledge today, Secrates, yet have no doubt you shall shortly annoy me with indistinct inquiries into something simple that we should enjoy simply for simplicity’s sake.</p>
<p>Secrates: Secure! O, my dear Theoxorus, do my ears truly witness you bringing a conversation to me on a shining platter?</p>
<p>Theoxorus: What do you mean, Secrates?</p>
<p>Secrates: I mean that you used the word “secure.” And what does “secure” mean? We should always be on the lookout for such answerless words. I know you do not wish to examine it, but now we must!</p>
<p>Theoxorus: Secrates, no –</p>
<p>Secrates: Do you truly object? Is it not your own lips which revealed the relevance of this word to your very life?</p>
<p>Theoxorus: I… I cannot object.</p>
<p>Secrates: And neither can I. We must proceed and, another time, we can discuss what your hesitance for exploration means. I observe that you grow weary of dissecting words and essences of late. And yet with what else do you fill your days? Is it not your own shame you are unwilling to confront? Is it not this regular discourse that exposes the inner self you wish to –</p>
<p>Theoxorus: Secrates! Let us examine “secure” now and my own soul later.</p>
<p>Secrates: As you wish, my dear Theoxorus, I will now proceed with this inquiry, for which I owe you many thanks. There are two words from our current civilization that serve as the inspiration for securus: <em>ataraksia</em> and <em>asphaleia</em>. Let us proceed first with <em>ataraksia</em>.</p>
<p>Theoxorus: What tongue is the word “securus”?</p>
<p>Secrates: Latin.</p>
<p>Theoxorus: “Latin”?</p>
<p>Secrates: Yes. I have seen into the future during a bathing ritual.</p>
<p>Theoxorus: Ah, Secrates, you indulge in the pleasures of the oracles!</p>
<p>Secrates: Believe what you wish. But let us now proceed, as you insisted. What defines ataraksia but what it is <em>not</em>? It is the negation of <em>taraksia</em>, from <em>tarassein</em>, which means to trouble the mind, to agitate, to disturb, to stir.</p>
<p>Theoxorus: Just as your incessant inquiries do to me.</p>
<p>Secrates: Precisely. And if ataraksia is the negation of these verbs, may it not be said to reflect calmness, equanimity, tranquility? It is, as Pyrrho said, a form of freedom from distress and concern, and as the public says in their less formal dialogue, it is the mental state soldiers must cultivate before battle. Is it a goal, a kind of goodness, that a person must pursue in their lives? The Pyrrhonists, Epicureans, and Stoics would agree with this, each for different reasons reflecting their different philosophical foundations.</p>
<p>Theoxorus: What do you think, Secrates?</p>
<p>Secrates: I know nothing, as you know well, Theoxorus. What matters for our conversation is the essence of ataraksia: a freedom from disturbances, especially of the mental variety. And, then, as I have seen in my bathtub in a very distant future, what matters is that the verbs ataraksia is meant to <em>negate</em> – to disturb, to agitate, to trouble, to stir – are the verbs most associated with traditional cybersecurity. Does this not suggest security then means its very opposite?</p>
<p>Theoxorus: To be sure.</p>
<p>Secrates: And does this not trouble the mind in itself?</p>
<p>Theoxorus: Certainly. But how can you know such contradiction abounds?</p>
<p>Secrates: This future world seems designed by contradiction. Their “security awareness training” exercises, such as those meant to phish humans as one lures a fish with a decoy worm, have the explicit goal of “troubling the mind” to keep persons vigilant for danger. In this future, application security tools are infamous for how they disturb software development and delivery practices – and does that not trouble the minds of software engineers? The list of security rules and policies are unending, often arbitrary – and have they not found a most effective means to agitate the subjects under their dominion?</p>
<p>Theoxorus: They have.</p>
<p>Secrates: And do we believe that such activities result in greater defenses?</p>
<p>Theoxorus: Certainly not, unless one believes that defense is impossible through design. This reminds me of our prior dialogue on beauty, Secrates, as what you describe of this “infosec industry” must make it beauty’s enemy.</p>
<p>Secrates: Are you surprised, Theoxorus, that infosec makes enemies when its goal is to disrupt tranquility? And how could cybersecurity achieve beauty when it sees ugliness and danger in all things outside itself?</p>
<p>Theoxorus: Of course. But surely some interpretation by other schools of thought justifies this perversion?</p>
<p>Secrates: They will not. Ataraksia is seen as a strict requirement to attain the true, full happiness referred to as <em>eudaimonia</em>. It may surprise you, my dear Theoxorus, that the word ataraksia is associated with Epicurean philosophy.</p>
<p>Theoxorus: But calmness seems harmonious with Epicureanism.</p>
<p>Secrates: Did you wish to ask me a question, my friend?</p>
<p>Theoxorus: Your social skills are as crude as unfired amphorae, Secrates. So, then, what is shocking about this?</p>
<p>Secrates: It is shocking because it is hard to imagine a philosophy more opposed to cybersecurity than Epicureanism, which argues that the goal of a sentient life form is to maximize pleasure and minimize pain<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>. Epicurus is specific in defining pleasure as the absence of pain, and therefore “ethical hedonism” is the pursuit of avoiding pain, including that which imparts pleasure near-term but pain longer-term. Without distracting ourselves by examining Epicureanism in more detail, we can say that the goal they espouse is to foster a life of <em>tranquility</em>. Does this cybersecurity community foster a life of such tranquility?</p>
<p>Theoxorus: They do not.</p>
<p>Secrates: I agree, my friend. Cybersecurity is not known for avoiding pain, regardless of temporal outlook. The cybersecurity community inflicts pain on others, whether by stoking fear or by making lives harder. Is it not fair to argue that infosec even inflicts pain on itself?<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup> Is it not cruel to cultivate obsession of vulnerabilities that kindle fear, uncertainty, and doubt when your stated aim is to eliminate them? Do we believe this fetishization of vulnerabilities and lascivious focus on blaming what they call “human error” can be called “ethical hedonism”? Or is it a societal mechanism to stifle introspection and to instead reenact shame? I regret that these questions reflect a topic for another time in the realm of psychology<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>, which has yet to be invented.</p>
<p>Theoxorus: You tease me, Secrates.</p>
<p>Secrates: Yes, but there is another thing, Theoxorus: what of <em>asphaleia</em>?</p>
<p>Theoxorus: You are unfailing in your pursuit, Secrates.</p>
<p>Secrates: Well, I suspect you might find its origin amusing. Asphaleia originates from wrestling and reflects the capacity to prevent being overthrown, being immovable and steadfast, like the throne of the gods, or like me in the presence of your lamentations and tantrums about our discourse. By roughly a century before our time<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup>, asphaleia also came to mean the stability of the city state, to prevent being overthrown, and, if I permit myself to indulge in speculation, it could be extended to describe the stability of an organization (a kind of social entity I beheld in my bath). And, as some great scientists <a href="https://itrevolution.com/accelerate-book/">will prove</a> thousands of years from our day, speed and stability of work harmonize and impart greater value together than apart. And, well, now I can put the matter as: is this infosec, that which slows down work, an enemy of asphaleia?</p>
<p>Theoxorus: Yes, certainly.</p>
<p>Secrates: Very good; and can you tell me how this might be despite asphaleia serving as the seed of the “security” concept’s own existence?</p>
<p>Theoxorus: I must confess, Secrates, that this “security” society of the future seems very lost.</p>
<p>Secrates: I dare say, my friend, that you spend too much time with me if you think it is an uncommon human desire to seek power and control, even at the expense of integrity. And can we truly argue that such desires are always conscious to the subject?</p>
<p>Theoxorus: Alas, they are not.</p>
<p>Secrates: If what you say is true, I ask you, then: what is this cybersecurity society <em>not</em> most of all?</p>
<p>When Secrates had asked his question, for a considerable time there was silence; Theoxorus furrowed his brow while meditating on this question; only Secrates made a sound when softly blowing on the delicate seedheads of a dandelion.</p>
<p>Theoxorus: For what did you wish as you blew, Secrates?</p>
<p>Secrates looked up at Theoxorus and said, with a smile: For you to answer my question.</p>
<p>Theoxorus: I will tell you. My feeling is that this cybersecurity society lacks <em>curiosity</em>.</p>
<p>Secrates: Exactly. The traditional cybersecurity society is kin of <a href="https://ethics.org.au/ethics-explainer-panopticon-what-is-the-panopticon-effect/"><em>Argus Panoptes</em></a>; the role of enforcer grants them relevancy but not wisdom. Alas, my friend, if only they would follow the path of Daedalus instead. They feel ignorance as a sting and slight, as if ignorance was not the default condition of being alive! But there is more: how are they like <a href="https://www.gutenberg.org/files/1735/1735-h/1735-h.htm">the sophist</a>?</p>
<p>Theoxorus: They both are paid to question without truth as their aim.</p>
<p>Secrates: And do they not both gain fortunes from this?</p>
<p>Theoxorus: They do.</p>
<p>Secrates: And are they not both hunters after a living prey, servants of the powerful, cousins of opportunists exploiting emotion for control?</p>
<p>Theoxorus: They are.</p>
<p>Secrates: But where the cybersecurity society differs is they seek the impossible void – the <em>not-being</em> of weakness – and they are willing to destroy whatever <em>being</em> stands in the way of this pyrrhic quest.</p>
<h2 id="the-dawn-of-security-as-a-noun-securitas">The Dawn of &ldquo;Security&rdquo; as a Noun: Securitas</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-priest-praying.png" alt="A painting of a classical priest praying to a stained glass painting depicting a fancy padlock."></p>
<p>The ancient association of <em>securus</em> with Epicureanism was not to last. Epicureanism was outlawed once the Roman Empire entered its rebellious Christianity phase because, as a philosophy, it’s quite incompatible with the idea that souls must be “saved” and that God is relevant to everyday life (Epicurus literally argued that the gods do not gaf about human affairs and do not punish or reward human behavior).</p>
<p>Why is this relevant to cybersecurity? Because – as a collection of entities across vendors, consultants, thought leaders, and practitioners incentivized to increase influence on the world – information security has sanctified itself as a secular authority who can deem worthiness from on high and reward or punish according to behavior<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup>.</p>
<p>Most security advice roughly goes, “if you’re interacting with a computer and what you’re doing feels convenient, you are actually doing something BAD.” We’re supposed to report when we’ve done something wrong, like a Catholic at confessional. We can gain an “exception” from the security authority like the medieval Catholic Church granting indulgences<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup> to partially reduce the punishment of the sin.</p>
<p>Naturally, only the ordained can read and interpret the sacred texts. The unwashed masses may only <em>receive</em> the good word. The divine wisdom is so complex, so arcane, far too difficult for anyone else to transform into action. Does this not imply that the non-security “normies” cannot be secure without the blessing of the security establishment? The human “users” must suffer in this life for their sins, for turning away from the path of security<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup>.</p>
<p>In the eyes of the Cybersec Church, users are weak sheep who must be told what to do and guided with a strong hand in the ways of natural security law, lest they drift wayward into wickedness. We must practice chastity in all manners digital and resist the temptation of clicking on things unless we want the whole network to drown in depravity<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup>.</p>
<p>For the Security Spirit is always watching. It knows when you allow incoming connections from cloud provider IPs even though attackers <em>also</em> use those IPs. It knows when you copy and paste something from Stack Overflow even though it could be <em>backdoored</em>. It knows when you don&rsquo;t VPN on the hotel WiFi, where <em>anyone</em>, including a big, <del>sexy</del> scary APT could connect to it. Wicked, wicked user! A thousand years smoldering in hellfire and pestilence for your sins! Try clicking things now once the maggots have feasted upon your flesh!</p>
<p><img src="/blog/img/sec-etymology/what-is-security-tranquil-servers.png" alt="A pair of servers frolicking in a field of flowers."></p>
<p>We will return to security in the context of authority later, but now we must march onward to examine how <em>securus</em>, the adjective, evolved its noun form: <em>securitas</em>. In the Roman period, <em>securitas</em> specifically corresponded to intense emotions. And, it’s worth noting, the freedom from care represented by securitas does not require justification based on reality.</p>
<p>Securitas refers to a group of <strong>emotions</strong> (the things security and software “rationalists” alike pretend they don’t have) which relate to the absence of fear and include emotions like trust and confidence<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>. In fact, even the more modern notion of “job security” aligns to this older meaning; it is a <em>feeling</em>, specifically that you don’t have to worry about losing employment. Threats to it aren’t the point, the <em>feeling</em> is the point.</p>
<p>Now, my dear mortals, can we imagine a cybersecurity program designed to ensure the organization is fearless and free from care, an infosec program that is quiet, easy, and composed? Cybersecurity as a discipline would be concerned with ensuring the organization could remain cheerful, tranquil, and serene. Servers would frolic in a field like fecund fawns. Software engineers would release code with confidence, trusting the safety designed into their languages, tools, platforms, and environments. If employees felt fear, uncertainty, or doubt when using technical systems, the security program would be <em>curious</em> and design solutions to alleviate their concerns.</p>
<p>Imagine a cybersecurity program with the goal of relieving the rest of the organization from anxiety about security&hellip; cybersecurity that <em>promotes</em> convenience and puts in the hard work of crafting design-based mitigations. Status quo infosec – manifesting as <a href="https://swagitda.com/blog/posts/the-security-obstructionism-secobs-market/">SecObs</a>, <a href="https://www.youtube.com/watch?v=kiunphALNKw">Security Theatre</a>, etc. – seeks quite the opposite. Traditional cybersecurity programs openly admit aiming to <em>increase</em> anxiety among the rest of the organization to ensure they are vigilant to threats and always looking over their proverbial shoulders for potential peril. The security people decry convenience and shame users for seeking it while simultaneously indulging in it like Scrooge McDuck in his pool of gold by relying on enforcement, behavioral control, and blame as cheap &ldquo;mitigations.&rdquo;</p>
<p>I often read security advice or policies or other prescriptions and have the sense that the authors are trying very hard to pretend that local context is irrelevant and that generalized control is possible. Convenience is often framed as the enemy. The question is: convenience for whom?</p>
<p>Sure, convenience is clicking on every link or adding a third-party library without a second thought. But convenience is also requiring a security tool that you will never have to use, without performing any user research with the humans who <em>will</em> use it in their workflows. Convenience is tapping the 10^10 Security Commandments when someone makes a mistake and blaming them in front of Congress. Are we shocked that a framework of &ldquo;convenience for me, but not for thee&rdquo; doesn&rsquo;t seem to produce the bundle of positive emotions <em>securitas</em> represented?</p>
<p>Are there words <em>less</em> associated with cybersecurity than “cheerful,” “bright,” “serene,” “composed,” “quiet,” or “easy”?<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup> The whole business seems antithetical to those traits. Traditional infosec is all <em>cura</em> and no <em>se</em> – better deemed “cybercurity” than cyber<em>se</em>curity: a discipline of <em>increasing</em> concern, thought, trouble, anxiety, and grief in the organization regarding “cyber” matters. Offensive security is especially nonsensical through this etymological lens because it then means “offensive tranquility.”</p>
<p>Or maybe it isn&rsquo;t that strange. After all, don&rsquo;t overpriced healing crystals and cyber wares have quite a bit in common?</p>
<h2 id="the-multifaceted-meaning-of-security-as-securitas-in-the-roman-era">The Multifaceted Meaning of &quot;Security&quot; as &lsquo;Securitas&rsquo; in the Roman Era</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-goddess-with-laptop.png" alt="A marble statue of a goddess uses a laptop. She has a spear on her back and looks erudite and divine."></p>
<p>While a sense of dignity is not the vibe most of us feel when clicking through mandatory cybersecurity awareness training courses, dignity and security were seen as closely coupled concepts in the Roman era.</p>
<p><a href="https://plato.stanford.edu/entries/cicero/">Cicero</a>, living during the last century B.C., noted that whomever has <em>tranquillitate animi</em> (a tranquil mind) and <em>securitas</em> will have <em>dignitas</em> (dignity)<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup>. Cicero’s meaning of <em>securitas</em> here involves the absence of care or fear as well – and he saw this tranquility of mind as a prerequisite for an individual’s personal happiness and prestige in society. (We will talk about respect and dignity in the context of security more once we time travel to more modern definitions of the word).</p>
<p>A century later, <a href="https://plato.stanford.edu/entries/seneca/">Seneca</a> framed <em>securitas</em> as a mindset<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup>, a lovely extension of the existing notion of security as a bundle of emotions. Inspired by Socrates, Seneca viewed <em>securitas</em> – and the absence of the fear emotion<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup> – as how the wise can come closer to god because only a god has no reason to fear death<sup id="fnref:25"><a href="#fn:25" class="footnote-ref" role="doc-noteref">25</a></sup>.</p>
<p>Securitas as this nearly-divine mindset quickly morphed into an association with divinity itself during the reign of Nero (the 1st century AD), specifically reflecting the divinity of the Emperor<sup id="fnref:26"><a href="#fn:26" class="footnote-ref" role="doc-noteref">26</a></sup> <a href="https://en.numista.com/catalogue/pieces246399.html">on coinage</a>. It also started to reflect an environmental vibe rather than emotions or mindset; the surrounding world, the genesis of a subject’s freedom from care, could also be <em>securitas</em>, possessing a peaceful and tranquil atmosphere.</p>
<p>But Seneca also laid the groundwork for coupling the security of the state with security of individuals, in the specific sense of public securitas contributing to the capacity to live according to virtue. In this framing, securitas was explicitly based on mutual trust between the ruler and the ruled.</p>
<p>This reflects yet another semantic deviation with cybersecurity, which is generally mistrustful of any parties outside infosec, including those who should be allies (like software engineers). How else should we characterize the common refrain that any employee is a potential &ldquo;insider threat&rdquo;? And the cybersecurity status quo certainly does not seem to foster mutual trust by helping potential allies live well, offering minimal proof that they have the best interests of the collective in mind; if anything, they often prove the opposite.</p>
<p>In fact, Seneca highlights that it is a mistake to think that “a ruler is [only] safe when nothing is safe from the ruler.”<sup id="fnref:27"><a href="#fn:27" class="footnote-ref" role="doc-noteref">27</a></sup> The pox of “shadow” assets – whether shadow IT, shadow SaaS, shadow containers, shadow APIs – shows that the infosec establishment succumbs to this mistake readily. In fact, cybersecurity&rsquo;s general fetishization of control – vitalized by vendors – is a continuous realization of this mistake.</p>
<p>As anyone familiar with the motifs of history could predict, the subsequent rulers did not listen to Seneca’s admonition, which eventually led to an explicit rejection of hereditary rulership in the Nerva-Antonine dynasty. (Take that for what you will in the context of our current cybersecurity rulership). This led <a href="https://en.wikipedia.org/wiki/Tacitus">Tacitus</a> to express the new <em>securitas publica</em> (public security) as the confidence of the citizens that the state will no longer threaten them<sup id="fnref:28"><a href="#fn:28" class="footnote-ref" role="doc-noteref">28</a></sup>. That mutual trust was the core ingredient of <em>securitas</em> during that phase and reflected a check on authority.</p>
<p>It is interesting to ponder the notion of <em>securitas publica</em> in the organizational context; an organization&rsquo;s citizens would be confident that the enforcers of security policy can no longer disrupt their way of life or erode their pursuit of fulfillment. How many cybersecurity programs might be characterized as such today? How many programs instead feel disruptive, corrosive to productivity, and fostering anything but a &ldquo;peaceful and tranquil atmosphere&rdquo;?</p>
<p>Securitas&rsquo; confident spirit evolved into the meaning of “assurance of faith” (as opposed to doubt) during Roman Antiquity, as espoused by Tertullian and, later, Saint Augustine. This “opposition to doubt” again is at odds with one of the letters of the acronym which defines traditional cybersecurity: F.U.D. (fear, uncertainty, and doubt). As we’ve seen time and again throughout this essay (and we’re only into the 2nd - 4th century A.D. here!), the earlier and variegated meanings of <em>securitas</em> fly in the face of traditional infosec. Traditional infosec wants to doubt everything. It takes <em>pride</em> in doubting everything. Assurance of faith is seen as a security sin.<sup id="fnref:29"><a href="#fn:29" class="footnote-ref" role="doc-noteref">29</a></sup></p>
<p>Speaking of faith, early Roman Antiquity also saw the creation of Securitas as a deity. While the mythos surrounding Securitas, the goddess, is lamentably shallow, it’s worth noting that she was the goddess of security <em>and</em> stability<sup id="fnref:30"><a href="#fn:30" class="footnote-ref" role="doc-noteref">30</a></sup>. Given the evidence from <a href="https://services.google.com/fh/files/misc/state-of-devops-2019.pdf">the DORA research</a> and metrics that stability and speed work in tandem and are complementary, this suggests that if we worship at the altar of security, then we must also worship at the altar of speed.</p>
<p>The Roman god of speed, <a href="https://www.britannica.com/topic/Mercury-Roman-god">Mercurius</a> (aka Mercury), was also the god of shopkeepers, merchants, travelers, transporters of goods, thieves, and tricksters. It&rsquo;s worth noting that the coupling of commerce and prosperity with security is quite common throughout its history (more on this once we get to the 16th century). Traditional cybersecurity, in contrast, often pretends like prosperity is not the primary goal or, worse, views prosperity as a foe to security.</p>
<p>&ldquo;Why don&rsquo;t companies prioritize security? Don&rsquo;t they know THE THINGS can be HACKED??&rdquo; Well, dear security people, what do you think allows the companies to pay for your six or seven figure salary? It is because they prioritize money that they can afford to spend it on security endeavors that do not remunerate them and often cannot even be tied to tangible success outcomes beyond &ldquo;we saw these malware samples or known bad IPs this month&rdquo; spoonfed from vendor dashboards in symbiotic self-perpetuation.</p>
<p>The infosec industry forgets that security, even in its more modern meaning, is not just about protecting against threats; it’s about protecting against threats <em>to something</em>. In the business context, it’s about protecting against threats <em>to prosperity</em>. Through this lens, is it not a victory if a security program waters the seeds of revenue growth? And is the security program not a tragic failure if it chokes and cages this material growth because of a “risk” that exists only as an incorporeal counterfactual?</p>
<p>Between the profligate spending on ineffectual security tools and the obstructionism imposed by security programs, it’s quite possible that the threat to enterprise prosperity by traditionalist cybersecurity rivals that posed by actual attackers.</p>
<p>This distinction is also emphasized in the term “national security,” even as we mean it today: national security is about defending against threats <em>to what</em>? Liberty, prosperity, the pursuit of happiness… and we rightly dislike security measures that get in the way of these goals (often labeling them as “Security Theater”<sup id="fnref:31"><a href="#fn:31" class="footnote-ref" role="doc-noteref">31</a></sup>).</p>
<p>Thus, we must ask, cybersecurity defends against threats <em>to what</em>? Largely the same things, but in businessy and computery contexts. If liberty or prosperity or the pursuit of happiness is choked out by security measures, then security is the threat in itself and the subjects are left in need of security against security<sup id="fnref:32"><a href="#fn:32" class="footnote-ref" role="doc-noteref">32</a></sup>. Indeed, this is where we find ourselves with cybersecurity today.</p>
<p>But we are not yet done with this era. A few centuries after the deification of securitas, its meaning as “carefree” was twisted by religious leaders into an undesirable form: the state of being careless, reckless, heedless, and negligent<sup id="fnref:33"><a href="#fn:33" class="footnote-ref" role="doc-noteref">33</a></sup>.</p>
<p>This notion of security is perhaps closest to the status quo in infosec today, which is quite careless with human (user, developer, colleague) time and attention, reckless with organizational budget, and negligent of design-based security solutions that are more reliable than attempting to control human behavior. The cybersecurity industry is heedless with its FUD-fueled zealotry, fretting about irrelevantia while pretending nothing can be done about <a href="https://blogs.cfainstitute.org/investor/2017/10/23/do-gray-rhinos-pose-a-greater-threat-than-black-swans/">the grey rhinos</a> charging into our systems.</p>
<p><em>Securitas</em> was also relevant in the context of “Roman security” and specifically meant the Roman Empire’s peaceful and orderly domination of the world. Would we characterize traditional infosec programs as peaceful and orderly today? Even diehard zealots of the cybersecurity status quo readily admit that much of infosec in practice is firefighting and disorder. A worthy question is: who benefits from this paradigm?</p>
<p>Alas, the Roman Empire declined, as did <em>securitas</em>, whose meanings were largely stolen by the word <em>certitudo</em>. Thus, we must go to the provincial stables of the Middle Ages to continue our semantic safari. The two meanings of securitas not consumed by certitudo included <em>pax</em> (peace) and religious indifference.</p>
<p>The latter meaning persisted (albeit without nearly as much popularity as before) through to Martin Luther in the 16th century, who labeled “die Sicheren” as the people he was fighting against &ndash; people who did not truly trust the Holy Spirit and substituted true faith for religious rituals and conspicuous, performative acts. In his time, spiritual unity <a href="https://www.nationalgeographic.com/history/article/martin-luther-freedom-protestant-reformation-500">was preserved</a> &ldquo;through coercion and violence&hellip; dissent from orthodoxy was outlawed, heresy was rooted out and punished by fire and sword.&rdquo; Luther was excommunicated for his &ldquo;errors&rdquo; about the Holy Spirit, including the &ldquo;error&rdquo; of believing the Christian god wouldn&rsquo;t want heretics burned alive.</p>
<p>In our era, the traditionalist Security People put quite a bit of trust in their folk wisdom and rituals, despite their unclear success. It is still counterculture to suggest that humans shouldn&rsquo;t be punished for security &ldquo;errors.&rdquo;<sup id="fnref:34"><a href="#fn:34" class="footnote-ref" role="doc-noteref">34</a></sup> And does it not benefit the vendors and research analysts to continue spoon feeding this advice to security leaders?</p>
<p>Just as Martin Luther felt centuries past about religious belief, is it wrong to want to reconstruct our entire approach to cybersecurity? Just because power structures are in place, incumbents entrenched, money flowing, does not mean something new, bold, and based on real acts of security rather than displays of it &ndash; on outcomes vs. outputs &ndash; could not supplant the status quo. Fatalism is not true to our nature as humans and certainly not true to the spirit of the &ldquo;security&rdquo; concept as we have seen.</p>
<p>But there is more for us to see and for that, we must venture onward into the pre-Enlightenment period and beyond&hellip;</p>
<h2 id="the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era">The Evolving Meaning of Security as &lsquo;Securitas&rsquo; in the Early Modern Era</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-knight-servers.png" alt="A painting of a knight in shining armor holding freshly conquered servers. He is standing in pastel clouds."></p>
<p>Centuries passed and the relevance of the word <em>securitas</em> faded. Thomas Hobbes, one of the founders of modern political philosophy in the 17th century, was really the hype man for <em>securitas</em> to keep it from dissolving into disuse<sup id="fnref:35"><a href="#fn:35" class="footnote-ref" role="doc-noteref">35</a></sup>.</p>
<p>Hobbes depicts the goal of <em>securitas</em> as the genesis and maintenance of peace, which, as we’ve already discussed, is quite unlike the cybersecurity status quo. Securitas is cultivated through alliances to make it dangerous for the remaining “all” to attack. Samuel, baron von Pufendorf<sup id="fnref:36"><a href="#fn:36" class="footnote-ref" role="doc-noteref">36</a></sup> emphasized the need for allies with a less cynical angle, arguing that an individual human needs companions to aid them in order to realize <em>securitas</em> (which perhaps foreshadows the concept of “social security”).</p>
<p>Are cybersecurity professionals today known for gathering allies? Quite the opposite. For instance, the relationship between developers and security pros seems to only be getting worse<sup id="fnref:37"><a href="#fn:37" class="footnote-ref" role="doc-noteref">37</a></sup>. Traditional infosec strategy does not enforce security policy through cooperation, but through coercion.</p>
<p>To keep a long journey into Hobbes’ rather paranoid – and exceptionally cynical – perspective short, he ultimately proposes that a sovereign should be the one to guarantee <em>securitas</em> by doling out punishments for violating agreements, which requires subjugation of the ruled by the ruler.</p>
<p>Punishing humans who step out of line and requiring obedience to their rules – for the ruled to subjugate their other wants as secondary to the needs of the sovereign… is this not the playbook of traditional cybersecurity? It is the easiest option to pursue because eliminating or reducing hazards by design requires far more effort than demanding obedience. And if there’s one thing <em>Homo sapiens</em> love above all else, it is cognitive efficiency.</p>
<p>It is quite interesting that <em>securitas</em> was used as imperial propaganda during the Roman era to insist that the state was necessary and by Hobbes to insist that the state must subjugate its citizens. Does this tell us something about status quo cybersecurity? Or should we instead deem it &ldquo;security imperialism&rdquo;?</p>
<h2 id="security-welfare-dignity-and-the-early-modern-era">Security, Welfare, Dignity, and the Early Modern Era</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-melting-time.png" alt="A painting of a padlock clock that is exploding."></p>
<p>Around the same time Hobbes was slandering humanity’s nature and proposing the need for a strong-armed state (the 16th century), <em>securitas</em> also started to absorb a financial meaning: something pledged as a guarantee that an obligation would be fulfilled – that the debtor has no need to worry because something has been pledged against the debt.</p>
<p>In this colloquial meaning (which persisted for centuries), <em>securitas</em> is rooted in a feeling – that the lender doesn’t need to worry. And, similarly, we see a theme throughout the Enlightenment that the state should <em>assure</em> citizens that they do not have to fear violence, not just <em>ensure</em> that they are free from violence in their everyday lives. Basically, that the state has a duty to consider the feelings of citizens, not just protect them.</p>
<p>It is in this era and through the Industrial era that security starts to be seen as a human right, as an essential requirement for humans to enjoy all of the other rights. After all, if you’re the victim of violence (particularly a violent death) – or in a perpetual state of worry about it – it’s pretty hard to pursue liberty or prosperity.</p>
<p>Thus, over time, security evolved to mean a guarantee or assurance that certain things would be accessible to an entity – like “water security” reflecting the assurance that a human individual will have access to clean water on an ongoing basis<sup id="fnref:38"><a href="#fn:38" class="footnote-ref" role="doc-noteref">38</a></sup>.</p>
<p>The temporal implication of this meaning is important: it is not just about having access to a thing (whether a physical good or an intangible value) now, but about the guarantee that you will have access to it in the <em>future</em>, too. Not just that you do not have to fear a violent death <em>now</em>, but that you do not have to fear a violent death in the multitude of possible futures on the horizon, either.</p>
<p>We can trace this notion through to the more recent “social security.” The term was <a href="https://www.ssa.gov/policy/docs/ssb/v55n1/v55n1p63.pdf">coined on a whim</a> because “pension” carried too much baggage to be palatable to a wide audience. So, they defined social security as a “type of security which would… <strong>promote the welfare</strong> of society as a whole.”<sup id="fnref:39"><a href="#fn:39" class="footnote-ref" role="doc-noteref">39</a></sup> (emphasis mine)</p>
<p>Thus, the purpose of security is to promote the welfare of a particular entity. Extending this, the purpose of cyber security is to promote the welfare of cyber things (i.e. all things digital). While that may sound silly, there’s something important here: <strong>promoting welfare is not just about stopping threats</strong>.</p>
<p>What else is embedded in this purpose of promoting welfare? As we explored, dignity was tightly coupled with security during the Roman period, and this association resurged with the concept of &ldquo;human security,&rdquo; which arose from the rejection of Hobbesian state-centric security.</p>
<p>While the term’s precise meaning is still subject to ample debate<sup id="fnref:40"><a href="#fn:40" class="footnote-ref" role="doc-noteref">40</a></sup>, a foundational facet of &ldquo;human security&rdquo; is respect: that a critical part of ensuring a human is secure is ensuring their humanity is respected. Because dehumanizing certain populations and stripping them of dignity is <a href="https://www.npr.org/2011/03/29/134956180/criminals-see-their-victims-as-less-than-human">one of the ways</a> authoritarianism cultivates power; it is how a society slips into fascism.</p>
<p>What, then, should we make of the fact that the infosec industry sneakily strips users – whether the accountant clicking on a link to wire money, the marketing professional who downloads a PDF, the developer who makes a mistake when writing code – of their dignity?</p>
<p>The disrespectful sneer is palpable in the designation of “human error” as the cause of incidents. Security awareness training requires users to remember dozens of rules that ignore the realities of their work on <a href="https://twitter.com/swagitda_/status/1451203420673740800?s=20&amp;t=8yiYulSDFV_Hdb7iV6pQ-g">thing-clicking machines</a> and implies that it will be their fault if something bad happens. There is no respect for their time, attention, intelligence, or autonomy.</p>
<p>To quote <a href="https://www.usenix.org/system/files/1401_08-12_mickens.pdf">the legendary James Mickens</a>, “This is uncivilized and I demand more from life.”</p>
<p>But imagine a world in which cybersecurity programs prioritized respect as a core value of security! Respect for users&rsquo; private data; respect for users&rsquo; time; respect for users&rsquo; cognitive and emotional energy; respect for users&rsquo; pursuit of their priorities; respect for the organization’s pursuit of its priorities as a collection of users serving other users.</p>
<p>In fact, the term “users” may even be part of the problem. Users are abstract, faceless, behind a screen; they lack intrinsic worth and must be &ldquo;good for&rdquo; something.<sup id="fnref:41"><a href="#fn:41" class="footnote-ref" role="doc-noteref">41</a></sup> It makes it easier to disrespect them and resent them for not supporting our own goals. It makes it easier to not see them as people, but as exploitable resources that either we control or attackers do. It’s perhaps harder to blame a sleep-deprived caretaker of a lover or child or parent who, just trying to do their job well enough to keep their health insurance, clicks on something designed to look urgent and important.</p>
<p>Blaming a “user” for being so careless as to click on an obfuscated link and enter in their VPN credentials on the malicious site makes it a more antiseptic affair. It makes us feel like it’s a more just world rather than a chaotic one – like the problem is a user stepping out of line rather than complexities conspiring towards compromise. This dehumanization makes it easier to absolve the ruler and deride the ruled – these “users” – who are simply resources towards our ends, ever holy, ever noble.</p>
<h2 id="what-security-means-in-the-information-society">What &quot;Security&quot; Means in the Information Society</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-enabling-tranquility.png" alt="A marble goddess sits on a gilded throne in pastel clouds. She is cleansing a laptop which is beaming with iridescent light."></p>
<p>We end our journey in the modern era, the information society<sup id="fnref:42"><a href="#fn:42" class="footnote-ref" role="doc-noteref">42</a></sup>. The best way to summarize the concept of security in modern times is that it’s controversial af and dependent on context<sup id="fnref:43"><a href="#fn:43" class="footnote-ref" role="doc-noteref">43</a></sup>. But, Wolfers’ two-part definition of “security” from 1962 is widely cited<sup id="fnref:44"><a href="#fn:44" class="footnote-ref" role="doc-noteref">44</a></sup>:</p>
<ol>
<li>In an objective sense, security measures the absence of threats to acquired values</li>
<li>In a subjective sense, security reflects the absence of fear that such values will be attacked</li>
</ol>
<p>The term “values”, here, of course, is ambiguous and open-ended. But let’s think about what this means for the cybersecurity context.</p>
<p>A realist would say security is achieved when “the dangers posed by manifold threats, challenges, vulnerabilities and risks” in the digital realm are “avoided, prevented, managed, coped with, mitigated and adapted to” by individuals, groups, or organizations<sup id="fnref:45"><a href="#fn:45" class="footnote-ref" role="doc-noteref">45</a></sup>.</p>
<p>A social constructivist would say security is achieved “once the perception and fears of security ‘threats’, ‘challenges’, ‘vulnerabilities’ and ‘risks’ are allayed and overcome.” That is, objective security is not enough; the subjective will always wield considerable influence in the cybersecurity context.</p>
<p>In my experience, tech bros really do not like the idea that emotions or subjectivity come into play in tech stuff at all. They tend to describe their emotions as “logic” and subjective experience as “facts.” We don’t have enough time to unpack all of that in this post (and if only more of them would go to therapy). But it’s a very real problem that traditional cybersecurity folk wisdom was so often woven by people who think the <em>objective</em> is all that matters.</p>
<p>What’s worse about the cybersecurity status quo is that the subjective is dismissed but the objective isn’t even really measured. Again, objective security <strong>measures</strong> the absence of threats to acquired values. We do not have objective security in traditional infosec and, by that definition, it’s not even really what’s being pursued. Even when fleshing out the realist interpretation of objective security, traditional cybersecurity mostly focuses on the “avoided” or “prevented” part rather than the managed, coped with, adapted to part<sup id="fnref:46"><a href="#fn:46" class="footnote-ref" role="doc-noteref">46</a></sup>.</p>
<p>From the perspective of modern scholars, security is meant to lead to more <strong>goal-oriented</strong> behavior while insecurity leads to <strong>threat-oriented</strong> behavior. As anyone who&rsquo;s walked the RSAC vendor hall knows all too well, basically everything in cybersecurity today is about THREATS. Everything is a potential THREAT: your API, your CI/CD pipelines, your laptop, your phone, your fridge, your colleagues, your loved ones, even your own BRAIN is a THREAT because what if you make a MISTAKE and become the very INSIDER THREAT you swore to destroy!?!?!</p>
<p>Everything about the infosec status quo today reflects threat-oriented behavior, therefore implying <em>insecurity</em> rather than <em>security</em>. Traditional cybersecurity isn’t about preserving and upholding values – like prosperity or productivity or an inclusive work environment. Traditional cybersecurity is about preventing and avoiding threats, aiming for the impossible standard of attacks never successfully happening.</p>
<p>The cybersecurity status quo forgets that the whole point of stopping threats is to preserve certain <em>values</em>.</p>
<p>This fetishization of threats, and of their elimination as an aim in itself, is how we end up with cybersecurity programs which cause so much grief and anxiety and friction for everyone else in the organization. If the infosec industry actually focused on preservation of values, then UX would probably be one of the most important skills in the discipline (but how would incumbent cybersecurity vendors milk that for cash?).</p>
<p>After all, what’s the point of protecting the cherished organizational value of productivity from potential attacks – which likely only happen sometimes rather than continuously, from an impact perspective – if you’re going to erode that value daily through security policies that seem divorced from real goals, constraints, and workflows?</p>
<p>What’s the point in protecting the organization against a potential financial loss due to attack when you’re not only spending its money on security (which could be spent elsewhere), but also slowing down its ability to grow revenue due to security procedures? For an organization with $100 million in revenue wanting to gain market share, shipping 20% fewer features per year due to friction created by the security program has more material impact short, medium, and long term than a ransomware operator demanding $1 million, $5 million, or even $10 million.</p>
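<p>A rough back-of-envelope – using numbers I am assuming purely for illustration, not drawn from any incident dataset – shows why the recurring drag tends to dwarf the one-time ransom:</p>
<pre><code>Assume: $100M in revenue, ~10% of annual growth attributable to shipping
new features, and security friction cutting feature throughput by 20%.

Growth forgone per year ≈ $100M x 10% x 20% = $2M
Over three years that recurring drag is roughly $6M or more in forgone
revenue (before counting ceded market share and compounding), versus a
one-time ransom demand of $1M to $10M that may never even arrive.
</code></pre>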
<p>Waldron’s quite recent definition of the word security summarizes this all nicely and is worth repeating here:</p>
<blockquote>
<p>“… security now comprises protection against harm to one’s basic mode of life and economic values, as well as reasonable protection against fear and terror, and the presence of a positive assurance that these values will continue to be maintained in the future.”<sup id="fnref:47"><a href="#fn:47" class="footnote-ref" role="doc-noteref">47</a></sup></p>
</blockquote>
<p>Cybersecurity as it stands today flunks this definition. It is impossible to provide assurance that basic and economic values will be maintained in the future if you do not know what they are – and the cybersecurity status quo does not know, because it does not care, because all of that is irrelevant to its noble need to sacrifice everyone’s time, energy, and money at the altar of the FUD gods to gain more budget, more headcount, more influence, shrouding this ritual in a lab coat of “rational” paranoia.</p>
<p>Before architecting a security program or allocating cybersecurity budget, we should understand the organization’s <strong>basic mode of life and economic values</strong>, including at the level of any teams who will be especially subject to security procedures (like software engineers). From there, we should aim to provide <strong>reasonable protection against fear and terror</strong> – that is, to provide <em>subjective</em> security, that ancient-school version of <em>securitas</em> which meant freedom from anxiety, fear, or care.</p>
<p>Our job as defenders <em>should</em> be to reduce the complexity of the security problem to such an extent that the rest of the organization <em>is</em> free from care about it (in fact, the systems theorist Niklas Luhmann argued that security efforts explicitly aim to reduce the complexity of the world<sup id="fnref:48"><a href="#fn:48" class="footnote-ref" role="doc-noteref">48</a></sup>). And cybersecurity&rsquo;s job <em>should</em> be to provide positive assurance that the organization’s values (like prosperity, productivity, inclusion, whatever) can be maintained going forward.</p>
<p>But all of the above requires user research and empathy and curiosity about things beyond cybersecurity&rsquo;s viewing frustum. This modern definition of security means the organization must treat security as an <em>interactive</em> discipline, not a prescriptive one. The existence of a security program cannot be justified with “there is a risk here and it will never go away,”<sup id="fnref:49"><a href="#fn:49" class="footnote-ref" role="doc-noteref">49</a></sup> multiplied across all identified “risks,” which thereby implies a security organization that can only grow in scope and authority.</p>
<p>If those who provide security are the rulers and the users the ruled, what security really requires is the rulers respecting the ruled and the rulers <em>earning</em> the respect of the ruled rather than extracting it. This reflects a radical departure from traditional infosec and thus there is and will be resistance from the entrenched<sup id="fnref:50"><a href="#fn:50" class="footnote-ref" role="doc-noteref">50</a></sup>.</p>
<p>Security, in practice, is supposed to reside at the beneficial balance between two evils: absolute fear and absolute security – and absolute security, per Kant, can only be found at the cemetery.<sup id="fnref:51"><a href="#fn:51" class="footnote-ref" role="doc-noteref">51</a></sup></p>
<h2 id="summarizing-what-we-mean-when-we-say-security">Summarizing what we mean when we say &ldquo;security&rdquo;</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-guardian-cat.png" alt="A magical cat curled upon a planet, protecting it."></p>
<p>Security is now one of those big words like justice and freedom and liberty which serve more as symbols with fuzzy flavors of feeling &ndash; that is, as concepts &ndash; rather than as words with straightforward definitions. As we’ve seen, asking the titular question, &ldquo;When We Say Security, What Do We Mean?&rdquo; is an exploratory exercise rather than an excavation. There is no ground truth we shall hit with enough sweat and shoveling.</p>
<p>We traversed a tapestry of meanings throughout this essay. We finish it with a rough sense of, like, &ldquo;Security is about preserving chill vibes in the presence of threats to those vibes.”</p>
<p>But more usefully (yet less concretely), we have a better mouthfeel for what “security” means. The threats aren’t the point; the poignant part is the potential <em>absence</em> of a valuable good or state of being which we very much wish to preserve.</p>
<p>An absence of threats is only worthwhile if it guarantees the presence of serenity and prosperity. In the word “security” is also a promise – that you hold onto something of value and that this value might grow in the future.</p>
<p>Perhaps most of all, this semantic journey of ours today reveals how wayward traditional cybersecurity is from these notions; it resembles a nemesis of the security concept rather than its descendant. I hope you join me on the quest to finally realize the full potential of the security concept, to grow peaches rather than <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">lemons</a><sup id="fnref:52"><a href="#fn:52" class="footnote-ref" role="doc-noteref">52</a></sup>, to build a sweeter future for ourselves and all the other stakeholders in this strange system we call society.</p>
<hr>
<p>If you liked this post and want to learn more about pursuing a better notion of cybersecurity, check out my new book <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em> available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a> and <a href="https://www.securitychaoseng.com/">other major retailers</a>.</p>
<hr>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="A faux polaroid picture with an AI-generated image signed by yours truly. It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The parallels between black hole firewalls and the infosec kind must remain a discussion for another time (if time isn&rsquo;t just an abstraction).&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I performed a secret, arcane ritual to win the favor of the eldritch ones towards my quest of making the word <em>security</em> mean something better. But the gods are capricious and so the ultimate fate of this endeavor remains unknown.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Like any self-respecting former author of angsty teen poetry, I originally chose as my medium a &ldquo;literary concept album&rdquo; featuring six essays as &ldquo;tracks,&rdquo; all exploring the title&rsquo;s provocative question: <em>When We Say Security, What Do We Mean?</em> But no one reads blog series so this is now a single post.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>As Hannah Arendt described of such words, &ldquo;when we try to define them, they get slippery; when we talk about their meaning, nothing stays put anymore, everything begins to move.&rdquo; (From <em>The Life of the Mind</em>)&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Arendt, H. (1981). <em>The life of the mind: The groundbreaking investigation on how we think.</em> HMH. (In the section &ldquo;Thinking&rdquo; / &ldquo;The answer of Socrates&rdquo;)&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>&ldquo;Surface&rdquo; is a spatial metaphor. Yet again, there is much to unpack with the language we use to talk about cybersecurity but, to keep with the metaphor, time marches onward&hellip;&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>“The truth is rather that I infect them also with the perplexity I feel myself.” – Socrates&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>This may sound like a journey up one’s ass, but it’s better than being a cookie-cutter infosec ass, I assure you.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>A <a href="http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0059:entry=anaticula">term of endearment</a> in ancient Rome; its literal translation is &ldquo;duckling.&rdquo;&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Carl Meißner; Henry William Auden (1894) <a href="https://www.gutenberg.org/files/50280/50280-h/50280-h.htm"><em>Latin Phrase-Book</em></a>, London: Macmillan and Co.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>I think the closest we get to Platonic dialogues in modern times is Ao3 fanfiction #slowbuild #lightangst #friendship #humor #confessions #aroace #college #dom/sub #drama #alpha/beta/omegadynamics&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>Rorty, Mary. “Lecture 10.1: Epicurus and Lucretius.” Stanford University. <a href="http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf">http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf</a>&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>Infosec as an entity truly exhibits a weird form of masochism that honestly becomes slightly uncomfortable to contemplate if we start untangling all the evidence in support of it.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>I am tempted to delve into the psychological concept of security and insecurity but I fear its revelations &ndash; despite being aimed at infosec as a <em>collective</em> &ndash; would be interpreted as personal attacks. I will leave this one morsel for us to digest: the APA defines insecurity as a feeling of inadequacy, a lack of self-confidence, and an inability to cope, combined with general uncertainty about one’s goals, abilities, or relationships with others. To what degree does this notion of psychological insecurity accurately characterize the traditional infosec industry &ndash; its folk wisdom, zeitgeist, program priorities, prescribed procedures, policies, and so forth?&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>&ldquo;Our time” here is referring to the time of Socrates (the inspiration for &ldquo;Secrates&rdquo;), which was in the 4th century B.C.E. Therefore, the rise of asphaleia meaning the stability of the city state was around the 5th century B.C.E.&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>In return, these stringent practices reinforce the status quo and uphold organizational power structures, which suits leadership just fine (and, besides, how would we expect them to know how security programs could look outside of the infosec status quo?).&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>You watch the church leaders exchange influence for money, but instead of imparting the power of the Holy Spirit it’s your unfortunately unscrupulous CISO pushing some dogshit into your stack because their buddy invested in the startup and they do this back and forth and then blame the engineers or users who don’t want to interact with the dogshit for why everything is failing because nothing is your fault when you have the authority of something sacred, whether the Holy Spirit or the Security Spirit.&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>I do find it interesting that when CISOs do not disclose a breach, instead laundering it through a bug bounty program, that is being &ldquo;strategic&rdquo; and showing security leadership, but when a software engineering team doesn&rsquo;t fix a security bug immediately &ndash; no matter how contrived the exploit scenario &ndash; then they lack integrity.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>Perhaps we should be grateful there aren’t LinkedIn posts like “here’s the best way to run your sin response team #securitas #ciso #inquisition #SinSecOps.”&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>Wonderly, M. (2019). On the Affect of Security. <em>Philosophical Topics, 47</em>(2), 165-182.&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>Intriguingly – and rather self-servingly, although I did not expect it to be so when delving into this thought exercise – the original meanings of <em>securus</em> and <em>securitas</em> align nicely with the goals of <a href="https://www.securitychaoseng.com/">Software Resilience (aka Security Chaos Engineering (SCE))</a>. Composure is something for which resilience strives through the practice of repeated experimentation. Resilience wants security to be <em>quiet</em>, it seeks to foster organizational <em>confidence</em>, to grant organizations the freedom to not fret about potential incidents because they feel so well-practiced through experimentation, strong feedback loops, and resilient design that they feel <em>fearless</em> about the inevitable. In fact, we explicitly encourage defenders to have fun with chaos experiments, getting infosec closer to that original connotation that security involves a feeling of being <em>cheerful</em> and <em>bright</em>.&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>From <a href="https://www.gutenberg.org/files/47001/47001-h/47001-h.htm"><em>De Officiis</em></a> passage 69 (nice): &ldquo;Vacandum autem omni est animi perturbatione, cum cupiditate et metu, tum etiam aegritudine et voluptate nimia[64] et iracundia, ut tranquillitas animi et securitas adsit, quae affert cum constantiam, tum etiam dignitatem.&rdquo; This translates roughly to: &ldquo;But it is necessary to be freed from all disturbance of the mind, with desire and fear, and also from sickness and excessive pleasure and anger, so that there may be peace of mind and security, which brings with it constancy, as well as dignity.&rdquo;&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>Pop-stoicism seems to be trending among security leaders lately but we do not have time to unpack why this is so, nor its troubling implications.&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>From <a href="https://la.wikisource.org/wiki/De_constantia_sapientis"><em>De constantia sapientis</em></a>: Nullius ergo mouebitur contumelia; omnes enim inter se differant, sapiens quidem pares illos ob aequalem stultitiam omnis putat. Nam si semel se demiserit eo ut aut iniuria moueatur aut contumelia, non poterit umquam esse securus; securitas autem proprium bonum sapientis est. This translates roughly to: &ldquo;Therefore no one will be moved by insults; for although they all differ from one another, the wise indeed think that they are equal because of their equal stupidity. For if he has once humbled himself to the point of being moved either by injury or insult, he will never be able to be secure; but security is the proper good of the wise.&rdquo;&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:25">
<p>This is another case where <a href="https://www.securitychaoseng.com/">Software Resilience (aka Security Chaos Engineering)</a> aligns with the historic meaning of securitas better than traditional security. Resilience accepts that failure is inevitable but gains confidence from preparation. It trusts that all our preparation will help us respond gracefully to failure.&#160;<a href="#fnref:25" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:26">
<p>History is not without its ironies; Nero adopted the mantle of <em>securitas</em> – the divinity imparted by a mindset of fearlessness of death – and then ultimately died by suicide.&#160;<a href="#fnref:26" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:27">
<p>Schrimm-Heins, A. (1991). Gewissheit und Sicherheit: Geschichte und Bedeutungswandel der Begriffe certitudo und securitas (Teil I). <em>Archiv für Begriffsgeschichte, 34</em>, 123-213.&#160;<a href="#fnref:27" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:28">
<p>Tacitus, <a href="https://www.gutenberg.org/files/7524/7524-h/7524-h.htm"><em>The Agricola</em></a>.&#160;<a href="#fnref:28" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:29">
<p>Of course, with a Resilience strategy, we want to foster this sort of assurance through repeated experimentation – cultivating confidence through empirical evidence affirming or denying our hypotheses about the resilience of our systems.&#160;<a href="#fnref:29" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:30">
<p>Upon learning this, I immediately updated my brain dictionary lookup to display Security as a gorgeous transbian goddess whose favorite language, naturally, is Rust. I am hoping for a crossover episode in which our representative enby god, Loki, woos her by donning thigh highs made of the tendons of her enemies.&#160;<a href="#fnref:30" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:31">
<p>Levenson, E. (2014). The TSA Is in the Business of &lsquo;Security Theater,&rsquo; Not Security. <a href="https://www.theatlantic.com/national/archive/2014/01/tsa-business-security-theater-not-security/357599/">The Atlantic</a>.&#160;<a href="#fnref:31" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:32">
<p><em>Quis custodiet ipsos custodes?</em>&#160;<a href="#fnref:32" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:33">
<p>It was Pope Gregory I as hypeman for this interpretation and, yet again, the parallels between traditional infosec and the authoritarianism of the Catholic Church are… intriguing to say the least.&#160;<a href="#fnref:33" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:34">
<p>It&rsquo;s been fun watching the industry catch up to me. ~6&ndash;7 years ago when I was dropping spicy takes about how bullshit &ldquo;gotchya&rdquo; security tests are (along with a bunch of other behavioral science-informed takes), I got a ton of pushback, usually with vitriol. BuT ReAL aTTaCkErS dOn&rsquo;T CaRe AbOuT fEeLinGs. Many of those same people now launder those takes and pretend like they were always on board. There&rsquo;s probably a post in itself about the adoption cycle of hot takes where, at the beginning, people bristle because it&rsquo;s new and bold and different but eventually it&rsquo;s accepted enough that it&rsquo;s worth changing your beliefs and evangelizing it to look like you&rsquo;re &ldquo;thought leadering.&rdquo; Hopefully one day I&rsquo;ll be similarly vindicated with my (still wildly unpopular) take that &ldquo;DevSecOps&rdquo; is an unnecessary and harmful term.&#160;<a href="#fnref:34" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:35">
<p>Hobbes, with the benefit of hindsight and historical documentation, viewed the Peloponnesian War as a civil war among the Greek people. It seems at the time it was not perceived that way by Athens, its allies, or its enemies. The Persians were the starkest “other” throughout much of ancient Greek history, but by the time of the Peloponnesian War, the Persian “threat” was more like a distant, hazy shadow. Thus, the “other” from Athens’ perspective was other city-states, including its own allies who they feared would betray them (which they did, although “betray” perhaps is not the best characterization of the affair).</p>
</li>
<li id="fn:36">
<p>I promise I did not make this name up.&#160;<a href="#fnref:36" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:37">
<p><em>Bridging the Developer and Security Divide</em>, VMware and Forrester Research (2021)&#160;<a href="#fnref:37" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:38">
<p>UN-Water (2013). <em>Water security and the global water agenda: A UN-Water analytical brief</em>. Hamilton: United Nations University. See also: <a href="https://www.unwater.org/publications/water-security-infographic/">https://www.unwater.org/publications/water-security-infographic/</a>&#160;<a href="#fnref:38" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:39">
<p>Social Security: Origin of the Term at <a href="https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/">https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/</a>&#160;<a href="#fnref:39" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:40">
<p>Christie, R., &amp; Amitav, A. (2008). <em>Human security research: progress, limitations and new directions</em> (pp. 11-08). Working Paper. Centre for Governance and International Affairs. <a href="http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf">http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf</a>&#160;<a href="#fnref:40" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:41">
<p>Heidegger, M. (1977). The question concerning technology. New York, 214. <a href="https://www2.hawaii.edu/~freeman/courses/phil394/The%20Question%20Concerning%20Technology.pdf">https://www2.hawaii.edu/~freeman/courses/phil394/The%20Question%20Concerning%20Technology.pdf</a>&#160;<a href="#fnref:41" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:42">
<p>It also brings us to one of my favorite book quotes: “In the information society, nobody thinks. We expected to banish paper, but we actually banished thought.” (said by Ian Malcolm in <em>Jurassic Park</em> by Michael Crichton).&#160;<a href="#fnref:42" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:43">
<p><em>&ldquo;Security is ambiguous and elastic in its meaning.”</em> – Art, 1993&#160;<a href="#fnref:43" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:44">
<p>Wolfers, A. (1962). <em>Discord and collaboration: essays on international politics</em>. Baltimore: Johns Hopkins Press.&#160;<a href="#fnref:44" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:45">
<p>Brauch, H. G. (2011). Concepts of security threats, challenges, vulnerabilities and risks. In <em>Coping with global environmental change, disasters and security</em> (pp. 61-106). Springer, Berlin, Heidelberg. <a href="https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf">https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf</a>&#160;<a href="#fnref:45" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:46">
<p>Yet again, this is a dynamic <a href="https://www.securitychaoseng.com/">Software Resilience (aka Security Chaos Engineering (SCE))</a> is seeking to change.&#160;<a href="#fnref:46" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:47">
<p>Waldron, J. (2006). Safety and security. <em>Neb. L. Rev., 85</em>, 454.&#160;<a href="#fnref:47" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:48">
<p>Luhmann, N. (2018). <em>Trust and power</em>. John Wiley &amp; Sons.&#160;<a href="#fnref:48" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:49">
<p>I have much, much more to say on this topic (inspired by this paper: Power, M. (2009). The risk management of nothing. <em>Accounting, organizations and society, 34</em>(6-7), 849-855.)&#160;<a href="#fnref:49" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:50">
<p>Surprise, surprise, <a href="https://www.securitychaoseng.com/">Software Resilience (aka Security Chaos Engineering (SCE))</a> is aligned with the vibe of earning respect.&#160;<a href="#fnref:50" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:51">
<p>Arenas, J. F. M. (2008). From Homer to Hobbes and Beyond—Aspects of ’security’ in the European Tradition. In <em>Globalization and environmental challenges</em> (pp. 263-277). Springer, Berlin, Heidelberg.&#160;<a href="#fnref:51" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:52">
<p>Shortridge, Kelly (2022). From Lemons to Peaches: Improving Security ROI through Security Chaos Engineering. <em>IEEE SecDev 2022</em>. <a href="https://arxiv.org/abs/2307.03796">https://arxiv.org/abs/2307.03796</a>&#160;<a href="#fnref:52" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>The SUX Rule for Safer Code</title>
            <link>https://kellyshortridge.com/blog/posts/the-sux-rule-for-safer-code/</link>
            <pubDate>Tue, 10 Oct 2023 19:00:31 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-sux-rule-for-safer-code/</guid>
            <description>This is kind of a dumb post but it amuses me and maybe will help some people somewhere. It proposes a rename of Chromium’s brilliant “Rule of Two,” which I love because it is a very handy heuristic for software engineers but also hate because it co-opts a term from Star Wars lore that is not even close to related. Instead, I think it should be called “The SUX Rule.”
Let me explain.
Rule of Two: Sith Edition Let’s start with the Star Wars version of Rule of Two because I am a lore nerd (a lored, if you will) and while Star Wars lore isn’t as glorious as Elder Scrolls lore,1 it’s satisfying enough for me to defend.
The Rule of Two was a decree established by Darth Bane as part of the persistent Sith plot to exact revenge upon the Jedi Order. It states that only two Sith Lords may exist at any given time – maximizing secrecy – consisting of a “master” and “apprentice”; the apprentice was generally expected to murder their master after gaining enough experience2.
The most famous example of the Rule of Two is Darth Sidious – aka Senator Palpatine3 – as master and Darth Vader as apprentice. It is taking all my self-control not to unleash more lore, so let’s continue.
Rule of Two: Chromium Edition Now, Chromium’s Rule of Two does not involve writing two code modules, one which will eventually kill the other after running in production long enough. Chromium’s Rule of Two states that we should never write high privilege components in an unsafe implementation language if they process untrustworthy inputs.
It is an excellent rule, and as their diagram below visualizes, when code has all three problematic characteristics – runs without a sandbox; written in an unsafe language; processes untrustworthy inputs – it can spell doom.
However, I am also a bit of a language nerd and definitely a behavioral nerd and I’m dissatisfied with Chromium’s Rule of Two on these petty grounds. The “Rule of Two” isn’t very memorable when it isn’t about Siths and treachery. We want developers to think of this rule when designing systems, so it needs to be salient.
And I personally find terms like “untrustworthy inputs” too squishy; it can imply that we’ve done some sort of analysis to judge inputs as untrustworthy. The Chromium team is very smart and carefully defined what they mean by it in their post but human recall is flawed, especially when it comes to nuance.
The SUX Rule Instead, I propose the SUX Rule: Sandbox-free – Unsafe – eXogenous. If our code runs without a Sandbox and is written in an Unsafe language and processes eXogenous data, then that obviously sucks (i.e. SUX). We don’t want our code to suck4. Thus, we want to pick no more than two of these sucky things when we write code.
If we don’t process exogenous data like user input5, then maybe it’s fine to write it in C or not use a sandbox. But if we do process exogenous data, then we either want a runtime sandbox (“privilege reduction” in Chromium’s lingo) or to write it in a memory safe language (Rust being the most direct replacement for C)6.
I like exogenous since it feels less squishy than “untrustworthy inputs.” With “running without a sandbox” and “memory unsafe language,” there is a clear way to know whether you are doing things right (you apply a sandbox or you don’t use C/C&#43;&#43;). How do you know whether your inputs are actually trustworthy? If you say, “I trust the input now,” why?
If we flip this to the attacker’s perspective, they start with data they control and then will encounter the gifts of either C/C&#43;&#43; or sandbox-free execution. They would greatly prefer to not have any “real” checks performed on their data leading up to that point. But what are “real” checks? I feel like “exogenous” clarifies this a little, since it makes it less about trust or not and more about how to handle it.
Exogenous turns it into a question of: “Do we have things coming from the network / outside the component?” If yes, either sandbox or memory safety (or, even better, both).
Realistically, the X – exogenous data – is the characteristic you can’t eliminate. A lot of code interacts with the outside world, whether human users or silicon services, and that is part of its intended functionality. That is kind of the whole point of many software components.
For many projects (most?) it will be easier to throw that code in a sandbox or write / refactor it into a memory safe language rather than acquiring cryptographic proof that the data comes from a trusted entity or transforming the data in specific ways (which the Chromium post details). The SUX Rule can prompt us to recall this thinky thinky during the design phase.
Conclusion The SUX Rule is literally just me rebranding Chromium’s wonderful Rule of Two and changing terminology a bit to clarify the concept, avoid stepping on any Sith toes, and make it more memorable (since I’ve found not many software engineers even know of it, to my dismay).
My hope is that even if you don’t entirely remember what each letter stands for, perhaps you will remember “we don’t want code that SUX” and then you will look it up to refresh your memory. With any luck, a few years from now we’ll find that less of the world’s code SUX.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
It does not get much better than the Thirty-Six Lessons of Vivec, Sermon Seventeen ↩︎
An example is the oft-memed Tragedy of Darth Plagueis the Wise. ↩︎
“Somehow, Palpatine has returned,” is dialogue that haunts me daily. There are few things in life that have disappointed me as much as hearing those words in the theater. ↩︎
If the internet is full of code that SUX, does that make it a SUXnet? (☞ ͡° ͜ʖ ͡°)☞ ↩︎
The Chromium team frames an untrustworthy source as “any arbitrary peer on the Internet” which I feel the word “exogenous” captures. ↩︎
Totally not me mentioning Rust for SEO. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This is kind of a dumb post but it amuses me and maybe will help some people somewhere. It proposes a rename of Chromium’s brilliant “Rule of Two,” which I love because it is a very handy heuristic for software engineers but also hate because it co-opts a term from Star Wars lore that is not even close to related. Instead, I think it should be called &ldquo;The SUX Rule.&rdquo;</p>
<p>Let me explain.</p>
<h2 id="rule-of-two-sith-edition">Rule of Two: Sith Edition</h2>
<p>Let’s start with the Star Wars version of Rule of Two because I am a lore nerd (a lored, if you will) and while Star Wars lore isn’t as glorious as Elder Scrolls lore,<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> it’s satisfying enough for me to defend.</p>
<p><a href="https://starwars.fandom.com/wiki/Rule_of_Two">The Rule of Two</a> was a decree established by Darth Bane as part of the persistent Sith plot to exact revenge upon the Jedi Order. It states that only two Sith Lords may exist at any given time – maximizing secrecy – consisting of a “master” and “apprentice”; the apprentice was generally expected to murder their master after gaining enough experience<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>.</p>
<p>The most famous example of the Rule of Two is Darth Sidious – aka Senator Palpatine<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> – as master and Darth Vader as apprentice. It is taking all my self-control not to unleash more lore, so let&rsquo;s continue.</p>
<h2 id="rule-of-two-chromium-edition">Rule of Two: Chromium Edition</h2>
<p>Now, <a href="https://chromium.googlesource.com/chromium/src/+/master/docs/security/rule-of-2.md">Chromium’s Rule of Two</a> does not involve writing two code modules, one which will eventually kill the other after running in production long enough. Chromium’s Rule of Two states that we should never write high privilege components in an unsafe implementation language if they process untrustworthy inputs.</p>
<p>It is an excellent rule, and as their diagram below visualizes, when code has all three problematic characteristics – runs without a sandbox; written in an unsafe language; processes untrustworthy inputs – it can spell doom.</p>
<p><img src="/blog/img/chromium-rule-of-two.png" alt="Venn diagram showing you should always use a safe language, a sandbox, or not be processing untrustworthy inputs in the first place."></p>
<p>However, I am also a bit of a language nerd and definitely a behavioral nerd and I’m dissatisfied with Chromium’s Rule of Two on these petty grounds. The “Rule of Two” isn’t very memorable when it isn’t about Siths and treachery. We want developers to think of this rule when designing systems, so it needs to be salient.</p>
<p>And I personally find terms like “untrustworthy inputs” too squishy; it can imply that we’ve done some sort of analysis to judge inputs as untrustworthy. The Chromium team is very smart and carefully defined what they mean by it in their post but human recall is flawed, especially when it comes to nuance.</p>
<h2 id="the-sux-rule">The SUX Rule</h2>
<p>Instead, I propose the SUX Rule: Sandbox-free – Unsafe – eXogenous. If our code runs without a Sandbox and is written in an Unsafe language and processes eXogenous data, then that obviously sucks (i.e. SUX). We don’t want our code to suck<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>. Thus, we want to pick no more than two of these sucky things when we write code.</p>
<p>If we don’t process exogenous data like user input<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>, then maybe it’s fine to write it in C or not use a sandbox. But if we do process exogenous data, then we either want a runtime sandbox (“privilege reduction” in Chromium’s lingo) or to write it in a memory safe language (Rust being the most direct replacement for C)<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>.</p>
<p>I like exogenous since it feels less squishy than &ldquo;untrustworthy inputs.&rdquo; With &ldquo;running without a sandbox&rdquo; and &ldquo;memory unsafe language,&rdquo; there is a clear way to know whether you are doing things right (you apply a sandbox or you don&rsquo;t use C/C++). How do you know whether your inputs are actually trustworthy? If you say, &ldquo;I trust the input now,&rdquo; why?</p>
<p>If we flip this to the attacker&rsquo;s perspective, they start with data they control and then will encounter the gifts of either C/C++ or sandbox-free execution. They would greatly prefer to not have any &ldquo;real&rdquo; checks performed on their data leading up to that point. But what are &ldquo;real&rdquo; checks? I feel like &ldquo;exogenous&rdquo; clarifies this a little, since it makes it less about trust or not and more about how to handle it.</p>
<p>Exogenous turns it into a question of: &ldquo;Do we have things coming from the network / outside the component?&rdquo; If yes, either sandbox or memory safety (or, even better, both).</p>
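<p>As a concrete (and entirely hypothetical) illustration of the memory-safety branch – a sketch of mine, not from the Chromium docs – here is a tiny Rust parser for exogenous, length-prefixed bytes: we keep the X but drop the U, so a lying length prefix fails closed instead of spelling doom.</p>
<pre><code class="language-rust">// A hypothetical component that accepts eXogenous bytes (say, from the network).
// Rust's bounds-checked slices turn malformed input into a clean None instead of
// an out-of-bounds read.
fn parse_message(input: &amp;[u8]) -&gt; Option&lt;&amp;[u8]&gt; {
    // Two-byte big-endian length prefix, then that many payload bytes.
    let len = u16::from_be_bytes([*input.get(0)?, *input.get(1)?]) as usize;
    input.get(2..2 + len) // None if the payload is shorter than it claims
}

fn main() {
    // Well-formed: declares 3 bytes and supplies them.
    assert_eq!(parse_message(&amp;[0, 3, b'a', b'b', b'c']), Some(&amp;b"abc"[..]));
    // Malformed: declares 10 bytes, supplies 1. Rejected, not doom.
    assert_eq!(parse_message(&amp;[0, 10, b'x']), None);
    println!("exogenous input handled with no sandbox and no unsafe code");
}
</code></pre>
<p>The equivalent move for a component stuck in C would be to keep the U and wrap it in a sandbox instead – either way, we have picked at most two of the three sucky things.</p>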
<p>Realistically, the X – exogenous data – is the characteristic you can’t eliminate. A lot of code interacts with the outside world, whether human users or silicon services, and that is part of its intended functionality. That is kind of the whole point of many software components.</p>
<p>For many projects (most?) it will be easier to throw that code in a sandbox or write / refactor it into a memory safe language rather than acquiring cryptographic proof that the data comes from a trusted entity or transforming the data in specific ways (which the Chromium post details). The SUX Rule can prompt us to recall this thinky thinky during the design phase.</p>
<p><img src="/blog/img/the-sux-rule-for-safer-code.png" alt="Venn diagram of the sux rule showing you should always use a safe language, isolate the code in a sandbox, or not process exogenous inputs. The intersection of sandbox-free, unsafe, and exogenous is labeled NO and an arrow points to it saying Doom! Don&amp;rsquo;t do all three."></p>
<h2 id="conclusion">Conclusion</h2>
<p>The SUX Rule is literally just me rebranding Chromium’s wonderful Rule of Two and changing terminology a bit to clarify the concept, avoid stepping on any Sith toes, and make it more memorable (since I’ve found not many software engineers even know of it, to my dismay).</p>
<p>My hope is that even if you don’t entirely remember what each letter stands for, perhaps you will remember “we don’t want code that SUX” and then you will look it up to refresh your memory. With any luck, a few years from now we’ll find that less of the world’s code SUX.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>It does not get much better than the <a href="https://en.uesp.net/wiki/Morrowind:36_Lessons_of_Vivec,_Sermon_17">Thirty-Six Lessons of Vivec, Sermon Seventeen</a>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>An example is the oft-memed <a href="https://starwars.fandom.com/wiki/The_Tragedy_of_Darth_Plagueis_the_Wise">Tragedy of Darth Plagueis the Wise</a>.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>“Somehow, Palpatine has returned,” is dialogue that haunts me daily. There are few things in life that have disappointed me as much as hearing those words in the theater.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>If the internet is full of code that SUX, does that make it a SUXnet? (☞ ͡° ͜ʖ ͡°)☞&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>The Chromium team frames an untrustworthy source as “any arbitrary peer on the Internet” which I feel the word “exogenous” captures.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Totally not me mentioning Rust for SEO.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>&#34;Quantum&#34; Doesn&#39;t Solve Anything for Cybersecurity</title>
            <link>https://kellyshortridge.com/blog/posts/quantum-doesnt-solve-anything/</link>
            <pubDate>Tue, 25 Jul 2023 07:00:31 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/quantum-doesnt-solve-anything/</guid>
            <description>
I sometimes get asked about what I think about “quantum” for solving cybersecurity problems. The answer is I try to never think about it because even one brain cell processing it is too many given its irrelevance to the problem domain.
What security problem is “quantum” trying to solve? Would quantum solve Solarwinds? Heartbleed? Log4Shell? The 2016 DNC compromise? Any number of the social engineering-based attacks we see month after month? No, no, no, no, and no.
“Quantum” is specifically solving the problem of cryptographic primitives: that some of the fancy math problems we use to keep other humans from guessing how to unscramble our data eventually might be solvable by superscale quantum computers.
The argument you’ll often hear from quantum zealots is: “imagine if the primitives that were beneath your feet just vanished.” I don’t have to imagine, bro, that happens in software security every fucking day.
Beyond zealots, there are those in the tech sphere and on its periphery who worry about how horrible cryptography problems could get – but this is because they are ignorant of how bad implementation problems currently are. They do not realize that “breaking a common cryptographic algorithm” is so far down the list of realistic concerns in cybersecurity that it might as well be next to secret agents kidnapping your sysadmin (or whatever employee has the most access to internal systems).
The reality is that there are lots of cyber-attacks and most are pretty boring. Very, very few attacks seen in the real world are from “breaking” encryption. When it does happen, it’s because someone tried to roll their own crypto1 and messed up the implementation. But most attacks use tactics like social engineering or exploiting known vulnerabilities in web apps or malware-as-a-service; they are not using fancy math, or really any math, other than thinking about the optimal level of effort to achieve the desired payout.
Quantum computing doesn’t solve spoofed URLs like company-name.com vs. companyname.com, which lure users into entering their credentials. It doesn’t solve attackers copying language from legitimate emails for use in their phishing emails. It doesn’t solve employees’ credentials being stolen. It definitely doesn’t solve memory safety issues, logic flaws, or components interacting in unintended ways.
The security problem “quantum” is trying to solve seems to be financial security – for the proselytizers’ ongoing research prospects. Looking deeper, I suspect the puffery about quantum security stuff is a variation on the theme of longtermism: let’s ignore the real problems in our face today and work on solving imaginary problems for a future many years from now. It’s a cheap way to feel like your work matters.
It also makes you immune to criticism. You can rightfully ignore the “it’s not even real yet” critique, but you can also ignore everything else because you’ve worldbuilt a future where the problem you’re solving is the biggest problem, so anyone who says otherwise is either myopically focused on the present or has other predictions for the future (and of course they are wrong because they do not use cool words like “post-quantum” to describe their future).
With the issue not existing today, anyone criticizing you doesn’t have evidence to disprove it. Sure, you don’t have the evidence to prove it will actually be a huge, society-shattering problem, either, but you can always leap onto your moral high horse and trot off with your bags of research funding, because one day you’ll be vindicated. And if it wasn’t important, would it be getting funding?? Checkmate, pre-quantum losers.
When I poke around the corners of tech society today, I often find a fatalism fetish – a weirdly eager hunger for the end of the world as we know it to be nigh, whether due to AI or quantum computers breaking cryptography.
The other significant contributing factor, I suspect, is the eroding sense of belonging and meaning in our society2 – the desperate human need to feel important and influential in our surrounding environment. If we fear that a Nation State will use multi-billion-dollar quantum capabilities to read our email, we implicitly are deeming ourselves important enough to warrant that level of attack investment – that our email is that special because our lives are that special.
The Nation State could also threaten our lives or our loved ones (“your laptop or your life”); profile our interests to send us targeted social engineering content; glean our favorite websites and inject malicious scripts that download malware onto our machine, watering-hole style; or literally so many things that don’t involve quantum anything. “Quantum” solves none of those things, but it does not matter because, like a cult, unquestioning faith is the price of entry for future glory.
The “quantum” hype, especially among leadership and people with deep coffers, is especially frustrating because there are many security problems worth solving now – not just the ones we hear about all the time (phishing, malware, vulnerability exploitation, etc.), but also ones we don’t discuss enough: stalkerware and spyware; digital identity and access for vulnerable populations, like refugees or the unhoused; privacy increasingly becoming a luxury good.
These are very hard problems to solve that disproportionately affect underprivileged groups, but instead we’re fretting about the future-flung vanity problem that is “quantum.” The fanatics think they are starring as innovators in an epic sci-fi by being “involved” with “quantum” but instead they are the ham-fisted buffoons who serve to make it painfully obvious to the audience that merit does not matter in this dystopian setting.
In short, thinking that “quantum” will “solve” security is quixotic. If that is you, you are Don Quantum and you are tilting at windmills. I’d say you have big Captain Ahab energy, except your white whale doesn’t even exist. Meanwhile, problems abound, including very hard problems, many of which do need fancy math and sciencing. Like, was formal methods not exotic and impractical enough for you??
For real, if you want a huge, horrifying existential threat to tackle, the undersea cables that underpin the internet are vulnerable to climate change. Migrating your efforts from technobabble to working on climate change (whether inputs or impacts) will mean you’re no longer a waste of carbon, at the very least.
I mentioned I get asked sometimes about “quantum” for cybersecurity and the answer in my head is usually, “Cybersecurity is an entirely unserious industry.” We think we’re being serious if we dress it up in the shallow artifacts of science, whether risk quantification or quantum3. Yet we still suck at empiricism and we’re only really curious if we don’t have to be accountable for results.
The quantum hype is perhaps a culmination of this – the right place at the right time to provide an outlet for our valid anxieties about the future while not having to do anything real to make that future better.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
If you’re attending Black Hat USA, check out my talk Wednesday at 11:20 in Oceanside A. I’ll be doing a book signing at the Fastly booth at 14:30 Wednesday, where you can get a free copy. Otherwise, I’m signing books 16:00 Tuesday in the Black Hat Bookstore and 13:00 Wednesday at the O’Reilly Media booth.
The advice “don’t roll your own crypto” comes from the time before cryptocurrency; in this case, crypto = cryptographic algorithm. ↩︎
This was accurately forecasted by Jacques Ellul in 1954. It’s probably for the best he died before the rise of social media. ↩︎
I’m dreading the rise of “risk quantumication”; look for it as an “Innovation Trigger” on Gartner’s 2030 Hype Cycle. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/quantum-computing-spooky.png" alt="A render of a circuit board infused with magical energy glowing teal, cyan, and violet. It is floating in the ether, tendrils of mystical energy crackling around it. It looks possessed, or else imbued with spooky magic."></p>
<p>I sometimes get asked about what I think about “quantum” for solving cybersecurity problems. The answer is I try to never think about it because even one brain cell processing it is too many given its irrelevance to the problem domain.</p>
<p>What security problem is “quantum” trying to solve? Would quantum solve <a href="https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach">Solarwinds</a>? <a href="https://en.wikipedia.org/wiki/Heartbleed">Heartbleed</a>? <a href="https://en.wikipedia.org/wiki/Log4Shell">Log4Shell</a>? The <a href="https://en.wikipedia.org/wiki/Democratic_National_Committee_cyber_attacks">2016 DNC compromise</a>? Any number of the social engineering-based attacks <a href="https://kellyshortridge.com/blog/posts/kellys-kommentary-on-verizon-dbir-2023/">we see month after month</a>? No, no, no, no, and no.</p>
<p>“Quantum” is specifically solving the problem of <a href="https://en.wikipedia.org/wiki/Cryptographic_primitive">cryptographic primitives</a>: that some of the fancy math problems we use to keep other humans from guessing how to unscramble our data eventually might be solvable by superscale quantum computers.</p>
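<p>To make that concrete, here’s a toy Python sketch – mine, purely illustrative, textbook RSA with comically small primes – of what one such primitive looks like and why the hard math problem is the whole ballgame:</p>
<pre><code># Toy RSA, purely illustrative -- real keys use primes hundreds of digits long
p, q = 61, 53                 # the secret "fancy math" ingredient: two primes
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent, trivial to derive once p and q are known

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)  # 2557 42 -- break the factoring problem, break the scheme
</code></pre>
<p>A sufficiently large quantum computer running Shor’s algorithm could recover <code>p</code> and <code>q</code> from <code>n</code> – that class of math problem is what “post-quantum” cryptography exists to replace.</p>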
<p>The argument you’ll often hear from quantum zealots is: &ldquo;imagine if the primitives that were beneath your feet just vanished.&rdquo; I don&rsquo;t have to imagine, bro, that happens in software security every fucking day.</p>
<p>Beyond zealots, there are those in the tech sphere and on its periphery who worry about how horrible cryptography problems could get &ndash; but this is because they are ignorant of how bad implementation problems currently are. They do not realize that &ldquo;breaking a common cryptographic algorithm&rdquo; is so far down the list of realistic concerns in cybersecurity that it might as well be next to secret agents kidnapping your sysadmin (or whatever employee has the most access to internal systems).</p>
<p>The reality is that there are lots of cyber-attacks and most are pretty boring. Very, very few attacks seen in the real world are from “breaking” encryption. When it does happen, it’s because someone tried to roll their own crypto<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> and messed up the implementation. But most attacks use tactics like social engineering or exploiting known vulnerabilities in web apps or malware-as-a-service; they are not using fancy math, or really any math, other than thinking about the optimal level of effort to achieve the desired payout.</p>
<p>Quantum computing doesn’t solve spoofed URLs like company-name.com vs. companyname.com, which lure users into entering their credentials. It doesn’t solve attackers copying language from legitimate emails for use in their phishing emails. It doesn’t solve employees’ credentials being stolen. It definitely doesn’t solve memory safety issues, logic flaws, or components interacting in unintended ways.</p>
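<p>By way of contrast, here’s a minimal Python sketch – with made-up domains and an arbitrary similarity threshold, so treat it as illustration rather than advice – of the kind of boring, string-matching defense this class of problem actually calls for:</p>
<pre><code>from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"companyname.com"}   # hypothetical legitimate domain

def looks_spoofed(sender_domain):
    """Flag domains suspiciously similar to, but not exactly, a trusted one."""
    sender = sender_domain.lower()
    if sender in TRUSTED_DOMAINS:
        return False                    # exact match: the real thing
    return any(SequenceMatcher(None, sender, legit).ratio() &gt;= 0.85
               for legit in TRUSTED_DOMAINS)

print(looks_spoofed("company-name.com"))  # True  -- one sneaky hyphen
print(looks_spoofed("companyname.com"))   # False -- legitimate
print(looks_spoofed("example.org"))       # False -- not even close
</code></pre>
<p>No fancy math, no qubits – just unglamorous string comparison, the sort of thing that actually moves the needle on this class of problem.</p>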
<p>The security problem “quantum” is trying to solve seems to be financial security – for the proselytizers’ ongoing research prospects.
Looking deeper, I suspect the puffery about quantum security stuff is a variation on the theme of <a href="https://web.archive.org/web/20230705141434/https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo">longtermism</a>: let’s ignore the real problems in our face today and work on solving imaginary problems many years from now. It’s a cheap way to feel like your work matters.</p>
<p>It also makes you immune to criticism. You can rightfully ignore the “it’s not even real yet” critique, but you can also ignore everything else because you’ve worldbuilt a future where the problem you’re solving is the biggest problem, so anyone who says otherwise is either myopically focused on the present or has other predictions for the future (and of course they are wrong because they do not use cool words like “post-quantum” to describe their future).</p>
<p>With the issue not existing today, anyone criticizing you doesn’t have evidence to disprove it. Sure, you don’t have the evidence to prove it will actually be a huge, society-shattering problem, either, but you can always leap onto your moral high horse and trot off with your bags of research funding, because one day you’ll be vindicated. And if it wasn’t important, would it be getting funding?? Checkmate, pre-quantum losers.</p>
<p>When I poke around the corners of tech society today, I often find a fatalism fetish – a weirdly eager hunger for the end of the world as we know it to be nigh, whether due to AI or quantum computers breaking cryptography.</p>
<p>The other significant contributing factor, I suspect, is the eroding sense of belonging and meaning in our society<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> – the desperate human need to feel important and influential in our surrounding environment. If we fear that a Nation State will use multi-billion-dollar quantum capabilities to read our email, we implicitly are deeming ourselves important enough to warrant that level of attack investment – that our email is that special because our lives are that special.</p>
<p>The Nation State could also threaten our lives or our loved ones (&ldquo;your laptop or your life&rdquo;); profile our interests to send us targeted social engineering content; glean our favorite websites and inject malicious scripts that download malware onto our machine, watering-hole style; or literally so many things that don’t involve quantum anything. “Quantum” solves none of those things, but it does not matter because, like a cult, unquestioning faith is the price of entry for future glory.</p>
<p>The “quantum” hype, especially among leadership and people with deep coffers, is especially frustrating because there are many security problems worth solving now – not just the ones we hear about all the time (phishing, malware, vulnerability exploitation, etc.), but also ones we don’t discuss enough: stalkerware and spyware; digital identity and access for vulnerable populations, like refugees or the unhoused; privacy increasingly becoming a luxury good.</p>
<p>These are very hard problems to solve that disproportionately affect underprivileged groups, but instead we’re fretting about the future-flung vanity problem that is “quantum.” The fanatics think they are starring as innovators in an epic sci-fi by being “involved” with “quantum” but instead they are the ham-fisted buffoons who serve to make it painfully obvious to the audience that merit does not matter in this dystopian setting.</p>
<p>In short, thinking that “quantum” will “solve” security is quixotic. If that is you, you are Don Quantum and you are tilting at windmills. I’d say you have big Captain Ahab energy, except your white whale doesn’t even exist. Meanwhile, problems abound, including very hard problems, many of which do need fancy math and sciencing. Like, was <a href="https://users.ece.cmu.edu/~koopman/des_s99/formal_methods/">formal methods</a> not exotic and impractical enough for you??</p>
<p>For real, if you want a huge, horrifying existential threat to tackle, the undersea cables that underpin the internet are vulnerable to climate change. Migrating your efforts from technobabble to working on climate change (whether inputs or impacts) will mean you’re no longer a waste of carbon, at the very least.</p>
<p>I mentioned I get asked sometimes about “quantum” for cybersecurity and the answer in my head is usually, “Cybersecurity is an entirely unserious industry.” We think we’re being serious if we dress it up in the shallow artifacts of science, whether risk quantification or quantum<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. Yet we still suck at empiricism and we’re only really curious if we don’t have to be accountable for results.</p>
<p>The quantum hype is perhaps a culmination of this – the right place at the right time to provide an outlet for our valid anxieties about the future while not having to do anything real to make that future better.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<p><em>If you&rsquo;re attending Black Hat USA, check out <a href="https://www.blackhat.com/us-23/briefings/schedule/index.html#fast-ever-evolving-defenders-the-resilience-revolution-32751">my talk</a> Wednesday at 11:20 in Oceanside A. I&rsquo;ll be doing a book signing at the Fastly booth at 14:30 Wednesday, where you can get a free copy. Otherwise, I&rsquo;m signing books 16:00 Tuesday in the Black Hat Bookstore and 13:00 Wednesday at the O&rsquo;Reilly Media booth.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The advice &ldquo;don&rsquo;t roll your own crypto&rdquo; comes from the time before cryptocurrency; in this case, crypto = cryptographic algorithm.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>This was accurately forecasted by Jacques Ellul in 1954. It&rsquo;s probably for the best he died before the rise of social media.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>I&rsquo;m dreading the rise of &ldquo;risk quantumication&rdquo;; look for it as an &ldquo;Innovation Trigger&rdquo; on Gartner&rsquo;s 2030 Hype Cycle.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Leading Cybersecurity with a Control vs. Resilience Strategy</title>
            <link>https://kellyshortridge.com/blog/posts/control-vs-resilience-cybersecurity-strategy/</link>
            <pubDate>Mon, 17 Jul 2023 21:03:25 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/control-vs-resilience-cybersecurity-strategy/</guid>
            <description>There are two paths we can pursue for our cybersecurity strategy: the control strategy or the resilience strategy.
I created this infographic as a handy reference for the software engineering, platform engineering, and cybersecurity communities, drawing on a table in Chapter 7 of my new book. It’s a great way to validate that the decisions we make when leading cybersecurity programs support resilience rather than control.
The control strategy designs security programs, and the elements within them, based on what security-humans think other humans should do. From this control-centric perspective, what humans “should” do is give their full attention to a task every time; give every risk conscious consideration; notice and comply with every warning; adhere to rules, policies, and procedures 100% of the time; and remain willing to expend all resources toward this compliance (time, cognitive energy, money, etc.).
The control strategy amounts to nothing more than wishful thinking.
The resilience strategy is grounded in reality. It promotes and designs security based on how humans actually behave – the reality of our finite cognition, attention, and energy. The resilience strategy appreciates that security isn’t always the top priority; in fact, most organizations, and the individuals working in them, are constantly calibrating how they balance competing priorities. Our security strategy must respect those priorities and find ways to work with them rather than against them.
Why don’t we see the resilience strategy in action more? As I say in the book, the control strategy is extremely convenient from the security team’s perspective. In fact, it might be better described as the “convenience strategy.” The cybersecurity program can pursue “solutions” that are comparatively easy to implement — like warnings, procedures, policies, training, and even some bolt-on security tools — and that also allow the security program to blame users when an incident occurs.
The security team gains convenience at the expense of everyone else’s inconvenience. As a common example, the security team may force developers to wrangle with slow, cumbersome appsec tools, gaining their own convenience at the expense of developers’ inconvenience. They get to avoid the hard work of brainstorming design-based solutions and instead prescribe fanciful ways people should work, then blame them when they fail to live up to those conceits.
The resilience strategy offers a way forward that transforms security into an enabler and a secure-by-design-er rather than a blocker or “pass the buck”-er. Through this transformation, our strategy respects that humans don’t interact with software or systems to be secure, they interact to perform a task to achieve a goal.
If you want to learn more about the resilience strategy and emerging practice of platform resilience engineering, check out my new book Security Chaos Engineering: Sustaining Resilience in Software and Systems available at Amazon and other major retailers.
</description>
            <atom:content type="html"><![CDATA[<p>There are two paths we can pursue for our cybersecurity strategy: the control strategy or the resilience strategy.</p>
<p>I created this infographic as a handy reference for the software engineering, platform engineering, and cybersecurity communities, drawing on a table in Chapter 7 of my new book. It&rsquo;s a great way to validate that the decisions we make when leading cybersecurity programs support resilience rather than control.</p>
<p><img src="/blog/img/control-vs-resilience-cybersecurity-strategy.png" alt="An infographic comparing the control vs. resilience strategy for cybersecurity. The control column includes Plan security based on how we think humans should behave, in contrast to the resilience strategy which says we should promote and design security based on how humans actually behave. The next row says the control strategy says humans should give full attention 100% of the time, while the resilience strategy says human attention is finite and a precious resource. Next, the control strategy says every risk should receive conscious consideration by humans as they work, while the resilience strategy says we should design the hazard out of the system, or reduce it, whenever possible. Next, the control strategy says humans should notice and comply with every warning, while the resilience strategy says we should avoid relying on human attention as much as possible. Next, the control strategy says humans should adhere to policies, procedures, and rules 100% of the time, while the resilience strategy says we shouldn&amp;rsquo;t rely on humans to act contrary to their nature 100% of the time. Finally, the control strategy says humans should be willing to expend resources towards compliance at all costs, while the resilience strategy says we should respect users&amp;rsquo; time, attention, cognitive energy, and priorities at all times."></p>
<p>The control strategy designs security programs, and the elements within them, based on what security-humans think other humans <em>should</em> do. From this control-centric perspective, what humans “should” do is give their full attention to a task every time; give every risk conscious consideration; notice and comply with every warning; adhere to rules, policies, and procedures 100% of the time; and remain willing to expend all resources toward this compliance (time, cognitive energy, money, etc.).</p>
<p>The control strategy amounts to nothing more than wishful thinking.</p>
<p>The resilience strategy is grounded in reality. It promotes and designs security based on how humans actually behave &ndash; the reality of our finite cognition, attention, and energy. The resilience strategy appreciates that security isn&rsquo;t always the top priority; in fact, most organizations, and the individuals working in them, are constantly calibrating how they balance competing priorities. Our security strategy must respect those priorities and find ways to work <em>with</em> them rather than against them.</p>
<p>Why don&rsquo;t we see the resilience strategy in action more? As I say in the book, the control strategy is extremely convenient from the security team&rsquo;s perspective. In fact, it might be better described as the “convenience strategy.” The cybersecurity program can pursue “solutions” that are comparatively easy to implement — like warnings, procedures, policies, training, and even some bolt-on security tools — and that also allow the security program to blame users when an incident occurs.</p>
<p>The security team gains convenience at the expense of everyone else’s inconvenience. As a common example, the security team may force developers to wrangle with slow, cumbersome appsec tools, gaining their own convenience at the expense of developers’ inconvenience. They get to avoid the hard work of brainstorming design-based solutions and instead prescribe fanciful ways people should work, then blame them when they fail to live up to those conceits.</p>
<p>The resilience strategy offers a way forward that transforms security into an enabler and a secure-by-design-er rather than a blocker or &ldquo;pass the buck&rdquo;-er. Through this transformation, our strategy respects that humans don&rsquo;t interact with software or systems to be secure, they interact to perform a task to achieve a goal.</p>
<p>If you want to learn more about the resilience strategy and emerging practice of platform resilience engineering, check out my new book <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em> available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a> and <a href="https://www.securitychaoseng.com/">other major retailers</a>.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Kelly’s Kommentary on the 2023 Verizon DBIRRRRRRR</title>
            <link>https://kellyshortridge.com/blog/posts/kellys-kommentary-on-verizon-dbir-2023/</link>
            <pubDate>Tue, 06 Jun 2023 06:19:36 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/kellys-kommentary-on-verizon-dbir-2023/</guid>
            <description>
I enjoy the simple pleasures in life: a fragrant bouquet of peonies; crisp, caramelized plantains; my fluffy cat purring in my lap; and receiving an early copy of this year’s Verizon Data Breach Investigations Report (DBIR) to pore over and let my hot takes simmer in the early summer sunlight.
While this year’s report lacks any graphs that remind me of “butterfly vomit” or “a parrot on miscellaneous bath salts”, I still found myself enjoying it. The “spaghetti charts” are not your simple curved line graph but instead are VIBING, which both my aesthetic and economic soul appreciates (the latter due to the vibe-vibes reflecting confidence intervals). The footnotes are also a treat and keep what otherwise might be an oppressively dry report feeling like a breeze to read. Verizon DBIR? More like Verizon DBIRRRRRRRRRRRR.
But the point of the report is the data within and while not much changes year over year, a few things stood out to me. This post shares my notable thoughts, questions, and (spicy) commentaries on the 2023 DBIR.
Cash Rules Everything Around Us Yet again, the data shows 94.6% of breaches are financially driven. Our oft-aggrandized advanced persistent threats (APTs) – the nation states who largely compromise for espionage purposes – are simply not the bulk of activity. Indeed, organized crime is 75% of all “threat actors” in breaches, followed by “Other” and “End-user” (the latter also including mistakes, not just purposefully malicious activity – a lumping together I personally dislike) with nation-state dead last.
It’s a reminder that when we invest effort in security, we should focus on the potential paths that require attackers to invest the least; if they’re overwhelmingly motivated by money, then they will be sensitive to return on investment (ROI) 1. But we should also think about what costs us the most money – not just in terms of direct costs from the compromise (like money wired away or incident response retainer fees) but also the costs of investing in mitigations for it – and opportunity costs, too, like hurting productivity due to security mitigations.
And it’s a reminder that the best way to hurt attackers, whether at local or macro scales, is to poison their ROI. One of my hobbies is brainstorming ways we can aggrieve attackers and force them to waste time and money, but, fun aside, there is wisdom in the idea that applying an economic lens to security benefits us.
Double Double Pretexting and Trouble I’ll admit I rarely think about pretexting and so when I read it in the report this year, my first thought was that it must be something like the new “first base” for the chronically online – but my confusion grew when the report said it’s commonly used in “bacon egg and cheese” sandwiches, which is what any New Yorker knows “BEC” means.
But no, pretexting is when an attacker crafts a compelling scenario to trick victims into doing something for them – like changing bank account details or sending money via a wire transfer (and BEC stands for “business email compromise,” which is much less delicious). Phishing will have something like an attachment or link the attacker wants you to click, while pretexting is more like the Nigerian Prince scam but without the purposefully implausible elements.
What I found interesting is that attackers are using email access to insert themselves into existing email threads to ask the victim to perform some sort of task (like updating bank details that mean money will route to the attacker). I spend a lot of time wishing I were excluded from email threads, so in some sense I respect the hustle and grind by attackers here.
Like, let’s take a step back. Attackers are basically simulating the most tedious, soul-sucking aspect of corporate life for profit and I’m pretty sure they’re still making less per capita than the people who do that for their day jobs – possibly expending more effort, too. Is the prevalence of “bullshit jobs” what they’re really exploiting? I just can’t imagine willingly joining the mundane, lifeless coordination dance millions of workers perform daily, because no amount of crimes makes that sexy.
It also doesn’t feel particularly scalable? From what I understand, purchasing email creds from the deep, dark web2 is not expensive – but surely accessing the inbox, finding appropriate email threads for inserting your request, typing a context-appropriate email, etc. involves quite a bit of effort?
Yet, it starts to make sense when we look at the payout. The median transaction size for a BEC incident is now $50k (up from ~$30k in 2020), which, as we’ll see in the next section on ransomware, is 5x the median ransomware transaction size. Is the effort and resource expenditure involved in pretexting -&gt; BEC vs. ransomware 5x as much? Maybe? Criminals leverage automation to great effect in ransomware operations, after all.
I don’t like making bets, so I’m eager to see what the data indicates next year; if pretexting/BEC continues to rise, and ransomware stagnates or falls, then it suggests criminals pivoting to the monetization path with higher ROI.
Ransomware at the Plateau of Productivity? Speaking of ransomware, one shocker in the report is that ransomware didn’t grow year over year – staying at 24% of breaches – despite what crowing headlines would have us believe. Does this suggest attackers have perhaps hit a “plateau of productivity” with it? Or are there simply higher ROI options (like Pretexting) that are dampening its adoption / expansion by criminal organizations?
What may shock many is that 93% of ransomware incidents had no loss (which is a little higher than last year but I know it surprised many people last year, too). We constantly hear that ransomware could be a company-ending compromise; it’s already a myth that tons of businesses (especially small ones) go out of business due to breaches and, based on this data, the financial loss from ransomware is unlikely to lead to bankruptcy for the vast majority of organizations.
The 95% range of ransomware losses is now $1.00 (you can’t even get a slice of pizza in NYC for that anymore) to $2.25 million. At that extreme end of the scale, the smattering of tools to protect against it are well worth the cost (assuming the companies who faced losses that high weren’t already using those security tools, which is not revealed by this data but I would very much like to know). The median loss, however, is $26,000. For a year’s worth of EDR subscription, $26,000 covers something like 350 endpoints (assuming zero additional costs beyond the sticker price)… and that doesn’t even help you with recovery like backups…
What’s especially intriguing is that the median ransomware payout is decreasing while the median loss is increasing. Is it, as the report suggests, because ransomware campaigns are going after smaller companies? In truth, I’ve heard the opposite – that criminals are targeting larger organizations in hope of a bigger median payout. The report also suggests the higher loss amounts might be due to the recovery costs, which feels more plausible. I’m open to clues as well as alternate theories about this one.
Mandatory Log4Shell Mention Attackers, perhaps unsurprisingly, tried to capitalize on companies not patching Log4j, with 32% of all scanning during the year conducted within 30 days of its release. Yet, overall, Log4Shell didn’t have the breach impact we might expect: it was mentioned in 0.4% of the incidents in their data set (“just under a hundred cases”). I think we can interpret this as a victory for defense, especially thanks to the tireless efforts of SREs and security engineers to patch and otherwise protect against it.
One thing that does fascinate me is that 73% of the Log4j cases in the DBIR’s data set involved Espionage, with 26% involving Organized Crime. To me this suggests one of two things (or both): first, that both nation states and criminal organizations tried to leverage Log4j but the nation states were more successful in achieving compromise; or, that nation states seized the opportunity to blend in with criminal activity so successfully that they actually dominated the activity (which could make sense given they were faster to operationalize exploits for it).
Side rant about SBOMs There’s SBOM propaganda at the end of the Log4j section which is sad because there are better recommendations they could have made and it feels like pandering to one of their key data providers rather than a thoughtful analysis of what might help readers the most. SBOMs may tell you where the software exists (if you can wrangle the deluge of JSON they entail) but they don’t help you take action. You could easily have SBOMs and still not patch critical vulns for 49 days (the median time they cite in the report).
The problem, from what I’ve seen, is very rarely awareness of a vulnerability; the issue is patching being a confusing, manual process or being worried about patches breaking things in prod or, relatedly, not having sufficient testing to feel confident in deploying the patch. Yes, in the Log4J scenario, some security teams scrambled to figure out “where is log4j in our stack???” but I also know of multiple software engineering teams who gave their security teams that info posthaste and yet it still didn’t get patched quickly (sadly, often due to sociopolitical reasons). I agree SBOMs may offer an asset management value prop, but we must always be careful to ask “okay, now what?” when we think about what value data might grant us.
SBOMs do not answer “okay, now what?” Repeatable change processes, including testing, answer that. Speedy, automated CI/CD answers that. I’ve discussed this at length elsewhere and I’m tired of standing on my soapbox, so let’s move on.
Don’t Roll Your Own Mail Server Another thing I found surprising is that 41% of breaches involve mail servers (not just the sending or receiving of email). Why are you still running your own mail server? It’s not only a pain in the ass but a recipe for deliverability problems, too, aside from the security issues that arise. We often hear security is the enemy of convenience but rolling your own mail server is both inconvenient and insecure. I do not get it.
Desktop Sharing isn’t Caring How many organizations still use desktop sharing software? The answer seems to be a lot given its prominence in these breach statistics… but does any employee even like desktop sharing software?
I have to call out Microsoft here. Android, ChromeOS, iOS, and macOS all require user permission for desktop screen capture or control – and display a warning dialog – but every running Windows application can snoop on the screen content, log keyboard presses, and inject user input without any sort of opt-in.
Alas, those same features allow organizational leadership to surveil their employees’ activity, and Microsoft can make money by supplying those capabilities. Even if requiring consent via warning dialogue would make a dent in this widespread attack problem, I’m doubtful they’ll remove the snoopability because then they’d sacrifice potential revenue. Perverse incentives like this make me want to flee to a cottage in a woodland grove with my cats and forsake snarky infosec commentary for a simple life of writing philosophical novels.
Delays on the Supply Chain Train We’re inundated with FUD about the software supply chain and all the horrors skulking within it but the data in this year’s DBIR suggests we might be investing too much of our feels in this problem. Based on this data, the most common “threat” from using third-party software is that their dev or admin credentials are compromised and the attacker then asks you to send money to the wrong place – and that would only apply to commercial vendors, not open source ones (although I’m sure open source contributors would love you to randomly send them money, they usually don’t ask for it).
Sure, there’s the “exploit vuln” category but that can include vulnerabilities in first-party and third-party code, so it’s difficult to tease apart the impact of “supply chain” specifically. And, regardless, “exploit vuln” comes well after use of stolen creds, “other”, ransomware, phishing, and pretexting. It’s about on par with “misdelivery” and, in fact, dropped from 7% last year to 5% this year.
If we think about the upstream attacks that especially spook us, where an attacker gains access to the supplier’s source code and pushes a malicious update, the data suggests its prevalence is minuscule – so much so that the DBIR excluded “Partner and Software update” as an action vector this year and even lumps “Backdoor / C2” together.
My hot take is a lot of this comes down to how the human brain works and ego. Humans prefer the world being a straightforward, orderly place. Throw rock up, rock come down. We invented religions, in part, to cope with the complexities of the world, like: “I planted the seeds in the ground but we haven’t gotten rain, therefore the gods are angry and we should offer sacrifices to them” or the more basic “[insert deity here] has a plan.”
We really do not like acknowledging that stochasticity slithers throughout all things, and we recognize, quite correctly, that the more things there are interacting in a system, the more likely the outcomes will baffle us. “Attacker sends email with link, human clicks on link, malware downloaded” is actually a pretty linear story involving few components.
But when we look at the sprawling graph that is our software ecosystem, we have no chance of comprehending it in full – and that complexity terrifies us. We don’t feel in control. Even if most of the time things go right (which is the case with software) and that complexity helps us achieve otherwise impossible outcomes, our lizard brains are still like “MANY BIG SCARE!” because we can’t comprehend it naturally in our brains. The emergent interactions shock us, unlike when a user clicks on a link or even wires money to the wrong bank account.
That’s the “how human brains work” part of my explanation for the furor over supply chain security of late. But there’s often a tinge of moral outrage in the supply chain discourse – that vendors “know better” and are “negligent” and other denigrations. This is where ego comes in. My hot take is that certain entities felt extremely embarrassed about the SolarWinds incident and, because they can’t invent a time machine and go back to the 80s and insist that software should continue to be built primarily in-house3, they decided to shame the private sector instead.
Let’s be real, software quality usually dies in any incumbent vendor required by compliance and regulations. I find it hilarious that in the same breath as “how dare you” they also recommend vendors that offer “zero trust” things (usually in name only), as if those also won’t make for delicious targets for nation state adversaries. By far the best targets for attackers from a software supply chain perspective are the tools required to meet regulatory requirements. There is very little incentive to invest in quality or innovation when you sell into a sticky budget line item.
But you can’t exactly shame the incentive paradigms begotten by compliance requirements4, so here we are. And from a private sector perspective, it’s really convenient for security leaders to claim that the real problem is this software spaghetti monster of societal proportions. No Board of Directors can expect you to tackle that problem, right? So, yet again in the infosec industry, we have this admittedly elegant symbiosis where we all avoid accountability and shout a lot and pretend like we’re making progress – and, even better, we get to gain a sense of self-righteous superiority in the meantime.
To be clear, the Verizon DBIR says none of this and if they ever did I’d expect it to be their last report given how much of their data comes from various government entities and incumbent security vendors. I say it because I still don’t know how to sell out (please help me).
Conclusion This year’s Verizon DBIR is worth the read to challenge your assumptions and ponder the data. To any vendors reading, please consider contributing to the data set (and no, Verizon isn’t paying me to say this). Realistically, this is the best aggregate of breach data we have in the private sector and, as the report notes a few times, there is sampling bias due to their sources. The greater the diversity of sources, the clearer the story we can craft of what is actually happening in attack land vs. the fan fiction we normally read about and pretend helps us.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
Nation-states are also sensitive to ROI but their payoff is often less quantifiable than money is. How do you assign value to classified documents about upcoming trade negotiations? Actually, this is a challenge I’d very much enjoy tackling in another life but it’s safe to say that quantifying it is harder than quantifying money. ↩︎
After playing Tears of the Kingdom for a bit, I now imagine the deep, dark web to be a chasm filled with Ganondorf’s gloom. ↩︎
Even Microsoft, the formerly big bad anti-OSS bogeycompany, builds much of their stack on OSS now. ↩︎
I mean, I have and I do shame incentive paradigms – and I find it fulfilling af – but I’ve long understood I am a stranger in a strange land. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/cyberpunk-cat-bar-chart.png" alt="An AI-generated image of a black cat playing with a bar chart in the style of a cyberpunk painting. The bar chart is neon green and the overall effect is vibrant and electronic, like upbeat synthesized music."></p>
<p>I enjoy the simple pleasures in life: a fragrant bouquet of peonies; crisp, caramelized plantains; my fluffy cat purring in my lap; and receiving an early copy of <a href="https://verizon.com/dbir">this year’s Verizon Data Breach Investigations Report (DBIR)</a> to pore over and let my hot takes simmer in the early summer sunlight.</p>
<p>While this year’s report lacks any graphs that remind me of <a href="https://twitter.com/swagitda_/status/1263900704558788610?s=20">“butterfly vomit”</a> or <a href="https://twitter.com/swagitda_/status/1392942248917192710?s=20">“a parrot on miscellaneous bath salts”</a>, I still found myself enjoying it. The “spaghetti charts” are not your simple curved line graph but instead are VIBING, which both my aesthetic and economic soul appreciates (the latter due to the vibe-vibes reflecting confidence intervals). The footnotes are also a treat and keep what otherwise might be an oppressively dry report feeling like a breeze to read. Verizon DBIR? More like Verizon DBIRRRRRRRRRRRR.</p>
<p>But the point of the report is the data within and while not much changes year over year, a few things stood out to me. This post shares my notable thoughts, questions, and (spicy) commentaries on the 2023 DBIR.</p>
<h2 id="cash-rules-everything-around-us">Cash Rules Everything Around Us</h2>
<p>Yet again, the data shows 94.6% of breaches are financially driven. Our oft-aggrandized advanced persistent threats (APTs) – the nation states who largely compromise for espionage purposes – are simply not the bulk of activity. Indeed, organized crime is 75% of all “threat actors” in breaches, followed by “Other” and “End-user” (the latter also including mistakes, not just purposefully malicious activity – a lumping together I personally dislike) with nation-state dead last.</p>
<p><img src="/blog/img/2023-dbir/threat-actors.png" alt="A screenshot from the Verizon Data Breach Investigations Report. It displays threat actor motives in breaches (n = 2,328) and threat actor varieties in breaches (n = 2,489). As stated in the surrounding text, the financial motive dominates the chart with 95% of breaches. The organized crime motivate dominates in the second chart, reflecting 75% of all threat actors in breaches."></p>
<p>It&rsquo;s a reminder that when we invest effort in security, we should focus on the potential paths that require attackers to invest the least; if they’re overwhelmingly motivated by money, then they will be sensitive to return on investment (ROI) <sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. But we should also think about what costs us the most money – not just in terms of direct costs from the compromise (like money wired away or incident response retainer fees) but also the costs of investing in mitigations for it – and <a href="https://queue.acm.org/detail.cfm?id=3588041">opportunity costs</a>, too, like hurting productivity due to security mitigations.</p>
<p>And it&rsquo;s a reminder that the best way to hurt attackers, whether at local or macro scales, is to poison their ROI. One of my hobbies is <a href="https://arxiv.org/abs/2211.16626">brainstorming ways</a> we can aggrieve attackers and force them to waste time and money, but, fun aside, there is wisdom in the idea that applying an economic lens to security benefits us.</p>
<h2 id="double-double-pretexting-and-trouble">Double Double Pretexting and Trouble</h2>
<p><img src="/blog/img/2023-dbir/pretexting.png" alt="A screenshot from the Verizon Data Breach Investigations Report. It displays pretexting incidents over time, with the line slowly slithering its way up from 2017 to 2023."></p>
<p>I’ll admit I rarely think about pretexting and so when I read it in the report this year, my first thought was that it must be something like the new “first base” for the chronically online – but my confusion grew when the report said it’s commonly used in “bacon egg and cheese” sandwiches, which is what any New Yorker knows &ldquo;BEC&rdquo; means.</p>
<p>But no, pretexting is when an attacker crafts a compelling scenario to trick victims into doing something for them – like changing bank account details or sending money via a wire transfer (and BEC stands for &ldquo;business email compromise,&rdquo; which is much less delicious). Phishing will have something like an attachment or link the attacker wants you to click, while pretexting is more like the Nigerian Prince scam but without the <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/WhyFromNigeria.pdf">purposefully implausible elements</a>.</p>
<p>What I found interesting is that attackers are using email access to insert themselves into existing email threads to ask the victim to perform some sort of task (like updating bank details that mean money will route to the attacker). I spend a lot of time wishing I were excluded from email threads, so in some sense I respect the hustle and grind by attackers here.</p>
<p>Like, let’s take a step back. Attackers are basically simulating the most tedious, soul-sucking aspect of corporate life for profit and I’m pretty sure they’re still making less per capita than the people who do that for their day jobs – possibly expending more effort, too. Is the prevalence of <a href="https://en.wikipedia.org/wiki/Bullshit_Jobs">“bullshit jobs”</a> what they’re really exploiting? I just can’t imagine willingly joining the mundane, lifeless coordination dance millions of workers perform daily, because no amount of crimes makes that sexy.</p>
<p>It also doesn’t feel particularly scalable? From what I understand, purchasing email creds from the deep, dark web<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> is not expensive – but surely accessing the inbox, finding appropriate email threads for inserting your request, typing a context-appropriate email, etc. involves quite a bit of effort?</p>
<p>Yet, it starts to make sense when we look at the payout. The median transaction size for a BEC incident is now $50k (up from ~$30k in 2020), which, as we’ll see in the next section on ransomware, is 5x the median ransomware transaction size. Is the effort and resource expenditure involved in pretexting -&gt; BEC vs. ransomware 5x as much? Maybe? Criminals leverage automation to great effect in ransomware operations, after all.</p>
<p>I don’t like making bets, so I’m eager to see what the data indicates next year; if pretexting/BEC continues to rise, and ransomware stagnates or falls, then it suggests criminals pivoting to the monetization path with higher ROI.</p>
<h2 id="ransomware-at-the-plateau-of-productivity">Ransomware at the Plateau of Productivity?</h2>
<p>Speaking of ransomware, one shocker in the report is that ransomware didn’t grow year over year – staying at 24% of breaches – despite what crowing headlines would have us believe. Does this suggest attackers have perhaps hit a “plateau of productivity” with it? Or are there simply higher ROI options (like Pretexting) that are dampening its adoption / expansion by criminal organizations?</p>
<p>What may shock many is that 93% of ransomware incidents had no loss (which is a little higher than last year but I know it surprised many people last year, too). We constantly hear that ransomware could be a company-ending compromise; it’s already <a href="https://youtu.be/Bvps1JdYYlE?t=834">a myth</a> that tons of businesses (especially small ones) go out of business due to breaches and, based on this data, the financial loss from ransomware is unlikely to lead to bankruptcy for the vast majority of organizations.</p>
<p><img src="/blog/img/2023-dbir/ransomware.png" alt="A screenshot from the Verizon Data Breach Investigations Report. It displays 95% and 80% confidence intervals of ransomware incident cost per complaint (n equals 2,575) as a dot chart. The text on the chart says 93% of incidents had no loss. The dots represent the remaining 7% starting at $1 up to $2,244,956. The median is $26,000. The 80% confidence interval is $526 on the low end and $699,000 on the high end."></p>
<p>The 95% range of ransomware losses is now $1.00 (you can’t even get a slice of pizza in NYC for that anymore) to $2.25 million. At that extreme end of the scale, the smattering of tools to protect against it are well worth the cost (assuming the companies who faced losses that high weren’t already using those security tools, which is not revealed by this data but I would <em>very</em> much like to know). The median loss, however, is $26,000. For a year’s worth of EDR subscription, $26,000 covers something like 350 endpoints (assuming zero additional costs beyond the sticker price)… and that doesn’t even help you with recovery like backups…</p>
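<p>For the curious, the back-of-envelope math behind that endpoint figure looks like this – the per-endpoint price below is an assumption implied by the ~350 number, not a quote from any vendor’s price sheet:</p>
<pre><code>MEDIAN_RANSOMWARE_LOSS = 26_000    # per the 2023 DBIR
EDR_PRICE_PER_ENDPOINT_YEAR = 75   # assumed sticker price; no add-ons, no recovery costs

# how many endpoints a year of EDR covers before it exceeds the median loss
print(MEDIAN_RANSOMWARE_LOSS / EDR_PRICE_PER_ENDPOINT_YEAR)   # ~346.7, i.e. "something like 350"
</code></pre>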
<p>What’s especially intriguing is that the median ransomware <em>payout</em> is decreasing while the median <em>loss</em> is increasing. Is it, as the report suggests, because ransomware campaigns are going after smaller companies? In truth, I’ve heard the opposite – that criminals are targeting <em>larger</em> organizations in hope of a bigger median payout. The report also suggests the higher loss amounts might be due to the recovery costs, which feels more plausible. I’m open to clues as well as alternate theories about this one.</p>
<h2 id="mandatory-log4shell-mention">Mandatory Log4Shell Mention</h2>
<p>Attackers, perhaps unsurprisingly, tried to capitalize on companies not patching Log4j, with 32% of all scanning during the year conducted within 30 days of its release. Yet, overall, Log4Shell didn’t have the breach impact we might expect: it was mentioned in 0.4% of the incidents in their data set (“just under a hundred cases”). I think we can interpret this as a victory for defense, especially thanks to the tireless efforts of SREs and security engineers to patch and otherwise protect against it.</p>
<p>One thing that does fascinate me is that 73% of the Log4j cases in the DBIR&rsquo;s data set involved Espionage, with 26% involving Organized Crime. To me this suggests one of two things (or both): first, that both nation states and criminal organizations tried to leverage Log4j but the nation states were more successful in achieving compromise; or, that nation states seized the opportunity to blend in with criminal activity so successfully that they actually dominated the activity (which could make sense given they were faster to operationalize exploits for it).</p>
<h3 id="side-rant-about-sboms">Side rant about SBOMs</h3>
<p>There’s SBOM propaganda at the end of the Log4j section which is sad because there are better recommendations they could have made and it feels like pandering to one of their key data providers rather than a thoughtful analysis of what might help readers the most. SBOMs may tell you where the software exists (if you can wrangle the deluge of JSON they entail) but they don’t help you take action. You could easily have SBOMs and still not patch critical vulns for 49 days (the median time they cite in the report).</p>
<p>The problem, from what I’ve seen, is very rarely awareness of a vulnerability; the issue is patching being a confusing, manual process or being worried about patches breaking things in prod or, relatedly, not having sufficient testing to feel confident in deploying the patch.
Yes, in the Log4J scenario, some security teams scrambled to figure out “where is log4j in our stack???” but I also know of multiple <em>software engineering</em> teams who gave their security teams that info posthaste and yet it still didn’t get patched quickly (sadly, often due to sociopolitical reasons). I agree SBOMs may offer an asset management value prop, but we must always be careful to ask “okay, now what?” when we think about what value data might grant us.</p>
<p>SBOMs do not answer “okay, now what?” Repeatable change processes, including testing, answer that. Speedy, automated CI/CD answers that. I’ve discussed this at length elsewhere and I’m tired of standing on my soapbox, so let’s move on.</p>
<h2 id="dont-roll-your-own-mail-server">Don’t Roll Your Own Mail Server</h2>
<p>Another thing I found surprising is that 41% of breaches involve mail servers (not just the sending or receiving of email). Why are you still running your own mail server? It’s not only a pain in the ass but a recipe for deliverability problems, too, aside from the security issues that arise. We often hear security is the enemy of convenience but rolling your own mail server is both inconvenient and insecure. I do not get it.</p>
<h2 id="desktop-sharing-isnt-caring">Desktop Sharing isn’t Caring</h2>
<p><img src="/blog/img/2023-dbir/desktop-sharing.png" alt="A screenshot from the Verizon Data Breach Investigations Report. It displays the action vectors for ransomware (n equals 690). The top is email at just under 40%, followed by desktop sharing software at around 30% and web app attacks a little less than that."></p>
<p>How many organizations still use desktop sharing software? The answer seems to be a lot given its prominence in these breach statistics&hellip; but does any employee even like desktop sharing software?</p>
<p>I have to call out Microsoft here. Android, ChromeOS, iOS, and macOS all require user permission for desktop screen capture or control – and display a warning dialog – but every running Windows application can snoop on the screen content, log keyboard presses, and inject user input without any sort of opt-in.</p>
<p>Alas, those same features allow organizational leadership to surveil their employees’ activity, and Microsoft can make money by supplying those capabilities. Even if requiring consent via warning dialogue would make a dent in this widespread attack problem, I’m doubtful they’ll remove the snoopability because then they&rsquo;d sacrifice potential revenue. Perverse incentives like this make me want to flee to a cottage in a woodland grove with my cats and forsake snarky infosec commentary for a simple life of writing philosophical novels.</p>
<h2 id="delays-on-the-supply-chain-train">Delays on the Supply Chain Train</h2>
<p>We’re inundated with FUD about the software supply chain and all the horrors skulking within it but the data in this year&rsquo;s DBIR suggests we might be investing too much of our feels in this problem. Based on this data, the most common “threat” from using third-party software is that their dev or admin credentials are compromised and the attacker then asks you to send money to the wrong place – and that would only apply to commercial vendors, not open source ones (although I&rsquo;m sure open source contributors would love you to randomly send them money, they usually don&rsquo;t ask for it).</p>
<p>Sure, there’s the “exploit vuln” category but that can include vulnerabilities in first-party and third-party code, so it’s difficult to tease apart the impact of &ldquo;supply chain&rdquo; specifically. And, regardless, “exploit vuln” comes well after use of stolen creds, “other”, ransomware, phishing, and pretexting. It’s about on par with “misdelivery” and, in fact, dropped from 7% last year to 5% this year.</p>
<p>If we think about the upstream attacks that especially spook us, where an attacker gains access to the supplier’s source code and pushes a malicious update, the data suggests its prevalence is minuscule – so much so that the DBIR excluded “Partner and Software update” as an action vector this year and even lumps “Backdoor / C2” together.</p>
<p>My hot take is a lot of this comes down to how the human brain works and ego. Humans prefer the world being a straightforward, orderly place. Throw rock up, rock come down. We invented religions, in part, to cope with the complexities of the world, like: “I planted the seeds in the ground but we haven’t gotten rain, therefore the gods are angry and we should offer sacrifices to them” or the more basic “[insert deity here] has a plan.”</p>
<p>We really do not like acknowledging that stochasticity slithers throughout all things, and we recognize, quite correctly, that the more things there are interacting in a system, the more likely the outcomes will baffle us. “Attacker sends email with link, human clicks on link, malware downloaded” is actually a pretty linear story involving few components.</p>
<p>But when we look at the sprawling graph that is our software ecosystem, we have no chance of comprehending it in full – and that complexity terrifies us. We don’t feel in control. Even if most of the time things go <em>right</em> (which is the case with software) and that complexity helps us achieve otherwise impossible outcomes, our lizard brains are still like “MANY BIG SCARE!” because we can’t comprehend it naturally in our brains. The emergent interactions shock us, unlike when a user clicks on a link or even wires money to the wrong bank account.</p>
<p>That’s the “how human brains work” part of my explanation for the furor over supply chain security of late. But there’s often a tinge of moral outrage in the supply chain discourse – that vendors “know better” and are “negligent” and other denigrations. This is where ego comes in. My hot take is that certain entities felt extremely embarrassed about the SolarWinds incident and, because they can’t invent a time machine and go back to the 80s and insist that software <em>should</em> continue to be built primarily in-house<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>, they decided to shame the private sector instead.</p>
<p>Let’s be real: software quality usually dies in any incumbent vendor whose product is required by compliance and regulations. I find it hilarious that in the same breath as “how dare you” they also recommend vendors that offer “zero trust” things (usually in name only), as if those also won’t make for delicious targets for nation-state adversaries. By far the best targets for attackers from a software supply chain perspective are the tools required to meet regulatory requirements. There is very little incentive to invest in quality or innovation when you sell into a sticky budget line item.</p>
<p>But you can’t exactly shame the incentive paradigms begotten by compliance requirements<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, so here we are. And from a private sector perspective, it’s really convenient for security leaders to claim that the real problem is this software spaghetti monster of societal proportions. No Board of Directors can expect you to tackle that problem, right? So, yet again in the infosec industry, we have this admittedly elegant symbiosis where we all avoid accountability and shout a lot and pretend like we’re making progress – and, even better, we get to gain a sense of self-righteous superiority in the meantime.</p>
<p>To be clear, the Verizon DBIR says none of this and if they ever did I’d expect it to be their last report given how much of their data comes from various government entities and incumbent security vendors. I say it because I still don’t know how to sell out (please help me).</p>
<h2 id="conclusion">Conclusion</h2>
<p>This year’s Verizon DBIR is worth the read to challenge your assumptions and ponder the data. To any vendors reading, please consider contributing to the data set (and no, Verizon isn’t paying me to say this). Realistically, this is the best aggregate of breach data we have in the private sector and, as the report notes a few times, there is sampling bias due to their sources. The greater the diversity of sources, the clearer the story we can craft of what is actually happening in attack land vs. the fan fiction we normally read and pretend helps us.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Nation-states are also sensitive to ROI but their payoff is often less quantifiable than money is. How do you assign value to classified documents about upcoming trade negotiations? Actually, this is a challenge I’d very much enjoy tackling in another life but it’s safe to say that quantifying it is harder than quantifying money.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>After playing <em>Tears of the Kingdom</em> for a bit, I now imagine the deep, dark web to be a chasm filled with <a href="https://zelda.fandom.com/wiki/Gloom">Ganondorf’s gloom</a>.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Even Microsoft, the formerly big bad anti-OSS bogeycompany, builds much of their stack on OSS now.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>I mean, I have and I do shame incentive paradigms &ndash; and I find it fulfilling af &ndash; but I&rsquo;ve long understood I am a stranger in a strange land.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Sun Tzu wouldn&#39;t like the cybersecurity industry</title>
            <link>https://kellyshortridge.com/blog/posts/sun-tzu-wouldnt-like-the-cybersecurity-industry/</link>
            <pubDate>Wed, 03 May 2023 08:00:34 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/sun-tzu-wouldnt-like-the-cybersecurity-industry/</guid>
            <description>
Cybersecurity loves its Sun Tzu quotes. What could feel more self-aggrandizing than imagining yourself as the commander of a stalwart army in a battle of erudition and cunning? We fancy ourselves as the “good guys&#34; defending a battlefield of green and gold.
The military fetish soaks into the traditional cybersecurity discourse and doesn’t help our reputation as bellicose obstructionists. In my new book, I shun metaphors of warfare and violence in favor of nature, nurturing, and nourishing to vanquish these aggressive tendencies.
But I also recognize that cybersecurity humans are often stubborn and so if they love Sun Tzu quotes, then I’ll show them how much their beloved would scorn traditional cybersecurity programs.
Whether they intend it or not, many security leaders are the “weak” generals Sun Tzu disdained, the wimpish princes he spoonfed “Baby’s First Warfare Advice.” When we obstruct our organization, we are “the bad guys,” the weak-willed generals who strip their own city of liveliness and liberty as defense against an approaching foe.
In The Art of War, Sun Tzu propounds three ways a ruler can bring misfortune on their army:
“…being ignorant of the fact that [the army] cannot obey. This is called hobbling the army.”
“…being ignorant of the conditions which obtain in an army. This causes restlessness in the soldiers’ minds.”
“…through ignorance of the military principle of adaptation to circumstances.”
All three of these pervade traditional cybersecurity. Let’s examine each in turn.
1. Ignorance of feasibility
Cybersecurity teams create policies and define procedures their colleagues can’t obey, or else it hobbles their productivity. The organization would flounder and lose competitive advantage if people actually followed those rules 100% of the time.
2. Ignorance of conditions
Cybersecurity programs are ignorant of conditions “on the ground.” They aren’t curious about work being done; they care only about work as they imagine it done. They don’t do user research. They blame “human error” without considering context.
3. Ignorance of adaptation
Sticking with status quo ways, proposing container firewalls and slandering “shadow IaC” rather than adapting to evolving conditions. Rather than seizing new opportunities for better defense from modern software tooling, infrastructure, and practices, they cling to what they know – their cognitive comforts – and claim it won’t work for them because of their “unique” business contexts (which mostly amounts to being emotionally resistant to change).
Flinging victory away
These three mistakes, as Sun Tzu says, cause “the army” – or our colleagues, as is the case in cybersecurity – to become “restless and distrustful.” It is “flinging victory away.” Cybersecurity flings victory away every day. Better systems security is possible, but it requires a remodeling of both principles and practices – and a willingness and ability to learn about how software and systems work in our modern era.
Elsewhere, Sun Tzu says that:
If you know the enemy and know yourself, you need not fear the result of a hundred battles.
In cybersecurity, we don’t even know ourselves1, and we maintain a menagerie of mildewy myths about attackers. We fumble in self-imposed darkness, clumsily grasping at how software and systems work; we can neither understand our own terrain nor the attackers’ calculus. The number of security tools with hooks into everything, running as root on critical systems, betrays this naiveté.
Cybersecurity programs stumble their way through each year, spirits sagging under the burdens of these three blunders. They spiral around the battlefield, plodding to the insidious percussion by vendors, research analysts, and journalists.
And it is insidious. The most remunerative way to sell out in cybersecurity is convincing security leaders that they can overcome each of Sun Tzu’s three ignorances to achieve victory (if you can accomplish this via generative AI then congrats on your seed funding!). It is a soothing message: you don’t have to change your thinking or methods – convenience is what you deserve and we’ll help you make it appear like security is still being done – ascend amongst the cardboard cutout fleshlings with your cardboard cutout security stack – blame the fast, ever-evolving winds of change for your failures.
We architect illusions; like Sun Tzu advised, deception is key to victory, but the victory in this case is avoiding accountability rather than successfully outmaneuvering attackers. If being a CISO is the “hardest job in corporate America” (it is not) when your success metrics are imaginary friends like “% of risk coverage” or “time to detect” (because who cares about actually acting?) then it is worth asking whether we are our own source of hardship.
We deceive ourselves – the most human endeavor of all – to maintain this convenience. Security leaders wail that more modern ways are impossible for “real” businesses, despite similar businesses – including those with legacy systems – transforming. It already feels so difficult; how much more difficult will it be once tangible outcomes are required? Will we still be relevant? Who are we if we are not needed?
This dynamic is creating a significant divide in security efficacy, an iteration (admittedly a rather esoteric one) on the “Two Americas” theme. If you feel that you cannot keep up with the business, technology trends, or software velocity, you are not alone; but you and your compatriots are languishing while others march onward towards victory. And it is this stagnancy that presents an opportunity for SRE and platform engineering teams to swoop in, better versed in software and execution alike.
How do we stop flinging victory away? We dismantle our ignorance to avoid these three mistakes:
1. Understanding of work-as-done
We understand how work is actually performed and how our systems actually behave so we can craft security solutions that eliminate hazards by design or reduce hazardous methods and materials by design.
2. Understanding of local conditions
We research the local context of the users we care about, like those interacting with a system (whether developers, accountants, and so forth). We never implement a solution – especially an administrative control, like a policy or training – without performing user research to understand the experiences of those who will be subjected to our prescribed solution.
3. Understanding of adaptation
We cultivate adaptive capacity, prioritizing investment in our ability to learn and drive change. We expend effort on iterative improvement rather than attempting perfect prevention. We embrace speed and all the practices, patterns, and tools that can help us quickly adapt to evolving conditions.
Transforming towards resilience
All three of these strategies can be summarized as transforming towards resilience. To connect the final stars in our concept constellation, I’ll close with a definition of Security Chaos Engineering to reveal why I’m pretty sure Sun Tzu would think it slaps.
Security Chaos Engineering (SCE) is a socio-technical transformation that enables the ability to gracefully respond to failure and adapt to evolving conditions. It embraces resilience as its philosophical foundation: that our digital world will constantly evolve and we must therefore “change to stay the same.” For that reason, I also describe SCE as “platform resilience engineering.”
What matters in the SCE world of resilience is continuously refined mental models of our systems; a learning culture and feedback loops; and willingness and capacity to change. This is precisely what Sun Tzu espoused: mental and strategic flexibility to seize opportunities while outmaneuvering adversaries to reduce their opportunities.
This isn’t something brand new I divined out of the ether. SCE is steeped in decades of scholarship and practice from other complex systems domains in the realm of resilience engineering and, as Sun Tzu shows, it draws on ancient wisdom (not just in warfare, but in matters of socioecological systems, too).
Such a transformation is entirely within the grasp of any organization; but the wild problem looming ahead is whether you’ll pursue resilience or convenience. Many humans prefer playing pretend, performing ritualistic motions as scaffolding for an otherwise aimless life. Craving convenience is only natural for most humans.
If that is your choice, I wish you well – but please do kindly step aside for the rest of us so we may sate our hunger to achieve real security outcomes and sustain systems resilience. A life of checkbox compliance may suit your tastes, but we demand more from life.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
This made me think of the iconic clip from Mariah Carey: https://www.youtube.com/watch?v=-lposG3n5u4 ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/cyberpunk-sun-tzu.png" alt="An AI-generated image of Sun Tzu in the style of a cyberpunk painting in bisexual lighting. His eyes are glowing neon green because his disappointment in the infosec industry is radioactive in intensity."></p>
<p>Cybersecurity loves its Sun Tzu quotes. What could feel more self-aggrandizing than imagining yourself as the commander of a stalwart army in a battle of erudition and cunning? We fancy ourselves as the “good guys” defending a battlefield of green and gold.</p>
<p>The military fetish soaks into the traditional cybersecurity discourse and doesn’t help our reputation as bellicose obstructionists. In <a href="https://www.securitychaoseng.com/">my new book</a>, I shun metaphors of warfare and violence in favor of nature, nurturing, and nourishing to vanquish these aggressive tendencies.</p>
<p>But I also recognize that cybersecurity humans are often stubborn and so if they love Sun Tzu quotes, then I’ll show them how much their beloved would scorn traditional cybersecurity programs.</p>
<p>Whether they intend it or not, many security leaders are the &ldquo;weak&rdquo; generals Sun Tzu disdained, the wimpish princes he spoonfed &ldquo;Baby&rsquo;s First Warfare Advice.&rdquo; When we obstruct our organization, we are &ldquo;the bad guys,&rdquo; the weak-willed generals who strip their own city of liveliness and liberty as defense against an approaching foe.</p>
<p>In <em>The Art of War</em>, Sun Tzu propounds three ways a ruler can bring misfortune on their army:</p>
<ol>
<li>
<p>“…being ignorant of the fact that [the army] cannot obey. This is called hobbling the army.”</p>
</li>
<li>
<p>“…being ignorant of the conditions which obtain in an army. This causes restlessness in the soldiers’ minds.”</p>
</li>
<li>
<p>“…through ignorance of the military principle of adaptation to circumstances.”</p>
</li>
</ol>
<p>All three of these pervade traditional cybersecurity. Let’s examine each in turn.</p>
<p><strong>1. Ignorance of feasibility</strong></p>
<p>Cybersecurity teams create policies and define procedures their colleagues can’t obey, or else it hobbles their productivity. The organization would flounder and lose competitive advantage if people actually followed those rules 100% of the time.</p>
<p><strong>2. Ignorance of conditions</strong></p>
<p>Cybersecurity programs are ignorant of conditions “on the ground.” They aren’t curious about work being done; they care only about work as they imagine it done. They don’t do user research. They blame “human error” without considering context.</p>
<p><strong>3. Ignorance of adaptation</strong></p>
<p>Sticking with status quo ways, proposing container firewalls and slandering “shadow IaC” rather than adapting to evolving conditions. Rather than seizing new opportunities for better defense from modern software tooling, infrastructure, and practices, they cling to what they know – their cognitive comforts – and claim it won’t work for them because of their “unique” business contexts (which mostly amounts to being emotionally resistant to change).</p>
<h2 id="flinging-victory-away">Flinging victory away</h2>
<p>These three mistakes, as Sun Tzu says, cause “the army” – or our colleagues, as is the case in cybersecurity – to become “restless and distrustful.” It is “flinging victory away.” Cybersecurity flings victory away every day. Better systems security is possible, but it requires a remodeling of both principles and practices – and a willingness and ability to learn about how software and systems work in our modern era.</p>
<p>Elsewhere, Sun Tzu says that:</p>
<blockquote>
<p>If you know the enemy and know yourself, you need not fear the result of a hundred battles.</p>
</blockquote>
<p>In cybersecurity, we don’t even know ourselves<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, and we maintain a menagerie of mildewy myths about attackers. We fumble in self-imposed darkness, clumsily grasping at how software and systems work; we can neither understand our own terrain nor the attackers’ calculus. The number of security tools with hooks into everything, running as root on critical systems, betrays this naiveté.</p>
<p>Cybersecurity programs stumble their way through each year, spirits sagging under the burdens of these three blunders. They spiral around the battlefield, plodding to the insidious percussion by vendors, research analysts, and journalists.</p>
<p>And it is insidious. The most remunerative way to sell out in cybersecurity is convincing security leaders that they can overcome each of Sun Tzu&rsquo;s three ignorances to achieve victory (if you can accomplish this via generative AI then congrats on your seed funding!). It is a soothing message: you don’t have to change your thinking or methods &ndash; convenience is what you deserve and we&rsquo;ll help you make it appear like security is still being done &ndash; ascend amongst the cardboard cutout fleshlings with your cardboard cutout security stack &ndash; blame the fast, ever-evolving winds of change for your failures.</p>
<p>We architect illusions; like Sun Tzu advised, deception is key to victory, but the victory in this case is avoiding accountability rather than successfully outmaneuvering attackers. If being a CISO is the &ldquo;hardest job in corporate America&rdquo; (it is not) when your success metrics are imaginary friends like &ldquo;% of risk coverage&rdquo; or &ldquo;time to detect&rdquo; (because who cares about actually acting?) then it is worth asking whether we are our own source of hardship.</p>
<p>We deceive ourselves &ndash; the most human endeavor of all &ndash; to maintain this convenience. Security leaders wail that more modern ways are impossible for “real” businesses, despite similar businesses &ndash; including those with legacy systems &ndash; transforming. It already feels so difficult; how much more difficult will it be once tangible outcomes are required? Will we still be relevant? Who are we if we are not needed?</p>
<p>This dynamic is creating a significant divide in security efficacy, an iteration (admittedly a rather esoteric one) on <a href="https://en.wikipedia.org/wiki/Two_Americas">the “Two Americas” theme</a>. If you feel that you cannot keep up with the business, technology trends, or software velocity, you are not alone; but you and your compatriots are languishing while others march onward towards victory. And it is this stagnancy that presents an opportunity for SRE and platform engineering teams to swoop in, better versed in software and execution alike.</p>
<p>How do we stop flinging victory away? We dismantle our ignorance to avoid these three mistakes:</p>
<p><strong>1. Understanding of work-as-done</strong></p>
<p>We understand how work is actually performed and how our systems actually behave so we can craft security solutions that eliminate hazards by design or reduce hazardous methods and materials by design.</p>
<p><strong>2. Understanding of local conditions</strong></p>
<p>We research the local context of the users we care about, like those interacting with a system (whether developers, accountants, and so forth). We never implement a solution – especially an administrative control, like a policy or training – without performing user research to understand the experiences of those who will be subjected to our prescribed solution.</p>
<p><strong>3. Understanding of adaptation</strong></p>
<p>We cultivate adaptive capacity, prioritizing investment in our ability to learn and drive change. We expend effort on iterative improvement rather than attempting perfect prevention. We embrace speed and all the practices, patterns, and tools that can help us quickly adapt to evolving conditions.</p>
<h2 id="transforming-towards-resilience">Transforming towards resilience</h2>
<p>All three of these strategies can be summarized as transforming towards resilience. To connect the final stars in our concept constellation, I’ll close with a definition of Security Chaos Engineering to reveal why I&rsquo;m pretty sure Sun Tzu would think it slaps.</p>
<p><strong>Security Chaos Engineering (SCE) is a socio-technical transformation that enables the ability to gracefully respond to failure and adapt to evolving conditions.</strong> It embraces resilience as its philosophical foundation: that our digital world will constantly evolve and we must therefore “change to stay the same.” For that reason, I also describe SCE as &ldquo;platform resilience engineering.&rdquo;</p>
<p>What matters in the SCE world of resilience is continuously refined mental models of our systems; a learning culture and feedback loops; and willingness and capacity to change. This is precisely what Sun Tzu espoused: mental and strategic flexibility to seize opportunities while outmaneuvering adversaries to reduce their opportunities.</p>
<p>This isn’t something brand new I divined out of the ether. SCE is steeped in decades of scholarship and practice from other complex systems domains in the realm of resilience engineering and, as Sun Tzu shows, it draws on ancient wisdom (not just in warfare, but in matters of socioecological systems, too).</p>
<p>Such a transformation is entirely within the grasp of any organization; but <a href="https://www.econtalk.org/russ-roberts-and-mike-munger-on-wild-problems/">the wild problem</a> looming ahead is whether you’ll pursue resilience or convenience. Many humans prefer playing pretend, performing ritualistic motions as scaffolding for an otherwise aimless life. Craving convenience is only natural for most humans.</p>
<p>If that is your choice, I wish you well &ndash; but please do kindly step aside for the rest of us so we may sate our hunger to achieve real security outcomes and sustain systems resilience. A life of checkbox compliance may suit your tastes, but we demand more from life.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://www.securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>This made me think of the iconic clip from Mariah Carey: <a href="https://www.youtube.com/watch?v=-lposG3n5u4">https://www.youtube.com/watch?v=-lposG3n5u4</a>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Security-by-Design and by-Default: Sustaining Software Resilience</title>
            <link>https://kellyshortridge.com/blog/posts/security-by-design-default-software-resilience/</link>
            <pubDate>Mon, 17 Apr 2023 08:00:11 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/security-by-design-default-software-resilience/</guid>
            <description>
Last Thursday, a consortium of intelligence agencies spanning Australia, Canada, Germany, Netherlands, New Zealand, United Kingdom, and the United States published a document called Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default. Its aim is to incite vendors to deliver products that are secure-by-design and -default.
Per their guidance, secure-by-design “means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.” And secure-by-default “means products are resilient against prevalent exploitation techniques out of the box without additional charge.” Both are admirable, worthy pursuits.
But a natural question for readers after digesting this guidance is, “Okay, secure-by-design and secure-by-default sound great… but how do I implement all of this in my organization?”
Well, darling readers, the timing could not be more fortuitous. You see, the obsessive passion project book I wrote throughout last year – Security Chaos Engineering: Sustaining Resilience in Software and Systems – just came out. And I cannot overstate how perfectly the book answers that very valid question of how we can achieve these noble pursuits in practice.
In this post, I’ll map the recommendations in CISA, et al’s report (which I’ll refer to as SBDD for short) to specific chapters and sections in my own book, optimizing for brevity since our time is valuable. If the report is an apéritif, then consider the book a flavorful and fragrant multi-course meal to leave you fulfilled and flourishing.
Software Product Security Principles
There are three principles the SBDD guidance outlines (page 6), two of which are covered in the book at some length1:
The burden of security should not fall solely on the customer
Build organizational structure and leadership to achieve these goals
To really immerse yourself in this first principle, skip to Chapter 7 in the book within the section “Designing a Solution.” I suspect you’ll especially enjoy the sidebar on “Solution Design and the Thermodynamics of Effort,” which crystallizes the dynamic that effort can only be shifted around between humans, not destroyed.
The second principle defined by SBDD is also the focus of Chapter 7, which introduces the concept of a Platform Resilience Engineering team dedicated to sustaining resilience in software and systems. Platform resilience engineering teams treat software resilience like a product:
A platform engineering approach to resilience treats security as a product, as something created through a process that provides benefits to a market. Platform engineering teams treat their internal customers’ outages as their own outages, as a call to action to build a better product for them. (~page 2722 of Security Chaos Engineering: Sustaining Resilience in Software and Systems)
This topic also relates to one spot where I differ with their report: the emphasis on “IT departments.” Securing corporate laptops radically differs from assessing software quality; they are distinct skill sets. Chapter 7 untangles this distinction and offers a path forward for the latter (assessing software quality, of which security is a part).
Secure-by-Design
Software manufacturers should perform a risk assessment to identify and enumerate prevalent cyber threats to critical systems, and then include protections in product blueprints that account for the evolving cyber threat landscape. (SBDD page 4)
Their Secure-by-Design call-to-action resembles the E&amp;E Assessment Approach I describe in Chapter 2, albeit in different lingo (since “risk” is a slippery concept we largely eschew). The E&amp;E Resilience Assessment reflects two “tiers” of assessment: Evaluation and Experimentation.
Tier 1 (Evaluation) emphasizes the value of creating decision trees for critical functionality, which directly maps to the SBDD guidance of “identify and enumerate prevalent cyber threats to critical systems.” The decision trees you create should capture the paths attackers might take to achieve a particular goal, as well as capture existing and prospective security mechanisms – satisfying this criteria.
In the book, we take it further than what SBDD suggests, however. It isn’t enough to “include protections in product blueprints”; we must verify that those protections work as expected. This is Tier 2 (Experimentation), in which we conduct resilience stress tests – what we refer to as “chaos experiments” in software land – to observe how our system behaves and responds in an adverse scenario.
Another element of SBDD’s guidance is also satisfied by the E&amp;E Resilience Assessment:
The authoring agencies recommend manufacturers use a tailored threat model during the product development stage to address all potential threats to a system and account for each system’s deployment process.” (SBDD page 4)
We need only conduct the Tier 1 assessment to satisfy this; nevertheless, we still recommend both the Evaluation and Experimentation phases to fully assess system resilience.
Memory-safe languages
Manufacturers are encouraged make hard tradeoffs and investments, including those that will be “invisible” to the customers, such as migrating to programming languages that eliminate widespread vulnerabilities. (SBDD page 5)
Prioritize the use of memory safe languages wherever possible. (SBDD page 8)
The value of memory-safe languages in sustaining software resilience receives special focus in a few sections of my book:
Chapter 4 in the section called “Standardization of Raw Materials,” which includes a tl;dr of what memory safety is, why it matters, how to select the right language when building software, and how to handle migrations or otherwise cope with lingering C/C&#43;&#43; code.
Chapter 7 in the section called “Substitute Less Hazardous Methods or Materials,” since, per NSA guidance elsewhere as well, we should treat C/C&#43;&#43; code as hazardous materials to avoid when possible.
Secure software components
Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks,) from verified commercial, open source, and other third-party developers to ensure robust security in consumer software products. (SBDD page 8)
Chapter 4 in my book covers this “best practice” as well. It’s honestly tricky for me to pinpoint where precisely to point you because it’s such a pervasive topic woven throughout the chapter. But my favorite warning box in the book is relevant:
Angy Scorpion is in the section “‘Boring’ Technology Is Resilient Technology,” but I also recommend the following sub-sections in Chapter 4 (really, just read the whole chapter; it’s the longest one by far in the book, but bursting with practical opportunities):
Standardization of Raw Materials
Standardization of Patterns and Tools
Integration Tests, Load Tests, and Test Theater
Modularity: Humanity’s Ancient Tool for Resilience
Web template frameworks &amp; parameterized queries
Use web template frameworks that implement automatic escaping of user input to avoid web attacks such as cross-site scripting. (SBDD page 8)
Both Chapter 3 and Chapter 4 cover the principle of “Choose Boring” technology, which covers this. We don’t want to DIY something and potentially miss important security characteristics when there are readily-available frameworks and libraries that are well-vetted.
Use parameterized queries rather than including user input in queries, to avoid SQL injection attacks. (SBDD page 8)
This exact recommendation is covered in Chapter 4, subsection “Documenting Why and When,” although we also recommend ORMs:
The organization should instead standardize on their database access patterns and make choices that make the secure way the default way. (~page 185 of Security Chaos Engineering: Sustaining Resilience in Software and Systems)
It’s flattering to imagine these agencies took a peek at my book draft along the way to inform their guidance given the similarities in language. Please do not start a real conspiracy theory about this.
Code review
Strive to ensure that code submitted into products goes through peer review by other developers to ensure higher quality. (SBDD page 9)
The art of thoughtful code reviews is extensively covered in Chapter 4 in the subsection “Code Reviews and Mental Models.” One thing the report doesn’t mention but the book does is that we really need to implement code reviews for tests, too, and also expend special effort on code reviews for error-handling functionality.
Defense in depth
Design infrastructure so that the compromise of a single security control does not result in compromise of the entire system. For example, ensuring that user privileges are narrowly provisioned and access control lists are employed can reduce the impact of a compromised account. Also, software sandboxing techniques can quarantine a vulnerability to limit compromise of an entire application. (SBDD page 9)
Isolation is so essential for resilience – and life itself – that it is threaded throughout the whole book. But, for this “best practice,” read Chapter 3, subsections “Investing in Loose Coupling in Software Systems” and “Introducing Linearity into our Systems”; they cover isolation as well as design practices like the D.I.E. triad (short for Distributed, Immutable, and Ephemeral). I cover different forms of isolation worth considering, too – fault isolation, performance isolation, and cost isolation. You’ll even learn some things about biology along the way:
The goal espoused by SBDD here also reflects eliminating or reducing hazards by design, which we explore in depth in Chapter 7. I highly recommend beholding the “Ice Cream Cone Hierarchy of Security Solutions,” which is especially tasty wisdom.
Secure-by-Default
A macro idea in the report’s Secure-by-Default section is treating security like a product, which is the precise focus of Chapter 7 in my book. The SBDD report also touches on the futility of training/policy/guidance relative to building in security qualities by design, which is also covered in Chapter 7. And, surprise surprise, the importance of UX to security solutions is also covered in Chapter 7.
As a little taste of what you’ll read, here’s a cheerful lemur teaching you about user journeys:
Safe to say that if you care about the secure-by-default angle, Chapter 7 is for you.
Consider the user experience consequences of security settings
Each new setting increases the cognitive burden on end users and should be assessed in conjunction with the business benefit it derives. Ideally, a setting should not exist; instead, the most secure setting should be integrated into the product by default. When configuration is necessary, the default option should be broadly secure against common threats. (SBDD page 11)
Cognitive overhead is a common concern covered in the book, but for this guidance I highly recommend this subsection in Chapter 7: “Understanding How Humans Make Trade-Offs Under Pressure.” We cover how human brains really work, especially under pressure; why we should be curious about users’ workarounds rather than incensed; and how to respect the cognitive load of our users.
Chapter 7 also discusses choice architecture and the power of defaults in the subsection “Substitute Less Hazardous Methods or Materials,” drawing from behavioral science.
Forward-looking security over backwards compatibility
Too often, backwards-compatible legacy features are included, and often enabled, in products despite causing risks to product security. Prioritize security over backwards compatibility, empowering security teams to remove insecure features even if it means causing breaking changes. (SBDD page 11)
The report suggests avoiding backwards compatibility for security’s sake, which belies the sheltered purview of the authors, who need not appease the fickle market gods. Few organizations will forego a seven-figure contract with a customer who requires backwards compatibility. The report does mention offering carrots to customers to incentivize updates, although doesn’t elaborate.
Organizations already charge customers more to maintain backwards compatibility, which is an incentive to upgrade. I suspect many would be surprised how much some customers are willing to pay to keep things as static as possible.
But I do think there are other opportunities here in terms of how to achieve this. Towards the end of Chapter 4, I describe the Strangler Fig Pattern for software transformation; it’s a beloved migration pattern in software engineering land but few infosec people know about it in my experience. It’s perfect for especially conservative organizations worried about breaking things when modernizing and migrating. The section in which it lives, “Flexibility and Willingness to Change,” is worth a read regardless to inoculate ourselves against the pestilent industry rhetoric that change is dangerous.
Conclusion
For years, I’ve watched the faces of infosec mortals twist in fear and revulsion at my suggestion that we can and should infuse security by design. “But the Falmer Magic Wheel says blahdiblah!” I don’t care. A regressive approach will never help us outmaneuver attackers. This report, though, grants me a secret weapon: I can appeal to authority now, too, rather than rely on logical arguments – because logic is a poor persuasive tool, as I’ve learned through tribulations and tears and will forever lament. I’m unironically thrilled.
I sincerely think that the guidance within SBDD is a strong start as far as why, but weak (by design, I think) on how. I am obviously quite biased as it aligns quite well with my own thinking – thinking that blossomed into 300 pages in this book (I don’t have the luxury of being a bunch of intelligence agencies in a trench coat where people automatically trust what I say without elaborate justification).
I hope that for those in the community who crave meaningful security outcomes – to achieve victories like secure-by-design and secure-by-default, or even come close to the summit – that the book charts your course to navigate the oft-tumultuous waters of software delivery and sustain resilience in your systems.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
The third principle is “Embrace radical transparency and accountability,” which we do indirectly cover in our discussion of success metrics in Chapter 7 but not directly enough that it counts. ↩︎
I haven’t actually received my copy of my own book yet but will update page numbers once I do. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/cat-protecting-data-center.png" alt="A cyberpunk painting of a black cat with glowing green eyes protecting a data center. The datacenter is bathed in blue with glowing neon green lights and terminal screens. You do not want to mess with this cat."></p>
<p>Last Thursday, a consortium of intelligence agencies spanning Australia, Canada, Germany, Netherlands, New Zealand, United Kingdom, and the United States published a document called <em><a href="https://www.cisa.gov/resources-tools/resources/secure-by-design-and-default">Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default</a></em>. Its aim is to incite vendors to deliver products that are secure-by-design and -default.</p>
<p>Per their guidance, secure-by-design “means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.” And secure-by-default “means products are resilient against prevalent exploitation techniques out of the box without additional charge.” Both are admirable, worthy pursuits.</p>
<p>But a natural question for readers after digesting this guidance is, “Okay, secure-by-design and secure-by-default sound great… but how do I implement all of this in my organization?”</p>
<p>Well, darling readers, the timing could not be more fortuitous. You see, the <del>obsessive passion project</del> book I wrote throughout last year – <em><a href="https://www.securitychaoseng.com/">Security Chaos Engineering: Sustaining Resilience in Software and Systems</a></em> – just came out. And I cannot overstate how perfectly the book answers that very valid question of how we can achieve these noble pursuits in practice.</p>
<p>In this post, I’ll map the recommendations in CISA, et al’s report (which I’ll refer to as <em>SBDD</em> for short) to specific chapters and sections in my own book, optimizing for brevity since our time is valuable. If the report is an apéritif, then consider the book a flavorful and fragrant multi-course meal to leave you fulfilled and flourishing.</p>
<h2 id="software-product-security-principles">Software Product Security Principles</h2>
<p>There are three principles the <em>SBDD</em> guidance outlines (page 6), two of which are covered in the book at some length<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>:</p>
<ol>
<li>The burden of security should not fall solely on the customer</li>
<li>Build organizational structure and leadership to achieve these goals</li>
</ol>
<p>To really immerse yourself in this first principle, skip to Chapter 7 in the book within the section “Designing a Solution.” I suspect you’ll especially enjoy the sidebar on “Solution Design and the Thermodynamics of Effort,” which crystallizes the dynamic that effort can only be shifted around between humans, not destroyed.</p>
<p>The second principle defined by <em>SBDD</em> is also the focus of Chapter 7, which introduces the concept of a Platform Resilience Engineering team dedicated to sustaining resilience in software and systems. Platform resilience engineering teams treat software resilience like a product:</p>
<blockquote>
<p>A platform engineering approach to resilience treats security as a product, as something created through a process that provides benefits to a market. Platform engineering teams treat their internal customers’ outages as their own outages, as a call to action to build a better product for them. (~page 272<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> of <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>)</p>
</blockquote>
<p>This topic also relates to one spot where I differ with their report: the emphasis on “IT departments.” Securing corporate laptops radically differs from assessing software quality; they are distinct skill sets. Chapter 7 untangles this distinction and offers a path forward for the latter (assessing software quality, of which security is a part).</p>
<h2 id="secure-by-design">Secure-by-Design</h2>
<blockquote>
<p>Software manufacturers should perform a risk assessment to identify and enumerate prevalent cyber threats to critical systems, and then include protections in product blueprints that account for the evolving cyber threat landscape. (<em>SBDD</em> page 4)</p>
</blockquote>
<p>Their Secure-by-Design call-to-action resembles the E&amp;E Assessment Approach I describe in Chapter 2, albeit in different lingo (since “risk” is a slippery concept we largely eschew). The E&amp;E Resilience Assessment reflects two “tiers” of assessment: Evaluation and Experimentation.</p>
<p>Tier 1 (Evaluation) emphasizes the value of creating decision trees for critical functionality, which directly maps to the <em>SBDD</em> guidance of <em>“identify and enumerate prevalent cyber threats to critical systems.”</em> The <a href="https://deciduous.app/">decision trees</a> you create should capture the paths attackers might take to achieve a particular goal, as well as existing and prospective security mechanisms – satisfying this criterion.</p>
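<p>If it helps to picture what such a tree captures, here’s a toy sketch – illustrative only, not the Deciduous format, with an entirely hypothetical scenario – of attacker paths annotated with existing and prospective mitigations:</p>
<pre><code class="language-python"># A toy attacker decision tree for the Evaluation tier. This is illustrative
# only -- not the Deciduous format -- and the scenario is hypothetical.
decision_tree = {
    "goal": "exfiltrate customer data from the analytics bucket",
    "paths": [
        {
            "action": "phish a developer for cloud console credentials",
            "mitigations": ["phishing-resistant MFA (existing)"],
            "then": {
                "action": "use the stolen creds to copy bucket objects",
                "mitigations": [
                    "bucket policy denies access from outside the VPC (existing)",
                    "alert on bulk object reads (prospective)",
                ],
            },
        },
        {
            "action": "exploit a vuln in the public-facing app, pivot to its role",
            "mitigations": ["least-privilege instance role (prospective)"],
        },
    ],
}

def prospective_gaps(step):
    """Collect (action, mitigation) pairs that are still only prospective."""
    gaps = []
    for m in step.get("mitigations", []):
        if "prospective" in m:
            gaps.append((step["action"], m))
    if "then" in step:
        gaps.extend(prospective_gaps(step["then"]))
    return gaps

for path in decision_tree["paths"]:
    print(prospective_gaps(path))
</code></pre>
<p>Even a structure this small makes the “prospective” entries easy to pull out, which is one practical payoff of the Evaluation tier.</p>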
<p>In the book, we take it further than what <em>SBDD</em> suggests, however. It isn’t enough to <em>“include protections in product blueprints”</em>; we must <strong>verify</strong> that those protections work as expected. This is Tier 2 (Experimentation), in which we conduct resilience stress tests – what we refer to as “chaos experiments” in software land – to observe how our system behaves and responds in an adverse scenario.</p>
<p>Another element of <em>SBDD</em>’s guidance is also satisfied by the E&amp;E Resilience Assessment:</p>
<blockquote>
<p>The authoring agencies recommend manufacturers use a tailored threat model during the product development stage to address all potential threats to a system and account for each system’s deployment process. (<em>SBDD</em> page 4)</p>
</blockquote>
<p>We need only conduct the Tier 1 assessment to satisfy this; nevertheless, we still recommend both the Evaluation and Experimentation phases to
fully assess system resilience.</p>
<h3 id="memory-safe-languages">Memory-safe languages</h3>
<blockquote>
<p>Manufacturers are encouraged make hard tradeoffs and investments, including those that will be “invisible” to the customers, such as migrating to programming languages that eliminate widespread vulnerabilities. (<em>SBDD</em> page 5)</p>
</blockquote>
<blockquote>
<p>Prioritize the use of memory safe languages wherever possible. (<em>SBDD</em> page 8)</p>
</blockquote>
<p>The value of memory-safe languages in sustaining software resilience receives special focus in a few sections of my book:</p>
<ol>
<li>
<p>Chapter 4 in the section called “Standardization of Raw Materials,” which includes a tl;dr of what memory safety is, why it matters, how to select the right language when building software, and how to handle migrations or otherwise cope with lingering C/C++ code.</p>
</li>
<li>
<p>Chapter 7 in the section called “Substitute Less Hazardous Methods or Materials,” since, <a href="https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/">per NSA guidance elsewhere</a> as well, we should treat C/C++ code as hazardous materials to avoid when possible.</p>
</li>
</ol>
<h3 id="secure-software-components">Secure software components.</h3>
<blockquote>
<p>Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks,) from verified commercial, open source, and other third-party developers to ensure robust security in consumer software products. (<em>SBDD</em> page 8)</p>
</blockquote>
<p>Chapter 4 in my book covers this “best practice” as well. It’s honestly tricky for me to pinpoint where precisely to point you because it’s such a pervasive topic woven throughout the chapter. But my favorite warning box in the book is relevant:</p>
<p><img src="/blog/img/dont-diy-middleware.png" alt="A picture from my book. An orange scorpion flares its pincers. The accompanying text, seemingly said by the scorpion, is, &amp;ldquo;Don&amp;rsquo;t DIY Middleware.&amp;rdquo;"></p>
<p>Angy Scorpion is in the section “‘Boring’ Technology Is Resilient Technology,” but I also recommend the following sub-sections in Chapter 4 (really, just read the whole chapter; it’s the longest one by far in the book, but bursting with practical opportunities):</p>
<ul>
<li>Standardization of Raw Materials</li>
<li>Standardization of Patterns and Tools</li>
<li>Integration Tests, Load Tests, and Test Theater</li>
<li>Modularity: Humanity’s Ancient Tool for Resilience</li>
</ul>
<h3 id="web-template-frameworks--parameterized-queries">Web template frameworks &amp; parameterized queries</h3>
<blockquote>
<p>Use web template frameworks that implement automatic escaping of user input to avoid web attacks such as cross-site scripting. (<em>SBDD</em> page 8)</p>
</blockquote>
<p>Both Chapter 3 and Chapter 4 cover the principle of “Choose Boring” technology, which covers this. We don’t want to DIY something and potentially miss important security characteristics when there are readily-available frameworks and libraries that are well-vetted.</p>
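<p>Here’s a minimal sketch of what that looks like in practice, assuming Jinja2 as the well-vetted framework (the template and payload are made up):</p>
<pre><code class="language-python"># Minimal sketch of the auto-escaping guidance, assuming Jinja2 as the
# well-vetted template framework; the template and payload are made up.
from jinja2 import Environment

env = Environment(autoescape=True)  # escape rendered variables by default
template = env.from_string("Hello, {{ name }}!")

payload = "&lt;script&gt;alert(1)&lt;/script&gt;"
print(template.render(name=payload))
# Hello, &amp;lt;script&amp;gt;alert(1)&amp;lt;/script&amp;gt;!  (inert text, not markup)
</code></pre>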
<blockquote>
<p>Use parameterized queries rather than including user input in queries, to avoid SQL injection attacks. (<em>SBDD</em> page 8)</p>
</blockquote>
<p>This exact recommendation is covered in Chapter 4, subsection “Documenting Why and When,” although we also recommend ORMs:</p>
<blockquote>
<p>The organization should instead standardize on their database access patterns and make choices that make the secure way the default way. (~page 185 of <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>)</p>
</blockquote>
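<p>For the avoidance of doubt, here’s a minimal sketch of that default using Python’s built-in <code>sqlite3</code> driver – the table and query are hypothetical, and an ORM would push the same “secure way is the default way” pattern even further:</p>
<pre><code class="language-python"># Minimal sketch of the parameterized-query guidance using Python's built-in
# sqlite3 driver; the table, data, and query are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'mal@example.com')")

user_supplied = "' OR '1'='1"

# Hazardous: concatenating user input straight into the SQL string.
# conn.execute("SELECT * FROM users WHERE email = '" + user_supplied + "'")

# Secure way as the default way: the driver binds the value, so the input
# can never rewrite the query itself.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the injection string is just an email that matches nothing
</code></pre>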
<p>It’s flattering to imagine these agencies took a peek at my book draft along the way to inform their guidance given the similarities in language. Please do not start a real conspiracy theory about this.</p>
<h3 id="code-review">Code review</h3>
<blockquote>
<p>Strive to ensure that code submitted into products goes through peer review by other developers to ensure higher quality. (<em>SBDD</em> page 9)</p>
</blockquote>
<p>The art of thoughtful code reviews is extensively covered in Chapter 4 in the subsection “Code Reviews and Mental Models.” One thing the report doesn’t mention but the book does is that we really need to implement code reviews for tests, too, and also expend special effort on code reviews for error-handling functionality.</p>
<h3 id="defense-in-depth">Defense in depth.</h3>
<blockquote>
<p>Design infrastructure so that the compromise of a single security control does not result in compromise of the entire system. For example, ensuring that user privileges are narrowly provisioned and access control lists are employed can reduce the impact of a compromised account. Also, software sandboxing techniques can quarantine a vulnerability to limit compromise of an entire application. (<em>SBDD</em> page 9)</p>
</blockquote>
<p>Isolation is so essential for resilience – and life itself – that it is threaded throughout the whole book. But, for this “best practice,” read Chapter 3, subsections “Investing in Loose Coupling in Software Systems” and “Introducing Linearity into our Systems”; they cover isolation as well as design practices like the D.I.E. triad (short for Distributed, Immutable, and Ephemeral). I cover different forms of isolation worth considering, too – fault isolation, performance isolation, and cost isolation. You’ll even learn some things about biology along the way:</p>
<p><img src="/blog/img/isolation-life-security-chaos-book.png" alt="A paragraph from my book. It reads: Isolation is one of nature’s favored features to foster resilience, reflective of her superlative prudence. Life quite literally emerged due to isolation. All life relies on a membrane partially severing a living being from its environment; as biophysicist Harold Morowitz summarily states, “The necessity of thermodynamically isolating a subsystem is an irreducible condition of life.” Nature was the original architect of nested isolation, long before we conceived the transistor."></p>
<p>The goal espoused by <em>SBDD</em> here also reflects eliminating or reducing hazards by design, which we explore in depth in Chapter 7. I highly recommend beholding the “Ice Cream Cone Hierarchy of Security Solutions,” which is especially tasty wisdom.</p>
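<p>Before moving on, one small layer from the guidance above – “user privileges are narrowly provisioned” – sketched in a few lines; this is a POSIX-only illustration and the account name is hypothetical:</p>
<pre><code class="language-python"># POSIX-only sketch of one defense-in-depth layer: keep root only for the one
# step that needs it, then drop to an unprivileged account ("appuser" is a
# hypothetical name), so a later compromise of the worker gets narrow privileges.
import os
import pwd
import socket

def bind_privileged_port(port=443):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", port))  # the only step that needs elevated rights
    sock.listen(128)
    return sock

def drop_privileges(username="appuser"):
    user = pwd.getpwnam(username)
    os.setgid(user.pw_gid)  # group first, while we can still switch
    os.setuid(user.pw_uid)  # after this there is no way back to root

if __name__ == "__main__":
    listener = bind_privileged_port()
    drop_privileges()
    # ...handle connections as the unprivileged user...
</code></pre>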
<h2 id="secure-by-default">Secure-by-Default</h2>
<p>A macro idea in the report’s Secure-by-Default section is treating security like a product, which is the precise focus of Chapter 7 in my book. The <em>SBDD</em> report also touches on the futility of training/policy/guidance relative to building in security qualities by design, which is also covered in Chapter 7. And, surprise surprise, the importance of UX to security solutions is <em>also</em> covered in Chapter 7.</p>
<p>As a little taste of what you’ll read, here’s a cheerful lemur teaching you about user journeys:</p>
<p><img src="/blog/img/lemur-user-journey-security-chaos-book.png" alt="A tip box from my book. A green lemur is perched next to the text. The text reads: A user journey is a “visualization of the major interactions shaping a user’s experience,” whether solving a particular problem or interacting with a particular product or service. It helps us understand why and when interactions unfold, revealing the emotional and cognitive highs and lows a human experiences. It provides a visual way of empathizing with our user’s internal and external context, and helps us define where there are opportunities to make their experience better."></p>
<p>Safe to say that if you care about the secure-by-default angle, Chapter 7 is for you.</p>
<h3 id="consider-the-user-experience-consequences-of-security-settings">Consider the user experience consequences of security settings</h3>
<blockquote>
<p>Each new setting increases the cognitive burden on end users and should be assessed in conjunction with the business benefit it derives. Ideally, a setting should not exist; instead, the most secure setting should be integrated into the product by default. When configuration is necessary, the default option should be broadly secure against common threats. (<em>SBDD</em> page 11)</p>
</blockquote>
<p>Cognitive overhead is a common concern covered in the book, but for this guidance I highly recommend this subsection in Chapter 7: “Understanding How Humans Make Trade-Offs Under Pressure.” We cover how human brains really work, especially under pressure; why we should be curious about users’ workarounds rather than incensed; and how to respect the cognitive load of our users.</p>
<p>Chapter 7 also discusses choice architecture and the power of defaults in the subsection “Substitute Less Hazardous Methods or Materials,” drawing from behavioral science.</p>
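<p>As a tiny illustration of “the most secure setting should be integrated into the product by default” – every field name below is hypothetical – here’s a config object whose zero-effort path is also its safest path:</p>
<pre><code class="language-python"># Illustrative sketch of "the most secure setting should be the default."
# Every field name here is hypothetical; the point is that configuring
# nothing still yields the safe behavior, and deviation is explicit.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    require_tls: bool = True           # opting out must be deliberate
    min_tls_version: str = "1.2"
    session_timeout_minutes: int = 15
    allowed_origins: tuple = ()        # deny cross-origin calls unless listed
    verbose_errors: bool = False       # never leak stack traces by default

default_config = ServerConfig()                 # zero decisions, broadly secure
lab_config = ServerConfig(verbose_errors=True)  # the unsafe choice is visible
</code></pre>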
<h3 id="forward-looking-security-over-backwards-compatibility">Forward-looking security over backwards compatibility</h3>
<blockquote>
<p>Too often, backwards-compatible legacy features are included, and often enabled, in products despite causing risks to product security. Prioritize security over backwards compatibility, empowering security teams to remove insecure features even if it means causing breaking changes. (<em>SBDD</em> page 11)</p>
</blockquote>
<p>The report suggests avoiding backwards compatibility for security’s sake, which belies the sheltered purview of the authors, who need not appease the fickle market gods. Few organizations will forego a seven-figure contract with a customer who requires backwards compatibility. The report does mention offering carrots to customers to incentivize updates, although doesn’t elaborate.</p>
<p>Organizations already charge customers more to maintain backwards compatibility, which is an incentive to upgrade. I suspect many would be surprised how much some customers are willing to pay to keep things as static as possible.</p>
<p>But I do think there are other opportunities here in terms of how to achieve this. Towards the end of Chapter 4, I describe the Strangler Fig Pattern for software transformation; it’s a beloved migration pattern in software engineering land but few infosec people know about it in my experience. It’s perfect for especially conservative organizations worried about breaking things when modernizing and migrating. The section in which it lives, “Flexibility and Willingness to Change,” is worth a read regardless to inoculate ourselves against the pestilent industry rhetoric that change is dangerous.</p>
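<p>If you’ve never met the Strangler Fig Pattern, a toy sketch makes the idea clear: a thin routing facade (URLs and endpoints hypothetical) sends migrated paths to the new service and leaves everything else on the legacy system until its turn comes:</p>
<pre><code class="language-python"># Toy sketch of the Strangler Fig Pattern: a thin routing facade sends
# already-migrated endpoints to the new service and everything else to the
# legacy system. The URLs and endpoint names are hypothetical.
MIGRATED_PREFIXES = ("/invoices", "/payments")  # grows as the migration proceeds

LEGACY_BASE = "https://legacy.internal.example.com"
MODERN_BASE = "https://billing-v2.internal.example.com"

def route(path):
    """Return the upstream base URL that should handle this request path."""
    if path.startswith(MIGRATED_PREFIXES):
        return MODERN_BASE
    return LEGACY_BASE

assert route("/invoices/42") == MODERN_BASE
assert route("/reports/q3") == LEGACY_BASE  # untouched until its turn comes
</code></pre>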
<h1 id="conclusion">Conclusion</h1>
<p>For years, I’ve watched the faces of infosec mortals twist in fear and revulsion at my suggestion that we can and should infuse security by design. “But the Falmer Magic Wheel says blahdiblah!” I don’t care. A regressive approach will never help us outmaneuver attackers. This report, though, grants me a secret weapon: I can appeal to authority now, too, rather than rely on logical arguments – because logic is a poor persuasive tool, as I’ve learned through tribulations and tears and will forever lament. I’m unironically thrilled.</p>
<p>I sincerely think that the guidance within <em>SBDD</em> is a strong start as far as <em>why</em>, but weak (by design, I think) on <em>how</em>. I am obviously quite biased as it aligns quite well with my own thinking – thinking that blossomed into 300 pages in this book (I don’t have the luxury of being a bunch of intelligence agencies in a trench coat where people automatically trust what I say without elaborate justification).</p>
<p>I hope that for those in the community who crave meaningful security outcomes – to achieve victories like secure-by-design and secure-by-default, or even come close to the summit – the book charts your course to navigate the oft-tumultuous waters of software delivery and sustain resilience in your systems.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The third principle is “Embrace radical transparency and accountability,” which we do indirectly cover in our discussion of success metrics in Chapter 7 but not directly enough that it counts.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I haven’t actually received my copy of my own book yet but will update page numbers once I do.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>69 Ways to F*** Up Your Deploy</title>
            <link>https://kellyshortridge.com/blog/posts/69-ways-to-mess-up-your-deploy/</link>
            <pubDate>Tue, 04 Apr 2023 08:00:00 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/69-ways-to-mess-up-your-deploy/</guid>
            <description>Co-authored by Kelly Shortridge and Ryan Petrich
We hear about all the ways to make your deploys so glorious that your pipelines poop rainbows and services saunter off into the sunset together. But what we don’t see as much is folklore of how to make your deploys suffer.1
Where are the nightmarish tales of our brave little deploy quivering in their worn, unpatched boots – trembling in a realm gory and grim where pipelines rumble towards the thorny, howling woods of production? Such tales are swept aside so we can pretend the world is nice (it is not).
To address this poignant market painpoint, this post is a cursed compendium of 69 ways to fuck up your deploy. Some are ways to fuck up your deploy now and some are ways to fuck it up for Future You. Some of the fuckups may already have happened and are waiting to pounce on you, the unsuspecting prey. Some of them desecrate your performance. Some of them leak data. Some of them thunderstrike and flabbergast, shattering our mental models.
All of them make for a very bad time.
We’ve structured this post into 10 themes of fuckups plus the singularly horrible fuckup of manual deploys. For your convenience, these themes are linked in the Table of Turmoil below so you can browse between soul-butchering meetings or existential crises. We are not liable for any excess anxiety provoked by reading these dastardly deeds… but we like to think this post will help many mortals avoid pain and pandemonium in the future.
The Table of Turmoil:
Identity Crisis
Loggers and Monitaurs
Playing with Deployment Mismatches
Configuration Tarnation
Statefulness is Hard
Net-not-working
Rolls and Reboots
Disorganized Organization
Business Illogic
The Audacity of Spacetime
Manual Deploys
Identity Crisis Permissions are perhaps the final boss of Deployment Dark Souls; they are fiddly, easily forgotten, and never forgiven by the universe.
1. Allow all access. “Allow all access” is simple and makes deployment easy. You’ll never get a permission failure! It makes for infinite possibilities! Even Sonic would wonder at our speed!
And indeed, dear reader, what wonder allow * inspires… like a wonder for what services the app actually talks to and what we might need to monitor; a wonder for what data the app actually reads and modifies; a wonder for how many other services could go down if the app misbehaved; and a wonder for exactly how many other teams we might inconvenience during an incident.
Whether for quality’s sake or security’s, we should not trade simplicity today for torment tomorrow.
2. Keys in plaintext. Key management systems (KMS) are complex and can be ornery. Instead of taming these complex beasts – requiring persistence and perhaps the assistance of Athena herself to ride the beast onward to glory – it can be tempting to store keys in plaintext where they are easily understandable by engineers and operators.
If anything goes wrong, they can simply examine the text with their eyeballs. Unfortunately, attackers also have eyeballs and will be grateful that you have saved them a lot of steps in pwning your prod. And if engineers write the keys down somewhere for manual use in an “emergency” or after they’ve left the company… thoughts and prayers.
3. Keys aren’t in plaintext, they’re just accessible to everyone through the management system. You’ve already realized storing keys in plaintext is unwise (F.U. #2) and upgraded to a key management system to coordinate the dance of the keys. Now you can rotate keys with ease and have knowledge of when they were used! Alas, no one set up any roles or permissions and so every engineer and operator has access to all of the keys.
At least you now have logs of who accessed which keys so you can see who possibly leaked or misused a key when it happens, right? But how useful are those logs when they are simply a list of employees that are trusted to make deploys or respond to incidents?
4. No authorization on production builds. The logical conclusion of fully automated deployments is being able to push to production via SCM operations (aka “GitOps”). Someone pushes a branch, automation decides it was a release, and now you have a “fun” incident response surprise party to resolve the accidental deploy.
One option is to enforce sufficient restrictions on who can push to which branches and in what circumstances. Or, you can go on a yearlong silent meditation retreat to cultivate the inner peace necessary to be comfortable with surprise deployments.
The common “mitigation” “plan” is to only hire devs who have a full understanding of how git works, train them properly2 on your precise GitOops workflow, and trust that they’ll never make a mistake… but we all know that’s just your lizard brain’s reckless optimism telling logic to stop harshing its vibes. Make it go sun on a rock somewhere instead.
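If you go the restrictions route, the guard can be embarrassingly small. A minimal sketch, assuming your CI exposes the pushed ref and the pusher as environment variables (CI_REF and CI_ACTOR are illustrative names, not any particular vendor’s):
    # Hypothetical CI gate: only deploy when both the pushed ref and the
    # pusher are on an explicit allowlist; otherwise fail closed.
    import os
    import sys
    ALLOWED_RELEASE_REFS = {"refs/heads/main", "refs/heads/release"}
    ALLOWED_DEPLOYERS = {"release-bot", "oncall-lead"}
    ref = os.environ.get("CI_REF", "")
    actor = os.environ.get("CI_ACTOR", "")
    if ref not in ALLOWED_RELEASE_REFS or actor not in ALLOWED_DEPLOYERS:
        print(f"refusing to deploy: ref={ref!r} actor={actor!r}")
        sys.exit(1)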
5. Keys in the deploy script… which is checked into GitHub. Sometimes build tooling is janky and deployment tooling even jankier. After all, you don’t ship this code to users, so it’s okay if it’s less tidy (or so we tell ourselves). Because working with key management systems can be frustrating, it’s tempting to include the keys in the script itself.
Now anyone who has your source code can deploy it as your normal workflow would. Good luck maintaining an accurate history of who deployed what and when, especially when the “who” is the intern who git clone’d the codebase and started “experimenting” with it.
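The boring alternative is to make the script beg for the key at runtime instead of carrying it around. A minimal sketch, assuming the key is injected by your CI or secrets manager as an environment variable (DEPLOY_KEY is a made-up name):
    # Minimal sketch: pull the deploy key from the environment at runtime
    # instead of committing it to the repo.
    import os
    import sys
    deploy_key = os.environ.get("DEPLOY_KEY")
    if not deploy_key:
        sys.exit("DEPLOY_KEY is not set; refusing to fall back to a hardcoded key")
    # ...hand deploy_key to the deploy tooling here; never print or log it...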
6. New build uses assets or libraries on a dev’s personal account. You’ve decided that developers should be free to choose the best libraries and tools necessary to get the job done, and why shouldn’t they? For many, this will be a homegrown monstrosity that has no tests or documentation and is written in their Own Personal Style™. The dev who chose it is the only one who knows how to use it, but it’s convenient for them.
But is it the most convenient choice for everyone else? What about when the employee leaves and shutters their github account? The supply chain attack3 is coming from inside the house!
7. Temporary access granted for the deployment isn’t revoked. As MilTOR Freedmem quipped years ago, “Nothing is so permanent as a temporary access token.”
The deployment is complicated and automating all of the steps is a lot of work, so the logical path is to deploy the service manually just this once. The next quarter, there’s an incident and to get the system operational again, it’s quickest to let the team lead log in and manually repair it.
But after the access is added, it’s all too easy to overlook removing the access. Employees would never take shortcuts or abuse their access, right? And their accounts or devices could never be compromised by attackers, right?
8. Former employees can still deploy. Leadership claims your onboarding and offboarding checklists are exhaustive and followed perfectly every time. And, indeed, your resilience and security goals rely on them being followed perfectly. A safety job well done! No one will be able to deploy your application after they’ve put in notice!
What’s that? That wasn’t part of your checklist, too? Or did you skip over that item because it’s too hard to rotate the keys if some employees quit because they’re too essential and baked into too many systems?
You’ve replaced those keys but they aren’t destroyed and aren’t revoked and don’t expire, so your only hope now is the org didn’t piss off the employees enough for them to YOLO rage around in prod. Sure, former employees have always expressed goodwill towards your company and no one has ever left disgruntled… but would you bet on that staying true?
9. Your app uses credentials associated with the account of the employee you just fired. Sharing credentials isn’t just something engineers and operators do among themselves. If you’re extra lucky, they’ll bake them into the software or services and then when they leave or transfer to a new department, the system will fail when their permissions are revoked. Maybe sharing isn’t caring.
10. Login tokens are reset and users get frustrated and churn as a result. Some businesses run on engagement. The more users interact with the platform, the more they induce others to interact, which means more advertising messages you can show them with a more precise understanding of what they might buy. Teams track engagement metrics closely and every little design change is justified or rescinded by how it performs on these metrics. It’s a merry-go-round of incentives and dark patterns.
But one day you migrate to a new login token format or seed, forcing everyone to log in again and the metrics are fucked because many users don’t want to go to the trouble. Those fantastic growth numbers you hoped would bolster your company’s next VC round no longer exist because you broke the cycle of engagement addiction.
Loggers and Monitaurs Logging and monitoring are essential, which is why getting them wrong wounds us like a Minotaur’s horn through the heart.
11. Logs on full blast. Systems are hard to analyze without breadcrumbs describing what happened, so logging is an essential quality of an observable system.
Ever-lurking in engineering teams is the natural temptation to log more things. You might need some information in a scenario you haven’t thought of yet, so why not log it? It will be behind the debug level anyway, so it does no harm in production…
…until someone needs to debug a live instance and turns the logging up to 11. Now the system is bogged down by a deluge of logging messages full of references to internal operations, data structures, and other minutia. The poor soul tasked with understanding the system is looking for hay in a needlestack.
Worse, someone could enable debugging in pre-production where traffic isn’t as high4 and not notice before deploying to the live environment. Now all your production machines are printing logs with CVS receipt-levels of waste, potentially flooding your logging system. If you’re extra unlucky, some of your shared logging infrastructure is taken offline and multiple teams must declare an incident.
12. Logs on no blast. Who doesn’t want peace and quiet? But when logs are quiet, the peace is potentially artificial.
Logs could be configured to the wrong endpoint or fail to write for whatever reason; you wouldn’t even be aware of it because the error message is in the logs that you aren’t receiving. Logs could also be turned off; maybe that’s an option for performance testing5.
Either way, you better hope that the system is performing properly and that you planned adequate capacity. Because if the system ever runs hot or hits a bottleneck, it has no way of telling you.
13. Logs being sent nowhere. Your log pipelines were set up years ago by employees long gone. Also long gone is the SIEM to which logs were being sent. Years go by, an incident happens, and during investigation you realize this fatal mistake. Your only recourse is locally-saved logs, which, for capacity reasons, are woefully itsy bitsy and you are the spider stuck in a spout, awash in your own tears.
14. Canary is dead, but you didn’t realize it so you deployed to all the servers anyway and caused downtime. You’ve been doing this DevOps thing awhile and have a mature process that involves canary deployments to ensure even failed updates won’t incur downtime for users. Deployments are routine and refined to a science. Uptime clearly matters to you. Only this time, the canary fails in a way that your process fails to notice.
An alternative scenario is that some part of the process wasn’t followed and a dead canary is overlooked. You miss the corpse that is your new version and kill the entire flock.
Having a process and system in place to prevent failure and then completely ignoring it and failing anyway likely deserves its own achievement award. Do you need a better process, or do you need to fix the tools? How can you avoid this in the future? This will be furiously debated in the post-mortem, which, if blameful rather than blameless, will likely result in this failure repeating within the next year.
15. System fails silently. A system is crying out for help. Its calls are sent into the cold, uncaring void. Its lamentable fate is only discovered months later when a downstream system goes haywire or a customer complains about suspiciously missing data.
“How could it be failing for so long?” you wonder as you stand it back up before adding a “please monitor this” ticket to the team’s backlog that they’ll definitely, totes for sure get to in the next sprint.
16. New version puts sensitive data in logs. Yay, the new version of the service writes more log data to make it easier to operate, monitor, and debug the service should something ever go wrong! But, there’s a catch: some of the new log messages include sensitive data such as passwords or credit card details. This may not even be purposeful. Perhaps it logs the contents of the incoming request when a particular logging mode is enabled.
Unfortunately, there are very specific rules that businesses of your type must follow when handling certain types of data and your logging pipeline doesn’t follow any of them. Now your near-term plans are decimated by the effort to clean up or redact logs that you otherwise wouldn’t have to if the engineer that added that logging knew about the data handling requirements. By the way, the IPO is in a few months. XOXO.
Playing with Deployment Mismatches There were assumptions about what you deployed and those assumptions were wrong.
17. What you deployed wasn’t actually what you tested. Builds are automated and we tested the output of the previous build, so what’s the harm of rebuilding as part of the deployment process? Not so fast.
Unless your build is reproducible, the results you receive may be somewhat different. Dependencies may have been updated. Docker caching may give you a newer (or older, surprisingly!) base image6. Even something as simple as changing the order of linked libraries7 could result in software that differs from what was tested.
Configurations fall prey to this, too. “Well, it works with allow-all!” Right, but it doesn’t work in production because the security policy is different in pre-prod. Or, the new version requires additional permissions or resources which were configured manually in the test environment… but, cranial working memory is terribly finite, and thus they were forgotten in prod.
There are numerous solutions to this problem (like reproducible builds or asset archiving), but you may not bother to employ them until a broken production deploy prompts you to. And some of the solutions descend into a stupid sort of fatalism: “If we don’t have fully reproducible builds, we don’t have anything, there’s no point to any of this.” And then Nietzsche rolls in his grave.
18. Not testing important error paths. We have to move fast. New features. Tickets. Story points. Ship, ship, ship. Developers with the velocity of a projectile. Errors? Bah, log them and move on.
If something is incorrect, surely it will be noticed in test or be reported by users – spoken by someone who has never faced an angry customer because their data was leaked or discovered their lowest rung employee fuming with resentment when they see the company’s fat margins.
Alas, too often we see a new version which forgets to check auth cookies, roles, groups, and so forth because devs test it as admin with the premium enterprise plan, but forget lowly regular users on the free tier can’t do and see everything.
19. Untested upgrade path. Your infrastructure is declarative, but the world is not. The app works in isolation, but doesn’t accept the data from the previous version or behaves weirdly when faced with it.
Possibly the schema has changed, but the migration path for existing data (like user records) was never tested. You didn’t test it because you recreated your environment each time. The new version no longer preserves the same invariants as the old version and you watch in horror as other components in the system topple one by one.
Possibly you’re using a NoSQL database or some other data store for which there isn’t a schema and now the work of data migration falls on the application… but no one designed or tested for that.
Or, maybe you’re pushing a large number of updates to a rarely used part of your networking stack. For those that are all-in on infrastructure as code (IaC), supporting old schema, data, and user sessions can be a thorny problem.
20. “It’s a configuration change, so there’s no need to test.” A shocking number of outages spawn from what is, in theory, a simple little configuration change. “How much could one configuration change break, Michael?”
Many teams overlook just how much damage configuration changes can engender. Configuration is the essential connective tissue between services and just about anything that can be configured can cause breakage when misconfigured.
21. Deployment was untested because the fix is urgent. The clock is ticking and sweat is sopping your brow. Something must be done to avoid an outage or data loss or some other negative consequence. This fix is at least something and this something seems like it should work8. You deploy it now because time is of the essence. It fails and you now have less time or have caused more mess to clean up.
Only in hindsight do you realize a better option was available. Or, maybe the option you chose was the best one, but you made a small mistake. Was the haste worth it?
Urgency changes your decision-making. It’s a well-intentioned evolutionary design of your brain that causes unfortunate side effects when dealing with computer systems. In fact, “urgency” could probably be its own macro class of deploy fails given its prevalence as a factor in them.
22. App wasn’t tested on realistic data. “Everything works in staging! How could it have failed when we pushed it live? I thought we did everything right by testing the schema migration with our test data and load testing the new version.”
Narrator: The software engineer is in their natural habitat. Observe how they pull at their own hair, a hallmark of their species to signal that something has distressed them. It is very difficult to replicate everything that’s happening in production in an artificial test environment without some sort of replication or replay system. This vexes our otherwise clever engineer.
“It causes a crash!? What kind of deranged mortal would have an apostrophe in their name? Oh, it’s common in some cultures? Hmmm…”
If you keep your service online as you deploy, you should really test your upgrade path under simulated load. If you don’t, you can’t be sure if your planned upgrade process will work or how long it will take.
23. Deploying to the wrong environment accidentally. When you make deployments easy, it is possible to make deploying to prod too easy. And easy to use doesn’t necessarily mean easy to understand. When a slip of the finger results in code going live, you may want to consider just how far you’ve taken automation and if other parts of your process need to catch up.
Because one day, a sleep-deprived Future You is going to run a deploy script where you have to pass in an environment name and you will type dve instead of dev. Once it dawns on you that the deploy system falls back to “prod” as the default, adrenaline shocks you awake with the force of 9000 espressos and you will never sleep again.
The regrettable reality is that internal tools often offer terrible UX because engineers refuse to give themselves nice things (including therapy). These tools, akin to a rusty sword with no hilt, make these sorts of failures tragically common. The rise of platform engineering is hopefully an antidote to this phenomenon, treating software engineers as user personas worthy of UX investments, too.
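One cheap countermeasure is to make the deploy script reject anything it doesn’t recognize and make prod an explicit, confirmed choice rather than a fallback. A rough sketch (the environment names are illustrative):
    # Sketch of an environment guard: no default, no fuzzy fallback to prod.
    import sys
    KNOWN_ENVIRONMENTS = {"dev", "staging", "prod"}
    def parse_environment(arg: str) -> str:
        if arg not in KNOWN_ENVIRONMENTS:
            sys.exit(f"unknown environment {arg!r}; expected one of {sorted(KNOWN_ENVIRONMENTS)}")
        if arg == "prod":
            confirm = input("deploying to PROD; type 'prod' again to confirm: ")
            if confirm != "prod":
                sys.exit("prod deploy not confirmed")
        return arg
    environment = parse_environment(sys.argv[1] if len(sys.argv) > 1 else "")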
24. No real pre-production environment. You have a staging environment (congrats!), but it’s an ancient clone from production which has seen so many failed builds, bizarre testing runs, and manual configs that it bears only a pale resemblance to the system it’s supposed to epitomize. It gives you confidence that your software could deploy successfully, but not much else.
You wish you could tear it down and rebuild it anew, but everyone’s busy and it’s never quite important enough for someone to start working on it rather than some other task. Thus you’re doomed to clean up small messes that could be caught by a true staging environment.
At the next DevOps conference you attend, every keynote speaker refers to the “fact” that “everyone” has a “high-fidelity” staging environment (“obviously”) as you weep in silence.
25. Production OS has a different version than pre-prod OS and the app fails to start. Production systems are incredibly important and we must patch frequently to keep them in compliance. But the same diligence isn’t applied to pre-prod, development, build and other environments.
The systems in these environments may therefore be wildly out of date and the software they produce may be incompatible with the up-to-date, patched production system. Systems will drift so far from the standard that QA systems look like an alternate reality from production and make you a believer in the multiverse hypothesis.
26. Backup botch-ups. A production deploy requires a backup because hot damn have we fucked it up so many times and a backup makes everyone feel more confident. The administrator responsible for performing the backup writes the backup over the live system, causing an outage. Furthermore, because the data was overwritten by the botched backup, any existing backups are not recent.
Backup fuckups happen more than anyone admits and when they go down, they go down hard. Recovering from them is rough because no one thinks it will happen to them.
Lesser failures in this category include saturating the disk or network IO of the host taking the backup or filling the disk – each perfectly capable of causing an outage, too.
27. Audit logs are turned off. Audit logs are accidentally turned off during a configuration change or as part of a software upgrade and now the system is out of compliance. No one notices until the auditors ask for the audit logs months later and a wave of panic ripples through the teams involved.
Will we fail the audit? Will customers drop us? How much revenue is impacted? Will we still get raises at our quarterly review? Will I have to switch to getting artisanal roasted bean elixirs every other day?
28. Iceberg dependencies. Only the simplest services run entirely isolated without any other dependencies. When done right, dependencies are properly documented and the infrastructure dependencies of each component are clear. Even better, the dependencies are specified declaratively, rendering it impossible for the human-generated documentation to drift from the machine specification.
But in less auspicious cases, the dependencies are hazy and can even form chains which loop back on themselves like a branching ouroboros eating its own rotting tails. Debugging a production incident for a system with unknown dependencies is software archeology where the only treasure is tears.
The infrastructure upgrade toppled some of the apps and services running on top of it, but the people deploying the upgrade lack context on those casualties and you all wonder when the Jigsaw puppet will come into view and reveal this has all been a grand experiment to pit you against each other.
“We upgraded the OS, clearly everything will be fine!” My brother in christ your system fetches Kerberos creds automatically on boot, but your first boot on a fresh host fails because the Kerberos fetch infra depends on a QA host that was decommed 6 months ago!
And then there’s the ultimate iceberg dependency: DNS. If DNS is borked or misconfigured, all sorts of thorny problems can emerge.
29. Enabling a new feature in vendor software without load testing it. Vendors make all sorts of claims about the behavior of their wares. It’s fast and stable. It migrates its data format. It slices and dices. It follows semver. Should you believe them? In a word, no.
Configuration Tarnation Playing god with your environments does not always result in intelligent design.
30. Per-environment configuration isn’t updated ahead of the deploy. Per-environment configuration is a fact of life. Hostnames, instance counts, and other configuration settings will be necessarily different between environments. Keeping these up to date can be a challenge and it’s all too easy to overlook updating the production template when new configs must be added.
New configuration values are often copied from a staging template into the production one without appropriate adjustments like switching the hostname. You will wonder which evil eldritch god you pissed off when deploying to prod takes down both the production and staging environments. This is so frequent and yet! and yet.
31. Deployed new configuration, but forgot to restart the associated services. Deploying a configuration change is easy: apply the configuration, restart the service. You might think it should be easy to remember the steps when there’s only two of them, but it’s easy to overlook for quick deployments. Only later do you realize you set a new config variable in prod without applying it to the prod instances.
Design patterns like the D.I.E. triad can help — there’s no way for infrastructure to drift if it’s redeployed from scratch on each deployment. And, of course, automated deployments can help, too.
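Short of full D.I.E., even a dumb wrapper helps: if applying config and restarting the service are one function, there is no second step to forget. A hypothetical sketch (the script and service names are invented):
    # Hypothetical deploy step: applying config and restarting are one action,
    # so the restart cannot be skipped by accident.
    import subprocess
    def deploy_config(environment: str, service: str) -> None:
        subprocess.run(["./apply-config.sh", environment], check=True)
        subprocess.run(["systemctl", "restart", service], check=True)
    # deploy_config("prod", "example-service")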
32. Feature flag fuckups. Feature flags are a simple and amazing way to explode the number of system states you must test. N flags make for 2^N combinations. Are all of them tested? Are you sure they’re all set correctly? Do the people who test your application have the same flags as the unwashed masses? Are there old feature flags in your app that should be retired? What could happen if they were activated mistakenly? (just ask Knight Capital).
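To make the arithmetic concrete: ten boolean flags is already 1,024 distinct states. A toy sketch of enumerating them for a smoke test (the flag names are invented), mostly as a reminder of how fast this grows:
    # Illustrative only: enumerate every combination of a few boolean flags.
    # Three flags is 8 states; ten flags is already 1,024.
    from itertools import product
    FLAGS = ["new_checkout", "dark_mode", "beta_search"]
    for values in product([False, True], repeat=len(FLAGS)):
        config = dict(zip(FLAGS, values))
        print(config)  # or feed it to a hypothetical run_smoke_test(config)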
Maybe you push a release before the company holiday party and deploy the entire release successfully… until a few intoxicants in you realize you forgot to flip the feature flag and now you’re crying in the bar’s bathroom shakily singing along to Mariah Carey (though you suspect the “baby” she desperately wants for xmas isn’t a feature flag).
It’s also possible that you do the exact opposite and flip the flag too soon. Maybe a new product is accidentally announced early, deflating all the carefully constructed marketing plans leading up to the company conference and leading customers to ask why the new feature is “broken.” You had just regained the respect of the customer support team too…
Perhaps the new feature simply uses too many resources and you haven’t scaled your infrastructure appropriately. Or maybe the freemium gate is broken and everyone gets access to premium features. Good luck explaining to customers why you now have to take away their new shiny feature unless they pay up for it…
33. Delayed failures. Faulty configuration may not necessarily cause failures immediately. It’s only after you do some other, seemingly unrelated operation does the fault cause any symptoms. Like medicine, it can be difficult to untangle exactly what faults are the cause of what symptoms.
For systems like load balancers or orchestrators, a bad configuration can remain in place and as long as the system is stable, the misconfiguration will cause no ill effects. But one day when you decommission a cluster as planned, another cluster immediately shits itself – suffering a total outage baffling everyone – and only after many painful hours of debugging do you realize its healthchecking was configured against the one you decommed.
If the team owning that other cluster has poor monitoring hygiene, they may only discover their service is dead much later. But the outage gods care not for your mortal troubles and will do nothing to ease the pain of what is now a multi-day incident all due to faulty health checks.
34. What lies beneath. The layers far underneath your application can still cause your deployment to fail.
Orchestrator fails? Your service is dead. Operating system fails? Dead. Disk controller fails? Dead. BGP? Dead. DNS? Dead. Backhoe cuts the backbone to your sole datacentre? Dead. NVMe subtly violates DMA protocol? Dead. NIC driver fails or goes rogue? Dead. Baseboard management controller borks? Dead. Deploy a bunch of new machines into a cluster with a bad BIOS? This may shock you, but: dead.
35. Scheduled failures. Deployments may appear to succeed only to fail hours or days later if you have periodic background jobs or the ability to schedule tasks. The deployment isn’t successful until these jobs and tasks run successfully.
Perhaps you deploy a busted systemd timer which causes all your nodes to self-destruct after 8 hours… and only discover this “fun” fact after you deploy to your first tranche in prod. See also: the dreaded slow memory leak.
Another variant is the odd date/time bug which causes the application to malfunction only on leap years or when daylight savings time occurs. If you’re not swift with your incident response, the incident resolves itself and you’re left scratching your heads until someone realizes it’s because the clocks rolled back.
Do you bother fixing the bug? Or do you hope to find another job before the next orbital period elapses?
36. Accidentally push components beyond their limits. Components may have poorly documented or undocumented limitations or may simply become unusably slow when assigned more work than they were designed for. Does your database have a limit on the number of connections? Better not scale the number of clients beyond that number, then!
Is it a deployment failure? Yes, if a deployment pushes the system beyond its limits, which is more likely to happen when you add new v2 replicas before retiring old v1 replicas.
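The back-of-envelope math is worth doing before the rollout rather than during the incident. A rough sketch with made-up numbers, assuming each replica holds a fixed connection pool and v2 surges up before v1 retires:
    # Back-of-envelope check with illustrative numbers: during a rolling
    # deploy, old and new replicas hold database connections at the same time.
    old_replicas = 12
    new_replicas = 12              # v2 surges up before v1 is retired
    pool_size_per_replica = 20
    db_max_connections = 400
    peak = (old_replicas + new_replicas) * pool_size_per_replica
    if peak > db_max_connections:
        print(f"rollout would peak at {peak} connections, over the {db_max_connections} limit")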
In the microservices world, this can manifest as running so many k8s jobs without deleting them that all the k8s operations on jobs begin taking tens of seconds because the cluster is bogged down with so much cluster metadata. Is the inevitable conclusion of microservices simply more microservice instances and metadata than actual work and user data? Makes u think.
37. Builds always use the latest version of a library. Some well-meaning person may decide that builds for a piece of software always use the latest version of its dependencies. This ensures that whenever you release, you always have the latest security patches.
This sounds wise until one of the dependencies causes a subtle API breakage and your app fails to function. Or, any of your dependencies’ authors could decide “fuck this, I’m not maintaining this open source project anymore and giving corporations free labor” and push a dead version of a package.
Now you’re unable to build new versions of the app until someone resolves the dependency situation. Worse, if that fed-up developer has pulled their old versions out of spite or frustration with the pain of maintaining OSS, and if you haven’t archived builds of old versions, then you may not even be able to deploy at all. And this is how you end up cursing a random dev you hadn’t even heard of until just now when you should be taking your lunch break.
Statefulness is Hard Mere mortals cannot maintain accurate mental models of data in distributed systems. Even the divines struggle.
38. An irreversible process fails part way through. Some irreversible process fails part way through your deploy. Possibly it was a migration or some other critical step during your deployment. For whatever reason, this step didn’t happen when deploying to the other environments; it only happened in the one environment that matters most.
What state is the system actually in? Should you rollback? If you try to roll back, will it even work? You’re in uncharted waters under shrouded stars.
Data migrations are often a one-way process. Have you tried migrating all of your existing data to see what happens? How long does it take? Do you have backups? Could you even use the backups, or would restoring result in yet more downtime?
If you don’t know the answers to these questions, you might find yourself deploying an ORM/data model layer which automatically migrates read-only database values to a new format and somehow corrupts the records, resulting in you frantically trying to patch and deploy a fix before too much of your DB becomes unreadable.
Or perhaps you set --timeout 10 on your ORM migration with the innocent assumption that “10” here refers to seconds. It’s 10 milliseconds. There are no down migrations. And migrations can be arbitrary JS and therefore not guaranteed to be atomic or idempotent and now you’ve started a slow-motion train crash that you cannot stop. One hour of scheduled downtime becomes 18 hours. Your youth and zeal are irreversibly drained.
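A small defensive habit that catches this class of surprise: carry durations around with explicit units instead of bare numbers, and only convert at the last moment. A minimal sketch, not tied to any particular ORM:
    # Sketch: make the unit part of the value instead of a guess.
    from datetime import timedelta
    MIGRATION_TIMEOUT = timedelta(seconds=10)
    # convert explicitly for whatever tool actually runs the migration:
    timeout_ms = int(MIGRATION_TIMEOUT.total_seconds() * 1000)
    print(timeout_ms)  # 10000, and nobody has to guess the unit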
39. Distributed data vore. Distributed storage / database systems require careful understanding of their operational characteristics if you are to operate them safely. They can be used to achieve better uptime, reliability, and possibly even lower latency if operated within their safety margins… but they also require more care and feeding than traditional databases with an authoritative primary and can be quite temperamental.
If operated incorrectly, distributed storage can silently lose data or disagree on the data they contain if nodes aren’t retired correctly or if an insufficient number of nodes remain healthy. Do you know enough about your data storage layers to operate them safely? Or when you next roll-reboot your Elasticsearch cluster will it silently eat 30% of your data for seemingly no reason at all? The customers now complaining that all their graphs are 30% too low are certainly not silent.
When you deployed new database nodes to prod, did you assume the cluster would rebalance on them? Oopsies, it didn’t! And thus when you decommissioned the old nodes, you destroyed 99% of your data in the process. There are not enough oofs in this universe to reflect this oofiness.
40. Cache is an unhealthy monarchy. If caches aren’t healthy, rolling restart instructions aren’t followed or are insufficient and the system fails to start.
Where to begin? Let’s start with why caches exist in the first place: to avoid repeated execution of expensive computations by storing a mapping between inputs and their results in memory (aka “caching them”). Caches will typically discard infrequently-used results automatically to make space for frequently-used results, and can be asked to drop any results that are no longer valid. How can this go wrong? Oh so many ways!
First off, just like the database schema, the format of data in the cache might not be compatible with the new version of the app. Similarly, when there is more than one application instance, the old version of the app will run alongside the new version and could see cache entries from its successors. This can cause problems where either the old version or the new version of the app could malfunction from improper data. Deploying even a single canary can cause all of the instances of the existing version to fail.
Have your engineers thought about cross-version compatibility? Do they reject linear notions of spacetime and thus believe compatibility is a blasphemous act against the holographic principle? “Spacetime is just an abstraction,” they tell you coolly while sipping their matcha latte.9 You are tempted to remind them that money is also an abstraction and therefore they should abstain from it, too, but it’s faster if you just fix it yourself.
Second, the keys might change. If version A of an app uses one nomenclature for keys but its successor (version B) uses another, version B will operate as if the cache is empty. The app now must perform much more work to populate the cache in the new format. If both versions of the app are running simultaneously, they will fight for space in the cache – and the cache is limited in how much data it can hold by necessity. Now the cache has a lower hit ratio and more requests must go through the more costly “uncached” path.
Third, a common ReCoMmEnDaTiOn is to flush caches when deploying new versions of software (“it’s a caching issue, clear your browser cache” said the frontend dev to the product manager as the PM rolled their eyes). This can be dangerous when using a shared cache since so much extra work must now be performed with every request.
With healthy cache hit ratios commonly being in the 90% range for some workloads, that means the part of the application beyond the cache must handle ten times the throughput until the cache is rebuilt. Could you handle a sudden 10x increase in your workload?
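One common (not universal) way out of the key-fighting problem is to namespace cache keys by a cache-schema version, so old and new instances never read each other’s entries; the price is a cold cache per version, which loops straight back into the 10x problem above. A minimal sketch:
    # Sketch: prefix cache keys with a cache-schema version so old and new
    # instances never deserialize each other's entries.
    CACHE_SCHEMA_VERSION = "v2"
    def cache_key(name: str, *parts: str) -> str:
        return ":".join([CACHE_SCHEMA_VERSION, name, *parts])
    print(cache_key("user-profile", "12345"))  # "v2:user-profile:12345"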
Net-not-working We make piles of thinking sand talk to each other through light and wonder why weird shit happens.
41. Accidental self-DoS. The accidental self-DoS could be due to many reasons. Maybe new versions of the application inhibit the CDN’s ability to cache, but this non-functional requirement wasn’t recorded anywhere. Maybe a new analytics feature inundates the application backend with data collected to appease the whims of product management. Maybe a new retry mechanism is being used for failed requests, causing traffic amplification if the backend becomes even a little sluggish.
The end result is the same: the new version of the app swamps the backend service and causes downtime. Engineers tirelessly work to restore service by standing up more instances or filtering the unnecessary traffic the application created for itself.
You ask your devs what happened and they said, “Well, it didn’t work with CDN so we added cache-busting headers to make it work.” You nod quietly while gazing into the abyss.
42. Poorly configured caching. The previous version of the app configured common static assets with a long cache duration. This caches the asset for long periods of time in CDNs and in users’ browsers. Fabulous! The app loads more quickly for users, especially those that visit frequently.
You build a new version of the app with new cached assets. The new version looks great in staging and dev, where testers are unlikely to have stale cached assets. But when you deploy it to production, you receive reports from your most fervent supporters that the app “looks weird.” It’s a Frankenstein’s monster mismatch of static assets from the old and new versions and behaves unpredictably.
Before enough understanding of what has happened filters through to the development team, all of the stale caches expire and the dev marks the JIRA ticket closed. The issue repeats again when you release the next minor redesign.
Due to the nature of CDNs and prod websites, there’s a category of people for whom this is a persistent problem and they should be able to fix it… and yet can’t. The entirely avoidable fuckup is a formidable beast.
43. A hurricane of reconnections foments a flash flood. You disconnect clients simultaneously during your deployment, leading to them all trying to reconnect simultaneously shortly thereafter. Your system was never designed to handle a flash flood of connections, so it stays down until it’s scaled manually well beyond what it was originally budgeted for.
Someone throws a ticket to add exponential backoff with randomization to the bottom of the client team’s backlog. Years pass and it happens again as their backlog only grows.
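For the morbidly curious, the ticket at the bottom of that backlog usually amounts to a dozen lines. A minimal sketch of exponential backoff with full jitter, where connect() is a stand-in for whatever your client actually does:
    # Minimal reconnect loop with exponential backoff and full jitter, so a
    # fleet of disconnected clients does not stampede back in lockstep.
    import random
    import time
    def reconnect_with_backoff(connect, base=0.5, cap=60.0, max_attempts=10):
        for attempt in range(max_attempts):
            try:
                return connect()
            except OSError:
                time.sleep(random.uniform(0, min(cap, base * (2 ** attempt))))
        raise RuntimeError("gave up reconnecting")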
44. DoS yourself via CDN purge. Purging a CDN with a cache hit ratio of 90% results in an immediate 10x throughput increase to the origin. Did you deploy the required additional capacity?
It’s such an easy button to press, too. Some CDNs don’t put a glass case around the button nor require administrator permission to press it. Pressing it immediately grants you the rank of “rogue developer” and now you’ve given your security team a reason to require ten more hours of annual security awareness training. Your access to the secret cool kids Slack channel is purged, too.
45. Accidental network isolation. Adjust some network config you read about on Stack Overflow and suddenly the site is down and no one has access to the systems that can bring it back up and ahhhhh. You frantically call your AWS or colo account rep to see what they can do as your mobile device buzzes incessantly.
The essence of this fuckup is that the outage locks you out of the systems which need to be accessed to resolve the outage. This can be something as simple as firewall rules or as complex as unicast BGP configurations across complicated multi-vendor networks that lock everyone out of your data centers.
46. The orchestrator goes down with the ship. A core service on which your orchestrator depends is down. You would normally use the orchestrator to deploy the service, but since the service is down, the orchestrator no longer functions. Now someone must dig out the dusty documentation on the old manual way to do this as the clock is ticking. Does the manual way even still work? Who even has access?
Elsewhere, you put Consul into the deploy path six months into its lifespan and it packet-storms itself into oblivion, taking down not only service discovery but also your ability to deploy anything or even log into nodes.
Rolls and Reboots “No plan of operations reaches with any certainty beyond the first encounter with production” – Helmchart von Faultke10
47. No rollback plan. It’s truly shocking how often orgs don’t have a rollback plan. But just like your mom told you about jumping off bridges, just because everyone is doing it doesn’t mean it isn’t dangerous.
There’s more than one way to handle this properly, like CD with canaries, blue / green deploys, full rollback of everything… but to not have a strategy for this at all and YOLO it? If only we gatekept less against liberal arts majors to fill this chasm of critical thinking.
A special mention goes to the untested rollback plan, too. “We have a complicated deployment that went smoothly in staging, pre-prod, and every other environment, so why would we ever need to rollback?” you say. “It can’t possibly fail in production,” you say.
You’d be correct 9 out of every 10 times… but how many times do you deploy a year again? So, you painstakingly craft a rollback plan for your deployments, but never test it since it’s unlikely to be used. And how little confidence you have in your rollback plans leads to this next fuckup.
48. Forward “fixing.” Something didn’t go as planned, so you decide to roll forward with some new plan you came up with on the spot instead of rolling back – and then something fails in the roll forward.
This is a fuckup sprouting from the “developers are optimistic by nature” problem. A deployment fails on what you believe is some minor technicality. And then you fail to resist the temptation of making a “quick fix” to patch it while on the call and build a new version of the software so your team can ship…
…But it might not be a quick fix and you’re proposing deploying something completely untested straight to production. Somehow the SRE team is okay with this, or maybe they’re hesitant but let it slide since there are already too many hills on which they must die.
Either way, you’re risking your uptime and stress for deploying a little earlier than you otherwise would. A worthy heuristic for this might be: because developers appear to be optimistic by nature, even the “tiniest” of hotfixes are incomplete and require more testing.
49. Scheduled tasks build up while the system is down for maintenance and DoS the system upon startup. Your system has a job queue with workers that are carefully tuned not to consume too much money and still complete their work. While the maintenance page is up, the workers are shut off. Deploying the app takes longer than expected and scheduled tasks pile up. The original pool of workers is no longer sufficient to process the backlog of scheduled tasks and people waiting on their results find your team to be insufficient.
50. Circular dependencies in infrastructure. Circular infra dependencies result in a particularly nefarious failure pattern. If anything in the chain ever goes down completely, it’s impossible to stand the system back up without yolo-rushing a new version of a component to break the chain. For instance, perhaps you store the latest deployed revision on your own host, which means you can’t access it when something goes wrong.
You may design your system nicely, but time inexorably marches forward without regard for your intentions. This failure is an emergent property of all the changes people make over time. It’s an iceberg failure that only emerges when another failure has already emerged and is plaguing you. That is to say, circular infra dependencies result in a particularly nefarious failure pattern…
Disorganized Organization No amount of fancy automation can truly save you from disorganized organizational processes.
51. No one wants to write docs. Raise your hand if you’ve ever worked at a company with great internal documentation. Try to recall when you’ve ever read truly complete and up to date deployment documentation. For many of you (most of you, even), nothing comes to mind, right?
The closest might be a well-commented deployment script and some associated high level description. Perhaps it’s a design doc that you trust to be sort of right but cannot assuage your suspicion that the implemented system has drifted away from it. If you trust your documentation to be 100% accurate when deploying software, you’re going to have a bad time because it’s inevitable that there will be errors in it.
And because no one wants to write docs, numerous fuckups occur. You followed outdated or misleading docs on how to make the release, which fucked up the deploy. You forgot to update customer-facing docs and they configured something incorrectly and now all your other customers are suffering from the outage.
You forgot to send release notes which, wait, how is that a fuckup? Oh right, the account manager for your largest customer added in terms about releasing their requested feature by a certain date (without telling anyone in product or engineering of this, naturally) and now you’re re-negotiating their multi-year contract and giving them a serious discount to stay which is going to be difficult for your CEO to explain on the next earnings call.
52. People only get rewarded for diving saves. People are congratulated for resolving the downtime or for catching a failure as it’s happening, but no one is rewarded for anticipating failures ahead of time.
The CEO wants things to be shipped now so everything is a rush to get half-baked features out the door quickly. But that causes quality problems elsewhere. At least half the deploys have an emergency “oh shit something is borked” follow-up deploy. And either you roll forward or the app limps along and languishes in a janky existence for the next five days until someone builds the fix and ships it.
Whoever ships the fix is lauded for restoring sanity, but it never should have been broken in the first place. And everyone knows if they had chosen to roll back, the CEO would’ve been angry because his little gamification feature wouldn’t have been there for five days. You suffer, your team suffers, your customers suffer, but bossman is happy and the bleary-eyed engineer who spent days on the recovery gets a pat on the back. Well done, naive salaryman.
Then a conference is coming up; your CEO and CMO demand a splashy announcement for it. That means your Q3 deploys are now beginning-of-Q2 deploys… which is in two weeks. You ship a ton of stuff that is half-baked and barely strung together, but the press release goes out (along with the press releases of all your competitors in an unnavigable sea of babblespeak that the market largely ignores).
The team is congratulated while the architect cries in the bathroom grieving their multiple quarters of work of carefully planned releases as support tickets now pile up with customer complaints about how features are broken. By end of year, half the features are still being “stabilized” and the other half are mothballed.
53. No process for rarely-performed tasks. A task is rarely performed, so there’s no documentation on it. Regrettably, someone must perform the task now and today the universe has decided for that person to be you. You go to look for documentation and find nothing. You look at the code for the systems involved and it’s unintelligible. You git log the associated files and discover that everyone involved with the system has already moved on. You wonder if you should move on, too.
When disparate teams try to coordinate on rarely-performed tasks a special sort of confusion emerges.
54. Have to build a replica for noobs who can’t write queries. It’s deemed necessary for internal data analysts to be able to run queries against production data so they can serve customers and forecast future business (or other such violations of linear time). They’re granted read-only credentials to the production database because that should be sufficient. Later, you are paged because the service is down and the database is wedged.
You discover that one of the data analyst’s queries is taking up way too much memory and has locked a critical table. You kill the query, sever access, and prepare for hell in the morning. In the end, you deploy a replica so the internal teams can query production data without killing the production database. Leaders considered it too expensive to set up originally, but how expensive was the outage and all the effort which went into restoring service?
55. Layer 8 denial of service. Once upon a time, you and your team decided to rewrite an app because your company’s business model changed and thus very little of it was still useful. You also didn’t like Ruby, so you decided to rewrite it in Scala because Scala was hot and everyone on the team wanted to learn Scala. Great, let’s trust our important business function to people learning a new language!
The first version of the app was supposed to be deployed alongside the Ruby version and coexist with it. That deployment failed and also caused the Ruby app to fail. Repairing that took 8 hours of downtime. Naturally, the sysadmin didn’t particularly appreciate having to stay for an extra 8 hours on a Friday because your team wanted to deploy outside of business hours.
A month later, you try again. It deployed successfully! …But the migration for the user accounts fucked up. You could use the new app, but no one had accounts for it other than the root account. A week later, you try again with a script to deploy all the user accounts – and that was successful.
Later, your team discovers the v1 of the app is very slow when actual work is done in it. So, you switch to using Cloudsearch to “optimize” part of the app. And it does! …Except Cloudsearch is eventually consistent and now users complain that when they add something to the app and click refresh, it doesn’t show up until 30 seconds later.
Your team rushes a hotfix to undo the Cloudsearch integration and restore the previous functionality. The sysadmin says no. You gave them less than a day’s notice to deploy this new version, even though your team knew about it for a week while you worked on undoing the integration. You will be lucky if you ship anything else the rest of the year now.
tl;dr the sysadmin is fed up and doesn’t trust anything your team deploys now.
56. Engineers take key bumps of YOLO in prod. Your company prides itself on being a meritocracy with a flat hierarchy, which is why senior leaders (like your boss) can disregard deploy processes – like making a production fix for a bug on the production node and recompiling, re-introducing the bug on the subsequent deploy because they never fixed the issue in tree.
This travesty is an argument in favor of making manual deployments impossible or difficult (see #69), but there’s no guarantee that any proposed safeguards would avoid veto by the Director of YOLO Engineering who is responsible for the fuckup in the first place. Because it’s never their fault, is it?
There’s also a coding variant to this fuckup: someone yolo-typing new code into a live virtual machine. They hot patch at the Erlang console because they relish living in sin. It might be called performance art if it wasn’t fated to desecrate service performance.
That anyone would be allowed to do this assuredly reflects organizational dysfunction. It is so bonkers to be able to just like, write code on a production box and expect that it works. It is a pathological level of optimism. It is suspiciously reminiscent of the Pyro in TF2 who runs around burning everyone to a crisp with a flamethrower while, from their deranged vantage, they are showering the world in glittering rainbows and bubbles and whimsy.
“Well, I’d never do that!” you say, thinking this doesn’t apply to you. And then you’d proceed to attach VisualVM to the JMX port and yolo some gc tuning. Or you’d run some exploratory bash or SQL on the prod instance to get some data without having tested it fully in a test environment. Maybe you aren’t debugging in prod, but using tracing or performance analysis tools in prod to debug problems or tune settings without having tried first in QA at the very least makes you a co-conspirator and likely a Staff YOLO Engineer (maybe even Senior Staff if you continue to do it after reading this! Don’t let your dreams be memes).
57. Cloud credits are about to run out so you rush deploys to reduce your AWS bill. You have to scale down really quickly because your cloud credits ran out and you can no longer afford your infra… which means you were spending money you didn’t have for a long time because Papa Bezos was your sugar daddy for a bit. As you scale down in a panic, you fail to load test the new database and regret not just selling out at one of the tech giants. Now your organization has successfully reduced costs… but also revenue.
58. Behavior in your dependencies fucks up your deploy. It’s trivial to mentally model your service in isolation; the rest of the world is immutable and your deployment is the only change in motion. In reality, other teams are hurling themselves at their OKRs, your sales team is onboarding new accounts, and your data integrations are pipelines haphazardly built from popsicle sticks and glue. Like nature, the system is in constant flux, and no matter how confident you are in your deploy, an unexpected shift elsewhere in the system can cause yours to fail.
Maybe another team has worse deployment hygiene than you do and they yolo’d a version straight to prod without giving you a chance to integrate with it. Maybe they’re hotfixing an incident themselves and your service is collateral damage. Maybe a data partner changes their data format without announcing it (see #51) and every system in the path falls flat on its face.
It’s not your fault, but it is your problem. Scream into a pillow and sing lamentations to your pet or whatever you need to do to process your grief and move on to acceptance. Because if you want to prevail, you must be nimble and maintain the capacity to recover from unexpected failure.
Business Illogic
The deployment may pass your tests but it can still break your business logic.
59. Breaking API change for a partner. Your team finally tackles tech debt and deploys the new, shiny, streamlined version of the API. A few hours later, a partner is screaming at your CTO because they were using the API in a way you never fathomed was even possible and their integration no longer works due to your change.
Another time, you’re celebrating the successful update of the auth method in your SaaS app. It passed all tests, got approval from the security team, and nothing broke after deployment… but, as you’ll soon realize upon wading into a shit show the next morning, you forgot to tell customers about the auth method update. Everyone built access using a certain type of token and switching the service to use a new method completely broke customer access. Guess who will be blamed for lower renewal numbers this quarter?
The “funny” thing about breaking API changes is that devs will argue endlessly about what is or isn’t breaking. Semver this, semver that. The call still has the same signature and they only fixed a “bug” in the behavior of the other parameters… but what if the other software was relying on that behavior? Now it’s different, and different is bad when customers rely on things staying the same.
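A contrived Python sketch of how a “non-breaking” change breaks someone anyway (every name and number below is invented for illustration):

# Same name, same signature, "just a bug fix in behavior", yet a partner who
# relied on the old behavior quietly breaks. Everything here is invented.
INVOICES = [
    {"id": 1, "voided": False, "total": 100},
    {"id": 2, "voided": True, "total": 40},
]

def list_invoices_v1(customer_id):
    # v1 "accidentally" included voided invoices in the response.
    return list(INVOICES)

def list_invoices_v2(customer_id):
    # v2 "fixes the bug". Same name, same parameters, different answer.
    return [i for i in INVOICES if not i["voided"]]

# A partner reconciling refunds against voided invoices now silently receives
# fewer rows and their books stop balancing.
print(len(list_invoices_v1("acct_42")), len(list_invoices_v2("acct_42")))  # 2 1

The signature never changed, so no compiler, type checker, or semver argument on the internet will catch it for them.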
60. Compliance calamity. Compliance stuff is boring but it matters. Some subtle design, layout, wording, or data retention change in a highly regulated part of the system causes it to no longer be in compliance with one of the onerous compliance regimes it must be a part of for the business to remain viable.
For instance, your payment flow changed and now you’re no longer in compliance with PCI. This remains undiscovered until much later, as most failures of this type are. If you’re unlucky it’s the auditor who discovers it and you’re now buried in paperwork. Or you erode trust by violating user expectations about how you handle their data.
61. robots.txt that inhibits search engine indexing and traffic plummets as a result. You change something in a way that results in search engines or other traffic sources deranking or delisting you. Maybe it’s as subtle as borking the preview cards; sure, the links still work, but it’s no longer as clickbaity to the ever-shortening attention spans of the plebeian spectators. Congratulations, you just killed your traffic source and meal ticket!
Everyone frantically tries to figure out what is going wrong as bank accounts drain. It might not even be something you changed — sometimes giants simply roll over in their sleep and crush smaller players. But it could also be that you messed up the robots.txt and are now poor.
The Audacity of Spacetime
Deploying the system at scale is different than deploying the little test sandbox version of it.
62. Deployment assumes all servers are updated at the same time, but they’re not. This fuckup is so, so common. It breaks the simplified, but wrong, mental model that a user talks to your server and only ever to that one server, which is always running the latest version. It’s a useful model because it simplifies a bunch of things and is mostly true; when it’s not true, it’s often fine to overlook the effects. But, occasionally, the effects are catastrophic and nothing behaves properly until reality settles.
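To make the broken model concrete, here is a contrived Python sketch (every name and field in it is hypothetical): mid-rollout, v2 nodes emit a field that v1 nodes were never taught to tolerate, so any request that crosses a mixed-version pair falls over.

# Hypothetical: v2 writers and v1 readers coexist during a rolling deploy.
import json

def v2_serialize(order):
    # v2 adds a "currency" field the old nodes have never seen.
    return json.dumps({"id": order["id"], "total": order["total"], "currency": "USD"})

def v1_handle(payload):
    msg = json.loads(payload)
    # v1 assumed the schema was frozen and rejects anything unexpected.
    unknown = set(msg) - {"id", "total"}
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return msg

# Works when every node runs the same version; explodes for the hours (or days)
# that old and new versions overlap.
try:
    v1_handle(v2_serialize({"id": 1, "total": 9.99}))
except ValueError as err:
    print("mixed-version request failed:", err)

Tolerating one version of skew in readers, or rolling readers out before writers, is the boring, reliable fix.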
63. A new deployment begins while a previous one is still in progress. Canaries and staged multi-region deploys can, by design, take a while – so your upgrade is only partially tested and deployed, resulting in an outage.
Most of the fuckups on this list are due to immature processes. But this one emerges as your processes begin to mature. Observing how your failures transform over time can elucidate your progress, a kind of mindfulness that is admittedly difficult to cultivate when feeling the crushing weight of disappointment.
64. Multi-stage deploys of unrelated components. You’ve had so many deployment failures in the past and every deployment has been painful. Some well-meaning person has decided that deployments need to be surveilled with hawkish intensity. Deployment frequency plummets accordingly and every deployment is a potpourri of changes that various stakeholders demand go live.
Good ol’ batch deploys take forever. People get burned out or fatigued and then naturally make mistakes. Or it’s not their component and they don’t have skin in the game11 and consequently are careless when handling it.
When failure does transpire, everyone’s frustration inflames. It’s either their component that failed and they’re frustrated at the lack of care by their peers, or it’s not their component and they’re frustrated that they have to be on this stupid Zoom call until 04:00.
The answer is probably splitting the deploys out; the only reason not to do separate deploys is likely organizational process or dysfunction (see also: Disorganized Organization).
65. Accidentally deploy more than you thought you did. You’ve put a ton of work into automating your deployments. The automated tooling is effective and deploys exactly what you asked of it – but what you asked of it didn’t match your expectations.
Perhaps you thought you were deploying a branch containing only a hotfix, but it was started from the wrong base branch. Or maybe you thought you were asking it to target only a few canary nodes, but accidentally rolled the whole fleet. Perhaps the automation tries its best to make all of the servers consistent by ensuring changes must be deployed in the same sequence. Whatever it was, automation ruthlessly executed your command and now you’re scrambling to recover.
In many organizations, it’s difficult to justify improving the safety and user experience of internal tools since it doesn’t directly affect customers and “just” leaves the system confusing for the engineers working with it. The silver lining is this outage will at least make the case that developer experience is important.
66. Zombie hosts. Your new version operates under the assumption that the fleet is only running the new version and all instances speak the same protocol. But in reality, some hosts came back from the dead (i.e. maintenance) running an old version of the software after the deployment completed.
Now you have a zombie apocalypse on your hands with nothing to defend yourself but your laptop. You now regret choosing the ultraportable version rather than the hefty tank boi. And just like zombies, zombie hosts can sneak up on you when you least expect it, long after your deployment is complete, when the post-apocalyptic landscape that is your prod environment seems almost serene.
67. Running out of cloud resources. One fine morning, you discover you’ve run out of the specific instance type your service needs. Like, there are literally no more i3.16xlarge instances that exist for you to purchase in this universe (or possibly just the availability zone).
It turns out you are their largest customer, which, of course, the vendor never made clear for strategic reasons. Scaling beyond the capabilities of a vendor inevitably results in downtime. Either you convince the vendor to git gud or you patch to make the app creak along as you frantically build a migration path to a substitute, disrupting the roadmap in the process.
Or, on a Zoom meeting with a bloated attendee list, a dev notes that the app is slower: “I refactored the code to make it easier to read, but now it’s slower, so we need 3x the servers to run it.” You swallow bile. Lucille Bluth asks in your head, “How much could one server cost, Michael?”
If you have rollbacks, you should be fine. If you have autoscaling, you can just pay to address this problem. But nothing can help you automatically scale your tolerance to bullshit or rollback your life choices.
68. Proactively overloading your systems. Scaling one part of the system puts pressure on other parts… and now they’re failing. You now must deal with an outage somewhere you weren’t expecting, all because you were proactive in anticipating capacity you’d need in the future. Worse, if that capacity is required right this millisecond, you face the dilemma of choosing which part of the system to sacrifice temporarily while you figure out how to fix the bottleneck.
Manual Deploys
69. Manual deploys. Manual deploys are truly terrible. If there is a villain in the story of DevOps, it is manual deploys. They are not the serpent in the garden promising forbidden knowledge. Manual deploys are the Diablo boss that probably smells like rotten onions and toe fungus IRL and whose only purpose is to destroy any and all life.
Not convinced yet? Here are reasons A through Z to stop living in Clown Town. Each should be enough to convince you to automate at least the tedious parts of your deploys. Please, we beg you on behalf of humanity and reason, automate all the repetitive tasks you can, even if your org has an aversion to it. Humans are not meant for executing the same thing the same way every time.
An engineer walks into a bar, has two beers, and now is deploying to the entire cluster as they order a third. The bartender says, “You know, if you used an orchestrator, you could order something stronger.” That bartender’s name? Q. Burr-Netty.
Backups of the database probably don’t work. Every time you take a snapshot, it’s someone reading the docs off a DigitalOcean post on how to back up MySQL.
Copy pasta is always served with failsauce. Copying a config from an existing build to a new one, then forgetting to change the version number. Copying SSH authorized keys between machines… and if you’re managing them like that, it’s probably append-only which means your old ops people still have access to your prod servers.
Disk management as a matryoshka doll of disasters: capacity management, failing to provision enough space12, IOPS management, SAN management and all the babysitting required for distributed disks, we probably don’t need to go on.
Expiration of certificates or domains, the tech tragicomedy. You know this will happen again in a year. You see the rhino charging towards you in the distance but there’s always something more urgent to do until it’s too late.
Forget to smoke test the whole environment. You perform manual tests but they only hit the “good” servers. Luck favors the automated.
GeoDNS routing with manual region switching so you can take down a data center and update it without any traffic… but actually DNS takes a while to propagate, so you still have a trickle of traffic coming in (does anyone care that much about those lost requests?).
Handling hardware failures is nigh on impossible. Are your systems even failing over?
Improper sequence when deploying components. Just like your dance moves, the order of your deploy steps is all wrong.
Jumpbox that people use as a dumping ground for random assets they need in prod, like random JAR files or Debian packages, movies they torrent at the office that they want to get on their home machine, random database dumps that people need for various purposes…
Keen to have the deploy done, you do not wait for changes to propagate, the cache to warm, or the system to become healthy. “No, sir, the engineer really worth having won’t wait for anybody.” ~ F. Scoff Gitzgerald13
Lonesome server runs the wrong version because you forgot to update all the servers. Or, you forgot one region when you’re doing multi-region updates.
Mismatched component versions. It’s very easy to do when you’re slinging deploys manually and how many database servers do we have again? Is Tantalum down or decommissioned? This IP naming scheme makes no sense. Is it even a database server?
Not copying the new code to all of the servers, or not removing the old code from them, leading to conflicts worse than the tantrums on your executive team.
Overlook which environment you’re in. If it happens, it’s probably a process failure. It’s an easy thing to overlook, which is exactly why there should be more safeguards in place to stop someone from accidentally farting about in prod. Ideally, you shouldn’t even be able to make this mistake.
Provision users manually. Not only is it a pain in the ass, it is also fraught with peril.
Quarrels between IP addresses and hostnames that rival a Real Housewives reunion special.
Rotate the password or keys, but forget to update the service config with the new password. You rotate the password, so of course you have to update the config, but there may be numerous configs and it can be easy to miss one if it’s not documented or automated.
Smoke tests aren’t performed after manual production deploy. If you’re doing deploys the wrong way (i.e. the manual way), smoke tests are a way to mitigate some of the issues – but you must remember to actually conduct them.
Trusting that your on-call team will be paged despite never testing the paging plan.
Updating the monitoring system is overlooked. If you autoscale, the system managing the autoscaling typically registers and monitors new hosts for you. If you add a host manually to a system that doesn’t autoscale, you have to remember to register it with whatever agent is supposed to be monitoring it.
VPN that is a single-point-of-failure and held together with duct tape and twine. The VPN is required to access the network to do the deploys but apparently making it not suck is not required.
Wait for DNS propagation? Who has time for that?
X11 and RDP-based deploys where a tired sysadmin remotely logs into the virtual desktop of a system that shouldn’t even have a graphical environment and haphazardly drags files around until the new release is live. The commands can’t even be audited because there were no commands, only mouse movements.
Your sysadmin does maintenance on the database so that it can stay up, but in the morning you discover the settings they’ve changed cause the database to no longer run its background maintenance processes and you’ve just deferred your downtime until later.
ZIP or JAR file is copied from the developer’s laptop and now you have no record of what was deployed.
Thank you to the following co-conspirators for their contributions to this list: C. Scott Andreas, Matthew Baltrusitis, Zac Duncan, Dr. Nicole Forsgren, Bea Hughes, Kyle Kingsbury, Toby Kohlenberg, Ben Linsay, Caitie McCaffrey, Mikhail Panchenko, Alex Rasmussen, Leif Walsh, Jordan West, and Vladimir Wolstencroft.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
This brings to mind Vonnegut’s advice of “Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.” ↩︎
As “Duskin” rightly noted in an investigation of a fire at an ammonia plant back in 1979: “If you depend only on well-trained operators, you may fail.” ↩︎
These sorts of seldom-used libraries are much less likely to be poisoned than the mainstream libraries which occasionally have CVEs, but infosec folks ambulance chase off them until our sanity is flattened and bloodied like roadkill. ↩︎
And why should traffic in pre-prod be as high as prod? Replaying all traffic to pre-production all the time is expensive af! So it’s a reasonable assumption, in isolation. ↩︎
but oh honey why are you performance testing an option that’s faster than what you’ll actually deploy?? ↩︎
The options are even more misleading than you might expect. --no-cache only inhibits the cache for layers created by the Dockerfile and does not skip the image cache. You need --pull to skip the image cache. ↩︎
Usually linkers order the objects they’re instructed to link by the order they’re presented. If you specify the order, you’ll always get the same order. If you have Make or whatever build system you’re using send the linker all the .o files in the directory, it will send them in the order the filesystem lists them, which can change depending on some internal filesystem properties (usually what order their metadata was last written). Usually it doesn’t matter, but maybe the code has some undefined behavior based on the layout of the code itself. Maybe there are static initializers that get run in a different order and some data structure is corrupted before the program even starts doing anything useful. ↩︎
Action bias is a bitch. See also a recent paper I co-authored: Opportunity Cost of Action Bias in Cybersecurity Incident Response ↩︎
I did not have to come at myself this hard. (That’s what they said). ↩︎
The original quote by Helmuth von Moltke is “No plan of operations reaches with any certainty beyond the first encounter with the enemy’s main force.” from Kriegsgeschichtliche Einzelschriften (1880). It is commonly quoted as “No plan survives first contact with the enemy.” ↩︎
“Skin in the game” is such a strange idiom. It makes me think of skeletonless fleshlings flailing around on a football pitch trying to flop wobbling meatflaps at the ball. Neurotypical lingo never ceases to amaze. ↩︎
You might think that, with the growth of data collection, machine learning, and other flagrant privacy violations (sorry, “business intelligence practices”), data storage is the primary dimension of capacity planning. This is often not the case. In the last decade or so, capacity has grown phenomenally but throughput and latency have not kept pace. As a result, IOPS and throughput are more commonly the bottleneck that needs planning while storage capacity is overprovisioned. On the cloud, allocated throughput and IOPS are assigned based on volume size, so it’s common to see vast overprovisioning of volume size to realize sufficient IOPS. It also occurs on storage SANs, where the number and capacity of disks are selected to match the required sustained read and write rates. All of this is phenomenally complicated but as a first approximation, IOPS and throughput matter more than storage capacity for many use cases. ↩︎
Paraphrased from Chapter 2 of This Side of Paradise by F. Scott Fitzgerald: https://www.bartleby.com/115/22.html ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><em>Co-authored by Kelly Shortridge and Ryan Petrich</em></p>
<p>We hear about all the ways to make your deploys so glorious that your pipelines poop rainbows and services saunter off into the sunset together. But what we don’t see as much is folklore of how to make your deploys suffer.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>Where are the nightmarish tales of our brave little deploy quivering in their worn, unpatched boots &ndash; trembling in a realm gory and grim where pipelines rumble towards the thorny, howling woods of production? Such tales are swept aside so we can pretend the world is nice (it is not).</p>
<p>To address this poignant market painpoint, this post is a cursed compendium of 69 ways to fuck up your deploy. Some are ways to fuck up your deploy now and some are ways to fuck it up for Future You. Some of the fuckups may already have happened and are waiting to pounce on you, the unsuspecting prey. Some of them desecrate your performance. Some of them leak data. Some of them thunderstrike and flabbergast, shattering our mental models.</p>
<p>All of them make for a very bad time.</p>
<p>We&rsquo;ve structured this post into 10 themes of fuckups plus the singularly horrible fuckup of manual deploys. For your convenience, these themes are linked in the Table of Turmoil below so you can browse between soul-butchering meetings or existential crises. We are not liable for any excess anxiety provoked by reading these dastardly deeds&hellip; but we like to think this post will help many mortals avoid pain and pandemonium in the future.</p>
<p>The Table of Turmoil:</p>
<ul>
<li><a href="#identity-crisis">Identity Crisis</a></li>
<li><a href="#loggers-and-monitaurs">Loggers and Monitaurs</a></li>
<li><a href="#playing-with-deployment-mismatches">Playing with Deployment Mismatches</a></li>
<li><a href="#configuration-tarnation">Configuration Tarnation</a></li>
<li><a href="#statefulness-is-hard">Statefulness is Hard</a></li>
<li><a href="#net-not-working">Net-not-working</a></li>
<li><a href="#rolls-and-reboots">Rolls and Reboots</a></li>
<li><a href="#disorganized-organization">Disorganized Organization</a></li>
<li><a href="#business-illogic">Business Illogic</a></li>
<li><a href="#the-audacity-of-spacetime">The Audacity of Spacetime</a></li>
<li><a href="#manual-deploys">Manual Deploys</a></li>
</ul>
<p><img src="/blog/img/deployment-explosion.png" alt="An image of a rocket exploding in space, filled with multitudes of galaxies. The rocket represents a deployment. The rocket is being piloted by Captain Kube with the Docker whale looking out the porthole. The Go gopher is running on one of the rocket wings. On the other wing, the BSD daemon is about to smash Clippy with a mallet. The Java mascot, known as Duke, is surfing on the end of the rocket. A dead canary lies within the rocket exhaust. The GitHub octocat is flying behind the exploding rocket with a jetpack. It is meant to look shitposty."></p>
<h2 id="identity-crisis">Identity Crisis</h2>
<p><em>Permissions are perhaps the final boss of Deployment Dark Souls; they are fiddly, easily forgotten, and never forgiven by the universe.</em></p>
<h4 id="1-allow-all-access">1. Allow all access.</h4>
<p>&ldquo;Allow all access&rdquo; is simple and makes deployment easy. You&rsquo;ll never get a permission failure! It makes for infinite possibilities! Even Sonic would wonder at our speed!</p>
<p><img src="/blog/img/derp-sonic.gif" alt="A gif of Sonic drawn by someone who really cannot draw well. The effect is humorous. Sonic is running on a road with the caption Gotta go fast."></p>
<p>And indeed, dear reader, what wonder <code>allow *</code> inspires&hellip; like a wonder for what services the app actually talks to and what we might need to monitor; a wonder for what data the app actually reads and modifies; a wonder for how many other services could go down if the app misbehaved; and a wonder for exactly how many other teams we might inconvenience during an incident.</p>
<p>Whether for quality&rsquo;s sake or security&rsquo;s, we should not trade simplicity today for torment tomorrow.</p>
<h4 id="2-keys-in-plaintext">2. Keys in plaintext.</h4>
<p>Key management systems (KMS) are complex and can be ornery. Instead of taming these complex beasts &ndash; requiring persistence and perhaps the assistance of <a href="http://madelinemiller.com/myth-of-the-week-pegasus-and-bellerophon/">Athena herself</a> to ride the beast onward to glory &ndash; it can be tempting to store keys in plaintext where they are easily understandable by engineers and operators.</p>
<p>If anything goes wrong, they can simply examine the text with their eyeballs. Unfortunately, attackers also have eyeballs and will be grateful that you have saved them a lot of steps in pwning your prod. And if engineers write the keys down somewhere for manual use in an &ldquo;emergency&rdquo; or after they&rsquo;ve left the company&hellip; thoughts and prayers.</p>
<h4 id="3-keys-arent-in-plaintext-theyre-just-accessible-to-everyone-through-the-management-system">3. Keys aren&rsquo;t in plaintext, they’re just accessible to everyone through the management system.</h4>
<p>You&rsquo;ve already realized storing keys in plaintext is unwise (<a href="#2-keys-in-plaintext">F.U. #2</a>) and upgraded to a key management system to coordinate the dance of the keys. Now you can rotate keys with ease and have knowledge of when they were used! Alas, no one set up any roles or permissions and so every engineer and operator has access to all of the keys.</p>
<p>At least you now have logs of who accessed which keys so you can see who possibly leaked or misused a key when it happens, right? But how useful are those logs when they are simply a list of employees that are trusted to make deploys or respond to incidents?</p>
<p><img src="/blog/img/kms-logs-anakin-padme-meme.png" alt="A variation of the Anakin Padme 4 Panel Meme. In the first panel, Anakin tells Padme that he just upgraded to a key management system. In the second panel, Padme, with a gleeful expression on her face, responds: omg grats those logs are so useful. In the third panel, Anakin stares back at her with a blank expression. In the fourth panel, Padme looks shocked and concerned; she says: you are collecting the access logs, right?"></p>
<h4 id="4-no-authorization-on-production-builds">4. No authorization on production builds.</h4>
<p>The logical conclusion of fully automated deployments is being able <a href="https://web.archive.org/web/20210813020247/https://brew.sh/2018/08/05/security-incident-disclosure/">to push to production via SCM operations</a> (aka &ldquo;GitOps&rdquo;). Someone pushes a branch, automation decides it was a release, and now you have a &ldquo;fun&rdquo; incident response surprise party to resolve the accidental deploy.</p>
<p>One option is to enforce sufficient restrictions on who can push to which branches and in what circumstances. Or, you can go on a yearlong silent meditation retreat to cultivate the inner peace necessary to be comfortable with surprise deployments.</p>
<p>The common &ldquo;mitigation&rdquo; &ldquo;plan&rdquo; is to only hire devs who have a full understanding of how git works, train them properly<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> on your precise GitOops workflow, and trust that they&rsquo;ll never make a mistake&hellip; but we all know that&rsquo;s just your lizard brain&rsquo;s reckless optimism telling logic to stop harshing its vibes. Make it go sun on a rock somewhere instead.</p>
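<p>For the overly trusting, here&rsquo;s a minimal sketch of a last-line-of-defense guard, assuming a CI step that can see the pushed ref (the environment variable names and the tag convention are invented for illustration, not any particular CI system&rsquo;s API):</p>
<pre><code># Hypothetical pre-deploy guard: only refs shaped like release tags may deploy.
# Not a substitute for real branch protections; just a cheap backstop.
import os
import re
import sys

ALLOWED_RELEASE = re.compile(r"^refs/tags/v\d+\.\d+\.\d+$")

def main() -> int:
    ref = os.environ.get("CI_REF", "")         # assumed to be set by your CI
    actor = os.environ.get("CI_ACTOR", "unknown")
    if not ALLOWED_RELEASE.match(ref):
        print(f"refusing to deploy: {ref!r} pushed by {actor} is not a release tag")
        return 1
    print(f"ok: {ref} looks like a release")
    return 0

if __name__ == "__main__":
    sys.exit(main())
</code></pre>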
<h4 id="5-keys-in-the-deploy-script-which-is-checked-into-github">5. Keys in the deploy script… which is checked into GitHub.</h4>
<p>Sometimes build tooling is janky and deployment tooling even jankier. After all, you don&rsquo;t ship this code to users, so it&rsquo;s okay if it&rsquo;s less tidy (or so we tell ourselves). Because working with key management systems can be frustrating, it&rsquo;s tempting to include the keys in the script itself.</p>
<p>Now anyone who has your source code can deploy it as your normal workflow would. Good luck maintaining an accurate history of who deployed what and when, especially when the &ldquo;who&rdquo; is the intern who <code>git clone</code>&rsquo;d the codebase and started &ldquo;experimenting&rdquo; with it.</p>
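<p>A minimal sketch of the cheapest fix, assuming you can at least move the secret out of the script and into the environment (a proper secrets manager is better still; the variable name below is made up):</p>
<pre><code># Sketch: read the deploy credential at run time instead of committing it next
# to the rsync commands. DEPLOY_API_KEY is a hypothetical name.
import os
import sys

def deploy_key() -> str:
    key = os.environ.get("DEPLOY_API_KEY")
    if not key:
        sys.exit("DEPLOY_API_KEY is not set; refusing to deploy with a baked-in default")
    return key

if __name__ == "__main__":
    print("loaded a deploy key of length", len(deploy_key()))
</code></pre>
<p>Your git history stays free of credentials, and the intern&rsquo;s <code>git clone</code> no longer doubles as a production deploy kit.</p>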
<h4 id="6-new-build-uses-assets-or-libraries-on-a-devs-personal-account">6. New build uses assets or libraries on a dev’s personal account.</h4>
<p>You&rsquo;ve decided that developers should be free to choose the best libraries and tools necessary to get the job done, and why shouldn&rsquo;t they? For many, this will be a homegrown monstrosity that has no tests or documentation and is written in their Own Personal Style™. The dev who chose it is the only one who knows how to use it, but it&rsquo;s convenient for them.</p>
<p>But is it the most convenient choice for everyone else? What about when the employee leaves and shutters their github account? The supply chain attack<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> is coming from inside the house!</p>
<h4 id="7-temporary-access-granted-for-the-deployment-isnt-revoked">7. Temporary access granted for the deployment isn’t revoked.</h4>
<p>As MilTOR Freedmem quipped years ago, &ldquo;Nothing is so permanent as a temporary access token.&rdquo;</p>
<p>The deployment is complicated and automating all of the steps is a lot of work, so the logical path is to deploy the service manually <em>just this once</em>. The next quarter, there&rsquo;s an incident and to get the system operational again, it&rsquo;s quickest to let the team lead log in and manually repair it.</p>
<p>But after the access is added, it&rsquo;s all too easy to overlook removing the access. Employees would never take shortcuts or abuse their access, right? And their accounts or devices could never be compromised by attackers, right?</p>
<h4 id="8-former-employees-can-still-deploy">8. Former employees can still deploy.</h4>
<p>Leadership claims your onboarding and offboarding checklists are exhaustive and followed perfectly every time. And, indeed, your resilience and security goals rely on them being followed perfectly. A safety job well done! No one will be able to deploy your application after they&rsquo;ve put in notice!</p>
<p>What&rsquo;s that? That wasn&rsquo;t part of your checklist, too? Or did you skip over that item because it&rsquo;s too hard to rotate the keys if some employees quit because they&rsquo;re too essential and baked into too many systems?</p>
<p>You&rsquo;ve replaced those keys but they aren&rsquo;t destroyed and aren&rsquo;t revoked and don&rsquo;t expire, so your only hope now is the org didn&rsquo;t piss off the employees enough for them to YOLO rage around in prod. Sure, former employees have always expressed goodwill towards your company and no one has ever left disgruntled&hellip; but would you bet on that staying true?</p>
<h4 id="9-your-app-uses-credentials-associated-with-the-account-of-the-employee-you-just-fired">9. Your app uses credentials associated with the account of the employee you just fired.</h4>
<p>Sharing credentials isn&rsquo;t just something engineers and operators share between themselves. If you&rsquo;re extra lucky, they&rsquo;ll bake them into the software or services and then when they leave or transfer to a new department, the system will fail when their permissions are revoked. Maybe sharing <em>isn&rsquo;t</em> caring.</p>
<h4 id="10-login-tokens-are-reset-and-users-get-frustrated-and-churn-as-a-result">10. Login tokens are reset and users get frustrated and churn as a result.</h4>
<p>Some businesses run on engagement. The more users interact with the platform, the more they induce others to interact, which means more advertising messages you can show them with a more precise understanding of what they might buy. Teams track engagement metrics closely and every little design change is justified or rescinded by how it performs on these metrics. It&rsquo;s a merry-go-round of incentives and dark patterns.</p>
<p>But one day you migrate to a new login token format or seed, forcing everyone to log in again and the metrics are fucked because many users don&rsquo;t want to go to the trouble. Those fantastic growth numbers you hoped would bolster your company&rsquo;s next VC round no longer exist because you broke the cycle of engagement addiction.</p>
<h2 id="loggers-and-monitaurs">Loggers and Monitaurs</h2>
<p><em>Logging and monitoring are essential, which is why getting them wrong wounds us like a Minotaur&rsquo;s horn through the heart.</em></p>
<h4 id="11-logs-on-full-blast">11. Logs on full blast.</h4>
<p>Systems are hard to analyze without breadcrumbs describing what happened, so logging is an essential quality of an observable system.</p>
<p>Ever-lurking in engineering teams is the natural temptation to log more things. You might need some information in a scenario you haven&rsquo;t thought of yet, so why not log it? It will be behind <a href="https://sematext.com/blog/logging-levels/">the debug level</a> anyway, so it does no harm in production&hellip;</p>
<p>&hellip;until someone needs to debug a live instance and turns the logging up to 11. Now the system <a href="https://www.eveonline.com/news/view/behind-the-scenes-of-a-long-eve-online-downtime">is bogged down</a> by a deluge of logging messages full of references to internal operations, data structures, and other minutia. The poor soul tasked with understanding the system is looking for hay in a needlestack.</p>
<p>Worse, someone could enable debugging in pre-production where traffic isn&rsquo;t as high<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> and not notice before deploying to the live environment. Now all your production machines are printing logs with CVS receipt-levels of waste, potentially flooding your logging system. If you&rsquo;re extra unlucky, some of your shared logging infrastructure is taken offline and multiple teams must declare an incident.</p>
<h4 id="12-logs-on-no-blast">12. Logs on no blast.</h4>
<p>Who doesn&rsquo;t want peace and quiet? But when logs are quiet, the peace is potentially artificial.</p>
<p>Logs could be configured to the wrong endpoint or fail to write for whatever reason; you wouldn&rsquo;t even be aware of it because the error message is in the logs that you aren&rsquo;t receiving. Logs could also be turned off; maybe that&rsquo;s an option for performance testing<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<p>Either way, you better hope that the system is performing properly and that you planned adequate capacity. Because if the system ever runs hot or hits a bottleneck, it has no way of telling you.</p>
<h4 id="13-logs-being-sent-nowhere">13. Logs being sent nowhere.</h4>
<p>Your log pipelines were set up years ago by employees long gone. Also long gone is the SIEM to which logs were being sent. Years go by, an incident happens, and <a href="https://www.oig.doc.gov/OIGPublications/OIG-21-034-A.pdf">during investigation</a> you realize this fatal mistake. Your only recourse is locally-saved logs, which, for capacity reasons, are woefully itsy bitsy and you are the spider stuck in a spout, awash in your own tears.</p>
<p><img src="/blog/img/lasso-logs-meme.png" alt="A meme from a Wonder Woman movie. In the first panel, she uses her lasso to constrain a man; she says: the lasso of Hestia compels you to tell the truth. In the second panel, the man grins; he says: our logs aren&amp;rsquo;t being sent anywhere. In reaction, in the third panel, Wonder Woman looks thoroughly disturbed. "></p>
<h4 id="14-canary-is-dead-but-you-didnt-realize-it-so-you-deployed-to-all-the-servers-anyway-and-caused-downtime">14. Canary is dead, but you didn’t realize it so you deployed to all the servers anyway and caused downtime.</h4>
<p>You&rsquo;ve been doing this DevOps thing awhile and have a mature process that involves canary deployments to ensure even failed updates won&rsquo;t incur downtime for users. Deployments are routine and refined to a science. Uptime clearly matters to you. Only this time, the canary fails in a way that your process fails to notice.</p>
<p>An alternative scenario is that some part of the process wasn&rsquo;t followed and a dead canary is overlooked. You miss the corpse that is your new version and kill the entire flock.</p>
<p>Having a process and system in place to prevent failure and then completely ignoring it and failing anyway likely deserves its own achievement award. Do you need a better process, or do you need to fix the tools? How can you avoid this in the future? This will be furiously debated in the post-mortem, which, if blameful rather than blameless, will likely result in this failure repeating within the next year.</p>
<h4 id="15-system-fails-silently">15. System fails silently.</h4>
<p>A system is crying out for help. Its calls are sent into the cold, uncaring void. Its lamentable fate is only discovered months later when a downstream system goes haywire or a customer complains about suspiciously missing data.</p>
<p>&ldquo;How could it be failing for so long?&rdquo; you wonder as you stand it back up before adding a &ldquo;please monitor this&rdquo; ticket to the team&rsquo;s backlog that they&rsquo;ll definitely, totes for sure get to in the next sprint.</p>
<h4 id="16-new-version-puts-sensitive-data-in-logs">16. New version puts sensitive data in logs.</h4>
<p>Yay, the new version of the service writes more log data to make it easier to operate, monitor, and debug the service should something ever go wrong! But, there&rsquo;s a catch: some of the new log messages include sensitive data such as passwords or credit card details. This may not even be purposeful. Perhaps it logs the contents of the incoming request when a particular logging mode is enabled.</p>
<p>Unfortunately, there are very specific rules that businesses of your type must follow when handling certain types of data and your logging pipeline doesn’t follow any of them. Now your near-term plans are decimated by the effort to clean up or redact logs you otherwise wouldn’t have had to touch, if only the engineer who added that logging had known about the data handling requirements. By the way, the IPO is in a few months. XOXO.</p>
<h2 id="playing-with-deployment-mismatches">Playing with Deployment Mismatches</h2>
<p><em>There were assumptions about what you deployed and those assumptions were wrong.</em></p>
<h4 id="17-what-you-deployed-wasnt-actually-what-you-tested">17. What you deployed wasn’t actually what you tested.</h4>
<p>Builds are automated and we tested the output of the previous build, so what&rsquo;s the harm of rebuilding as part of the deployment process? Not so fast.</p>
<p>Unless your build is reproducible, the results you receive may be somewhat different. Dependencies may have been updated. Docker caching may give you a newer (or older, surprisingly!) base image<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>. Even something as simple as changing the order of linked libraries<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup> could result in software that differs from what was tested.</p>
<p>Configurations fall prey to this, too. &ldquo;Well, it works with allow-all!&rdquo; Right, but it doesn&rsquo;t work in production because the security policy is different in pre-prod. Or, the new version requires additional permissions or resources which were configured manually in the test environment&hellip; but, cranial working memory is terribly finite, and thus they were forgotten in prod.</p>
<p>There are numerous solutions to this problem (like reproducible builds or asset archiving), but you may not bother to employ them until a broken production deploy prompts you to. And some of the solutions descend into a stupid sort of fatalism: &ldquo;If we don&rsquo;t have fully reproducible builds, we don&rsquo;t have anything, there&rsquo;s no point to any of this.&rdquo; And then Nietzsche rolls in his grave.</p>
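<p>One modest way to approximate &ldquo;deploy exactly what you tested&rdquo; without going full reproducible-builds monk mode: record a digest of the artifact you tested and refuse to ship anything that doesn&rsquo;t match. A hedged sketch (file names invented):</p>
<pre><code># Sketch: pin the tested artifact by digest so a sneaky rebuild can't swap it out.
import hashlib
import sys

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At test time: archive the artifact and write sha256_of("app.tar.gz") somewhere durable.
# At deploy time: recompute and compare before anything ships.
def verify(artifact: str, expected_digest: str) -> None:
    actual = sha256_of(artifact)
    if actual != expected_digest:
        sys.exit(f"{artifact} is not what was tested: {actual} != {expected_digest}")
</code></pre>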
<h4 id="18-not-testing-important-error-paths">18. Not testing important error paths.</h4>
<p>We have to move fast. New features. Tickets. Story points. Ship, ship, ship. Developers with the velocity of a projectile. Errors? Bah, log them and move on.</p>
<p>If <a href="https://blog.heroku.com/filesystem-corruption-on-heroku-dynos">something is incorrect</a>, surely it will be noticed in test or be reported by users &ndash; spoken by someone who has never faced an angry customer because their data was leaked or discovered their lowest rung employee fuming with resentment when they see the company&rsquo;s fat margins.</p>
<p>Alas, too often we see a new version which forgets to check auth cookies, roles, groups, and so forth because devs test it as admin with the premium enterprise plan, but forget that lowly regular users on the free tier can&rsquo;t do and see everything.</p>
<h4 id="19-untested-upgrade-path">19. Untested upgrade path.</h4>
<p>Your infrastructure is declarative, but the world is not. The app works in isolation, but doesn&rsquo;t accept the data from the previous version or behaves weirdly when faced with it.</p>
<p>Possibly the schema has changed, but the migration path for existing data (like user records) was never tested. You didn&rsquo;t test it because you recreated your environment each time. The new version no longer preserves the same invariants as the old version and you watch in horror as other components in the system topple one by one.</p>
<p>Possibly you&rsquo;re using a NoSQL database or some other data store for which there isn&rsquo;t a schema and now the work of data migration falls on the application&hellip; but no one designed or tested for that.</p>
<p>Or, maybe you&rsquo;re pushing a large number of updates to <a href="https://status.cloud.google.com/incident/compute/17003#5660850647990272">a rarely used part of your networking stack</a>. For those that are all-in on infrastructure as code (IaC), supporting old schema, data, and user sessions can be a thorny problem.</p>
<h4 id="20-its-a-configuration-change-so-theres-no-need-to-test">20. &ldquo;It’s a configuration change, so there&rsquo;s no need to test.”</h4>
<p><a href="https://github.com/danluu/post-mortems#config-errors">A shocking number of outages</a> spawn from what is, in theory, a simple little configuration change. &ldquo;How much could one configuration change break, <a href="https://www.youtube.com/watch?v=Nl_Qyk9DSUw">Michael</a>?&rdquo;</p>
<p>Many teams overlook just how much damage configuration changes can engender. Configuration is the essential connective tissue between services and just about anything that can be configured can cause breakage when misconfigured.</p>
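<p>The cheapest countermeasure is to treat configuration like code and give it a test gate, even a dumb one. A hedged sketch, with required keys and bounds invented for illustration:</p>
<pre><code># Sketch: a pre-deploy config lint. The keys and limits are made up; the point
# is that config changes get checked by a machine before they reach prod.
import json
import sys

REQUIRED = {"listen_port", "upstream_url", "timeout_seconds"}

def lint(path):
    errors = []
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED - set(cfg)
    if missing:
        errors.append(f"missing keys: {sorted(missing)}")
    timeout = cfg.get("timeout_seconds", 0)
    if timeout not in range(1, 121):   # assumes integer seconds
        errors.append(f"timeout_seconds out of range: {timeout!r}")
    return errors

if __name__ == "__main__":
    problems = lint(sys.argv[1])
    if problems:
        sys.exit("config failed lint:\n" + "\n".join(problems))
    print("config looks plausible")
</code></pre>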
<h4 id="21-deployment-was-untested-because-the-fix-is-urgent">21. Deployment was untested because the fix is urgent.</h4>
<p>The clock is ticking and sweat is sopping your brow. Something must be done to avoid an outage or data loss or some other negative consequence. This fix is at least <em>something</em> and this something seems like it should work<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup>. You deploy it now because time is of the essence. It fails and you now have less time or <a href="https://status.mailgun.com/incidents/p9nxxql8g9rh">have caused more mess to clean up</a>.</p>
<p>Only in hindsight do you realize a better option was available. Or, maybe the option you chose <em>was</em> the best one, but you made a small mistake. Was the haste worth it?</p>
<p>Urgency changes your decision-making. It&rsquo;s a well-intentioned evolutionary design of your brain that causes unfortunate side effects when dealing with computer systems. In fact, &ldquo;urgency&rdquo; could probably be its own macro class of deploy fails given its prevalence as a factor in them.</p>
<h4 id="22-app-wasnt-tested-on-realistic-data">22. App wasn’t tested on realistic data.</h4>
<p>&ldquo;Everything works in staging! How could it have failed when we pushed it live? I thought we did everything right by testing the schema migration with our test data and load testing the new version.&rdquo;</p>
<p><em>Narrator: The software engineer is in their natural habitat. Observe how they pull at their own hair, a hallmark of their species to signal that something has distressed them. It is <a href="https://azure.microsoft.com/en-us/blog/update-on-azure-storage-service-interruption/">very difficult to replicate everything that&rsquo;s happening in production</a> in an artificial test environment without some sort of replication or replay system. This vexes our otherwise clever engineer.</em></p>
<p>&ldquo;It causes a crash!? What kind of deranged mortal would have an apostrophe in their name? Oh, it&rsquo;s common in some cultures? Hmmm&hellip;&rdquo;</p>
<p>If you keep your service online as you deploy, you should really <a href="https://docs.google.com/document/d/1ScqXAdb6BjhsDzCo3qdPYbt1uULzgZqPO8zHeHHarS0/preview?hl=en&amp;forcehl=1#heading=h.dfbilqgnc5sf">test your upgrade path</a> under simulated load. If you don&rsquo;t, you can&rsquo;t be sure if your planned upgrade process will work or how long it will take.</p>
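<p>The apostrophe bug in miniature, as a runnable Python sketch (in-memory SQLite, invented table): string-built SQL works fine on &ldquo;test&rdquo; data and falls over the moment a real name shows up, while a parameterized query does not care.</p>
<pre><code>import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")

name = "Miles O'Brien"

# Naive version: dies with a syntax error on real-world names (and invites injection).
try:
    db.execute(f"INSERT INTO users (name) VALUES ('{name}')")
except sqlite3.OperationalError as err:
    print("naive insert failed:", err)

# Parameterized version: handles the apostrophe without drama.
db.execute("INSERT INTO users (name) VALUES (?)", (name,))
print(db.execute("SELECT name FROM users").fetchall())
</code></pre>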
<h4 id="23-deploying-to-the-wrong-environment-accidentally">23. Deploying to the wrong environment accidentally.</h4>
<p>When you make deployments easy, it is possible to make deploying to prod <em>too</em> easy. And easy to use <a href="https://aws.amazon.com/message/680587/">doesn&rsquo;t necessarily mean easy to understand</a>. When a slip of the finger results in code going live, you may want to consider just how far you&rsquo;ve taken automation and if other parts of your process need to catch up.</p>
<p>Because one day, a sleep-deprived Future You is going <a href="https://www.joyent.com/blog/postmortem-for-outage-of-us-east-1-may-27-2014">to run a deploy script</a> where you have to pass in an environment name and you will type <code>dve</code> instead of <code>dev</code>. Once it dawns on you that the deploy system falls back to &ldquo;prod&rdquo; as the default, adrenaline shocks you awake with the force of 9000 espressos and you will never sleep again.</p>
<p>The regrettable reality is that internal tools often offer terrible UX because engineers refuse to give themselves nice things (including therapy). These tools, akin to a rusty sword with no hilt, make these sorts of failures tragically common. The rise of <a href="https://codingsans.com/blog/managing-platform-teams">platform engineering</a> is hopefully an antidote to this phenomenon, treating software engineers as user personas worthy of UX investments, too.</p>
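<p>A hedged sketch of the &ldquo;fail closed&rdquo; version of that deploy script, where an unknown environment name is an error rather than an invitation to prod (flag and environment names invented):</p>
<pre><code># Sketch: never fall back to prod. Unknown environments are rejected, and prod
# requires an extra, deliberately annoying flag.
import argparse
import sys

KNOWN_ENVS = {"dev", "staging", "prod"}

parser = argparse.ArgumentParser()
parser.add_argument("environment")
parser.add_argument("--yes-i-mean-prod", action="store_true")
args = parser.parse_args()

if args.environment not in KNOWN_ENVS:
    sys.exit(f"unknown environment {args.environment!r}; expected one of {sorted(KNOWN_ENVS)}")
if args.environment == "prod" and not args.yes_i_mean_prod:
    sys.exit("refusing to deploy to prod without --yes-i-mean-prod")

print(f"deploying to {args.environment}...")
</code></pre>
<p>Typing <code>dve</code> now earns you an error message instead of an incident review.</p>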
<h4 id="24-no-real-pre-production-environment">24. No real pre-production environment.</h4>
<p>You have a staging environment (congrats!), but it&rsquo;s an ancient clone from production which has seen so many failed builds, bizarre testing runs, and manual configs that it bears only <a href="https://www.eveonline.com/news/view/behind-the-scenes-of-a-long-eve-online-downtime">a pale resemblance</a> to the system it&rsquo;s supposed to epitomize. It gives you confidence that your software could deploy successfully, but not much else.</p>
<p>You wish you could tear it down and rebuild it anew, but everyone&rsquo;s busy and it&rsquo;s never quite important enough for someone to start working on it rather than some other task. Thus you&rsquo;re doomed to clean up small messes that could be caught by a true staging environment.</p>
<p>At the next DevOps conference you attend, every keynote speaker refers to the &ldquo;fact&rdquo; that &ldquo;everyone&rdquo; has a &ldquo;high-fidelity&rdquo; staging environment (&ldquo;obviously&rdquo;) as you weep in silence.</p>
<h4 id="25-production-os-has-a-different-version-than-pre-prod-os-and-the-app-fails-to-start">25. Production OS has a different version than pre-prod OS and the app fails to start.</h4>
<p>Production systems are incredibly important and we must patch frequently to keep them in compliance. But the same diligence isn&rsquo;t applied to pre-prod, development, build and other environments.</p>
<p>The systems in these environments may therefore be wildly out of date and the software they produce may be incompatible with the up-to-date, patched production system. Systems will drift so far from the standard that QA systems look like an alternate reality from production and make you a believer in the multiverse hypothesis.</p>
<h4 id="26-backup-botch-ups">26. Backup botch-ups.</h4>
<p>A production deploy requires a backup because hot damn have we fucked it up so many times and a backup makes everyone feel more confident. The administrator responsible for performing the backup writes the backup over the live system, causing an outage. Furthermore, because the data was overwritten by the botched backup, any existing backups are not recent.</p>
<p>Backup fuckups happen more than <a href="https://www.quora.com/Did-Pixar-accidentally-delete-Toy-Story-2-during-production/answer/Oren-Jacob">anyone admits</a> and when they go down, they go down <em>hard</em>. <a href="https://about.gitlab.com/blog/2017/02/10/postmortem-of-database-outage-of-january-31/">Recovering from them</a> is rough because no one thinks it will happen to them.</p>
<p>Lesser failures in this category include saturating the disk or network IO of the host taking the backup or filling the disk &ndash; each perfectly capable of causing an outage, too.</p>
<h4 id="27-audit-logs-are-turned-off">27. Audit logs are turned off.</h4>
<p>Audit logs are accidentally turned off during a configuration change or as part of a software upgrade and now the system is out of compliance. No one notices until the auditors ask for the audit logs months later and a wave of panic ripples through the teams involved.</p>
<p>Will we fail the audit? Will customers drop us? How much revenue is impacted? Will we still get raises at our quarterly review? Will I have to switch to getting artisanal roasted bean elixirs every other day?</p>
<h4 id="28-iceberg-dependencies">28. Iceberg dependencies.</h4>
<p>Only the simplest services run entirely isolated without any other dependencies. When done right, dependencies are properly documented and the infrastructure dependencies of each component are clear. Even better, the dependencies are specified declaratively, rendering it impossible for the human-generated documentation to drift from the machine specification.</p>
<p>But in less auspicious cases, the dependencies are hazy and can even form chains which loop back on themselves like a branching ouroboros eating its own rotting tails. Debugging a production incident for a system with unknown dependencies is software archeology where the only treasure is tears.</p>
<p>The infrastructure upgrade toppled some of the apps and services running on top of it, but the people deploying the upgrade lack context on those casualties and you all wonder when the Jigsaw puppet will come into view and reveal this has all been a grand experiment to pit you against each other.</p>
<p>“We upgraded the OS, clearly everything will be fine!” My brother in christ your system fetches Kerberos creds automatically on boot, but your first boot on a fresh host fails because the Kerberos fetch infra depends on a QA host that was decommed 6 months ago!</p>
<p>And then there&rsquo;s the ultimate iceberg dependency: DNS. If DNS is borked <a href="https://status.pagerduty.com/incidents/vbp7ht2647l8">or misconfigured</a>, all sorts of thorny problems can emerge.</p>
<h4 id="29-enabling-a-new-feature-in-vendor-software-without-load-testing-it">29. Enabling a new feature in vendor software without load testing it.</h4>
<p>Vendors make all sorts of claims about the behavior of their wares. It&rsquo;s fast and stable. It migrates its data format. It slices and dices. It follows semver. Should you believe them? <a href="https://blog.roblox.com/2022/01/roblox-return-to-service-10-28-10-31-2021/">In a word, no</a>.</p>
<h2 id="configuration-tarnation">Configuration Tarnation</h2>
<p><em>Playing god with your environments does not always result in intelligent design.</em></p>
<h4 id="30-per-environment-configuration-isnt-updated-ahead-of-the-deploy">30. Per-environment configuration isn’t updated ahead of the deploy.</h4>
<p>Per-environment configuration is a fact of life. Hostnames, instance counts, and other configuration settings will be necessarily different between environments. Keeping these up to date can be a challenge and it&rsquo;s all too easy to overlook updating the production template when new configs must be added.</p>
<p>New configuration values are often copied from a staging template into the production one without appropriate adjustments like switching the hostname. You will wonder which evil eldritch god you pissed off when deploying to prod takes down both the production and staging environments. This is so frequent and yet! and yet.</p>
<h4 id="31-deployed-new-configuration-but-forgot-to-restart-the-associated-services">31. Deployed new configuration, but forgot to restart the associated services.</h4>
<p>Deploying a configuration change is easy: apply the configuration, restart the service. You might think it should be easy to remember the steps when there&rsquo;s only two of them, but it&rsquo;s easy to overlook for quick deployments. Only later do you realize you set a new config variable in prod <a href="https://blog.heroku.com/how-i-broke-git-push-heroku-main">without applying it to the prod instances</a>.</p>
<p>Design patterns like the <a href="https://www.techtarget.com/searchsecurity/feature/Experts-say-CIA-security-triad-needs-a-DIE-model-upgrade">D.I.E. triad</a> can help — there&rsquo;s no way for infrastructure to drift if it&rsquo;s redeployed from scratch on each deployment. And, of course, automated deployments can help, too.</p>
<h4 id="32-feature-flag-fuckups">32. Feature flag fuckups.</h4>
<p>Feature flags are a simple and amazing way to explode the number of system states you must test. N flags make for 2^N combinations. Are all of them tested? Are you sure they&rsquo;re all set correctly? Do the people who test your application have the same flags as the unwashed masses? Are there old feature flags in your app that should be retired? What could happen if they were activated mistakenly? (just <a href="https://dougseven.com/2014/04/17/knightmare-a-devops-cautionary-tale/">ask Knight Capital</a>).</p>
<p>Maybe you push a release before the company holiday party and deploy the entire release successfully… until, a few intoxicants in, you realize you forgot to flip the feature flag and now you’re crying in the bar’s bathroom shakily singing along to Mariah Carey (though you suspect the &ldquo;baby&rdquo; she desperately wants for xmas isn&rsquo;t a feature flag).</p>
<p>It&rsquo;s also possible that you do the exact opposite and flip the flag too soon. Maybe a new product is accidentally announced early, deflating all the carefully constructed marketing plans leading up to the company conference and leading customers to ask why the new feature is “broken.” You had just regained the respect of the customer support team too&hellip;</p>
<p>Perhaps the new feature simply <a href="http://web.archive.org/web/20150404235419/https://stackstatus.net/post/115305251014/outage-postmortem-march-31-2015">uses too many resources</a> and you haven&rsquo;t scaled your infrastructure appropriately. Or maybe the freemium gate is broken and everyone gets access to premium features. Good luck explaining to customers why you now have to take away their new shiny feature unless they pay up for it…</p>
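<p>The 2^N arithmetic, made concrete (flag names invented); four flags is already sixteen states, and ten flags is over a thousand combinations you are implicitly claiming to have tested:</p>
<pre><code>from itertools import product

flags = ["new_checkout", "dark_mode", "fast_search", "beta_billing"]

combinations = list(product([False, True], repeat=len(flags)))
print(len(combinations))           # 16 states for 4 flags; 1024 by the time you have 10

for combo in combinations[:3]:     # a taste of the state space you now own
    print(dict(zip(flags, combo)))
</code></pre>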
<h4 id="33-delayed-failures">33. Delayed failures.</h4>
<p>Faulty configuration may not necessarily cause failures immediately. It&rsquo;s only after you do some other, seemingly unrelated operation does the fault cause any symptoms. Like medicine, it can be difficult to untangle exactly what faults are the cause of what symptoms.</p>
<p>For systems like load balancers or orchestrators, a bad configuration can remain in place and as long as the system is stable, the misconfiguration will cause no ill effects. But one day when you decommission a cluster as planned, another cluster immediately shits itself &ndash; suffering a total outage baffling everyone &ndash; and only after many painful hours of debugging do you realize its healthchecking was configured against the one you decommed.</p>
<p>If the team owning that other cluster has poor monitoring hygiene, they may only discover their service is dead much later. But the outage gods care not for your mortal troubles and will do nothing to ease the pain of what is now a multi-day incident all due to faulty health checks.</p>
<h4 id="34-what-lies-beneath">34. What lies beneath.</h4>
<p>The layers far underneath your application can still cause your deployment to fail.</p>
<p>Orchestrator fails? Your service is dead. Operating system fails? Dead. Disk controller fails? Dead. BGP? Dead. DNS? Dead. Backhoe cuts the backbone to your sole datacentre? Dead. NVMe subtly violates DMA protocol? Dead. NIC driver fails or <a href="https://news.ycombinator.com/item?id=23020774">goes rogue</a>? Dead. Baseboard management controller borks? Dead. Deploy a bunch of new machines into a cluster with a bad BIOS? This may shock you, but: dead.</p>
<h4 id="35-scheduled-failures">35. Scheduled failures.</h4>
<p>Deployments may appear to succeed only to fail hours or days later if you have periodic background jobs or the ability to schedule tasks. The deployment isn&rsquo;t successful until these jobs and tasks run successfully.</p>
<p>Perhaps you deploy a busted systemd timer which causes all your nodes to self-destruct after 8 hours&hellip; and only discover this &ldquo;fun&rdquo; fact after you deploy to your first tranche in prod. See also: the dreaded slow memory leak.</p>
<p>Another variant is <a href="https://web.archive.org/web/20211104160742/https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/">the odd date/time bug</a> which causes the application to malfunction only on leap years or when the clocks switch for daylight saving time. If you&rsquo;re not swift with your incident response, the incident resolves itself and you&rsquo;re left scratching your head until someone realizes it&rsquo;s because the clocks rolled back.</p>
<p>Do you bother fixing the bug? Or do you hope to find another job before the next orbital period elapses?</p>
<h4 id="36-accidentally-push-components-beyond-their-limits">36. Accidentally push components beyond their limits.</h4>
<p>Components may have poorly documented or <a href="https://slack.engineering/slacks-outage-on-january-4th-2021/">undocumented limitations</a> or may simply become unusably slow when assigned more work than they were designed for. Does your database have a limit on the number of connections? Better not scale the number of clients beyond that number, then!</p>
<p>Is it a deployment failure? Yes, if a deployment <a href="https://aws.amazon.com/message/11201/">pushes the system beyond its limits</a>, which is more likely to happen when you add new v2 replicas before retiring old v1 replicas.</p>
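<p>As a back-of-the-envelope sketch (every number below is hypothetical), the danger window is precisely when v1 and v2 replicas overlap:</p>
<pre><code class="language-python"># hypothetical numbers: connection demand during a rolling deploy
max_connections = 500            # database's configured connection limit
pool_size_per_replica = 20       # connection pool per app replica
steady_state_replicas = 15       # v1 only: 300 connections, comfortably under the limit
replicas_mid_rollout = 30        # v1 + v2 running side by side before v1 retires

peak_demand = replicas_mid_rollout * pool_size_per_replica
print(peak_demand, "connections requested,", max_connections, "allowed")  # 600 vs 500: ouch
</code></pre>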
<p>In the microservices world, this can manifest as running so many k8s jobs without deleting them that all the k8s operations on jobs begin taking tens of seconds because the cluster is bogged down with so much cluster metadata. Is the inevitable conclusion of microservices simply more microservice instances and metadata than actual work and user data? Makes u think.</p>
<h4 id="37-builds-always-use-the-latest-version-of-a-library">37. Builds always use the latest version of a library.</h4>
<p>Some well-meaning person may decide that builds for a piece of software always use the latest version of its dependencies. This ensures that whenever you release, you always have the latest security patches.</p>
<p>This sounds wise until one of the dependencies causes a subtle API breakage and your app fails to function. Or, any of your dependencies&rsquo; authors could decide &ldquo;fuck this, I&rsquo;m not maintaining this open source project anymore and giving corporations free labor&rdquo; and push a dead version of a package.</p>
<p><img src="/blog/img/random-dev-pyro-cat.png" alt="A photograph of a ginger cat sauntering away from a large fire behind it. The cat is labeled: random dev in Nebraska pushing a dead version of their open-source package. The fire is labeled: all modern digital infra."></p>
<p>Now you&rsquo;re unable to build new versions of the app until someone resolves the dependency situation. Worse, if that fed-up developer has pulled their old versions out of spite or frustration with the pain of maintaining OSS, <em>and</em> if you haven&rsquo;t archived builds of old versions, then you may not even be able to deploy at all. And this is how you end up cursing a random dev you hadn&rsquo;t even heard of until just now when you should be taking your lunch break.</p>
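<p>One hedge against this (a sketch, not a full supply-chain strategy) is pinning dependency versions and archiving the artifacts you build, so &ldquo;latest&rdquo; never sneaks into a release unreviewed. The package names and versions below are purely illustrative:</p>
<pre><code># requirements.txt with pinned versions (illustrative)
requests==2.31.0
urllib3==2.0.7

# versus the "always grab the latest" variant that breaks one day:
# requests
# urllib3
</code></pre>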
<h2 id="statefulness-is-hard">Statefulness is Hard</h2>
<p><em>Mere mortals cannot maintain accurate mental models of data in distributed systems. Even the divines struggle.</em></p>
<h4 id="38-an-irreversible-process-fails-part-way-through">38. An irreversible process fails part way through.</h4>
<p>Some irreversible process fails part way through your deploy. Possibly it was a migration or some other critical step during your deployment. For whatever reason, this step didn&rsquo;t happen when deploying to the other environments; it only happened in the one environment that matters most.</p>
<p>What state is the system actually in? Should you rollback? If you try to roll back, will it even work? You&rsquo;re in uncharted waters under shrouded stars.</p>
<p>Data migrations are often a one-way process. Have you tried migrating all of your existing data to see what happens? How long does it take? Do you have backups? Could you even use the backups, or would restoring result in yet more downtime?</p>
<p>If you don&rsquo;t know the answers to these questions, you might find yourself deploying an <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping">ORM</a>/data model layer which automatically migrates read-only database values to a new format and somehow corrupts the records, resulting in you frantically trying to patch and deploy a fix before too much of your DB becomes unreadable.</p>
<p>Or perhaps you set <code>--timeout 10</code> on your ORM migration with the innocent assumption that &ldquo;10&rdquo; here refers to seconds. It&rsquo;s 10 milliseconds. There are no <a href="https://stackoverflow.com/questions/2024632/what-is-the-point-of-database-migrations-down">down migrations</a>. And migrations can be arbitrary JS and therefore not guaranteed to be atomic or idempotent, and now you&rsquo;ve started a slow-motion train crash that you cannot stop. One hour of scheduled downtime becomes 18 hours. Your youth and zeal are irreversibly drained.</p>
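<p>For contrast, a minimal Alembic-style sketch in Python (the table and column names are hypothetical) of what a migration with an explicit &ldquo;down&rdquo; path looks like &ndash; without the <code>downgrade()</code>, rollback is guesswork:</p>
<pre><code class="language-python"># a minimal Alembic-style migration sketch; table/column names are hypothetical
from alembic import op
import sqlalchemy as sa


def upgrade():
    # forward migration: add the new column
    op.add_column("orders", sa.Column("currency", sa.String(3), nullable=True))


def downgrade():
    # the "down" migration: the escape hatch when the deploy goes sideways
    op.drop_column("orders", "currency")
</code></pre>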
<h4 id="39-distributed-data-vore">39. Distributed data vore.</h4>
<p>Distributed storage / database systems require careful understanding of their operational characteristics if you are to operate them safely. They can be used to achieve better uptime, reliability, and possibly even lower latency if operated within their safety margins… but they also require more care and feeding than traditional databases with an authoritative primary and can be quite temperamental.</p>
<p>If operated incorrectly, distributed storage can silently lose data or disagree on the data they contain if nodes aren&rsquo;t retired correctly or if an insufficient number of nodes remain healthy. Do you know enough about your data storage layers to operate them safely? Or when you next roll-reboot your Elasticsearch cluster will it silently eat 30% of your data for seemingly no reason at all? The customers now complaining that all their graphs are 30% too low are certainly not silent.</p>
<p>When you deployed new database nodes to prod, did you assume the cluster would rebalance on them? Oopsies, it didn&rsquo;t! And thus when you decommissioned the old nodes, you destroyed 99% of your data in the process. There are not enough oofs in this universe to reflect this oofiness.</p>
<h4 id="40-cache-is-an-unhealthy-monarchy">40. Cache is an unhealthy monarchy.</h4>
<p>If caches aren&rsquo;t healthy, or rolling restart instructions aren&rsquo;t followed or are insufficient, the system fails to start.</p>
<p>Where to begin? Let&rsquo;s start with why caches exist in the first place: to avoid repeated execution of expensive computations by storing a mapping between inputs and their results in memory (aka &ldquo;caching them&rdquo;). Caches will typically discard infrequently-used results automatically to make space for frequently-used results, and can be asked to drop any results that are no longer valid. How can this go wrong? Oh so many ways!</p>
<p>First off, just like a database schema, the format of data in the cache might not be compatible with the new version of the app. Similarly, when there is more than one application instance, the old version of the app will run alongside the new version and could see cache entries written by its successor. Either version of the app can malfunction when it reads data written by the other &ndash; so even deploying a single canary can poison the cache and take down every instance of the software, old and new.</p>
<p>Have your engineers thought about cross-version compatibility? Do they reject linear notions of spacetime and thus believe compatibility is a blasphemous act against the <a href="https://en.wikipedia.org/wiki/Holographic_principle">holographic principle</a>? &ldquo;Spacetime is just an abstraction,&rdquo; they tell you coolly while sipping their matcha latte.<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup> You are tempted to remind them that money is also an abstraction and therefore they should abstain from it, too, but it&rsquo;s faster if you just fix it yourself.</p>
<p>Second, the keys might change. If version A of an app uses one nomenclature for keys but its successor (version B) uses another, version B will operate as if the cache is empty. The app now must perform much more work to populate the cache in the new format. If both versions of the app are running simultaneously, they will fight for space in the cache &ndash; and the cache is limited in how much data it can hold by necessity. Now the cache has a lower <a href="https://www.fastly.com/blog/truth-about-cache-hit-ratios">hit ratio</a> and more requests must go through the more costly &ldquo;uncached&rdquo; path.</p>
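<p>One way to at least stop old and new instances from clobbering each other (a minimal sketch; the key format is hypothetical) is to namespace keys by version &ndash; accepting that each new version starts with a cold cache:</p>
<pre><code class="language-python"># a minimal sketch: namespace cache keys by app/schema version so versions
# never read each other's entries (at the cost of starting cold on deploy)
APP_CACHE_VERSION = "v42"

def cache_key(entity, entity_id):
    return f"{APP_CACHE_VERSION}:{entity}:{entity_id}"

print(cache_key("user", 123))  # "v42:user:123"
</code></pre>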
<p>Third, a common ReCoMmEnDaTiOn is to flush caches when deploying new versions of software (&ldquo;it&rsquo;s a caching issue, clear your browser cache&rdquo; said the frontend dev to the product manager as the PM rolled their eyes). This can be dangerous when using a shared cache since so much extra work must now be performed with every request.</p>
<p>With healthy cache hit ratios commonly in the 90% range for some workloads, the part of the application behind the cache must handle ten times its usual throughput until the cache is rebuilt. Could you handle a sudden 10x increase in your workload?</p>
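<p>The arithmetic behind that &ldquo;ten times&rdquo; is worth internalizing: origin load scales with the miss rate, not the hit rate.</p>
<pre><code class="language-python"># origin load multiplier after a full cache flush, assuming a 90% hit ratio
hit_ratio = 0.90
normal_origin_share = 1 - hit_ratio        # 10% of requests normally reach the origin
post_flush_origin_share = 1.0              # 100% until the cache rewarms
print(post_flush_origin_share / normal_origin_share)  # 10.0
</code></pre>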
<h2 id="net-not-working">Net-not-working</h2>
<p><em>We make piles of thinking sand talk to each other through light and wonder why weird shit happens.</em></p>
<h4 id="41-accidental-self-dos">41. Accidental self-DoS.</h4>
<p>The accidental self-DoS could be due to many reasons. Maybe new versions of the application inhibit the CDN&rsquo;s ability to cache, but this non-functional requirement wasn&rsquo;t recorded anywhere. Maybe a new analytics feature inundates the application backend with data collected to appease the whims of product management. Maybe a new retry mechanism is being used for failed requests, causing traffic amplification if the backend becomes even a little sluggish.</p>
<p>The end result is the same: the new version of the app <a href="https://aws.amazon.com/message/12721/">swamps the backend service and causes downtime</a>. Engineers tirelessly work to restore service by standing up more instances or filtering the unnecessary traffic the application created for itself.</p>
<p>You ask your devs what happened and they say, &ldquo;Well, it didn&rsquo;t work with the CDN so we added cache-busting headers to make it work.&rdquo; You nod quietly while gazing into the abyss.</p>
<h4 id="42-poorly-configured-caching">42. Poorly configured caching.</h4>
<p>The previous version of the app configured common static assets with a long cache duration. This caches the asset for long periods of time in CDNs and in users&rsquo; browsers. Fabulous! The app loads more quickly for users, especially those that visit frequently.</p>
<p>You build a new version of the app with new cached assets. The new version looks great in staging and dev, where testers are unlikely to have stale cached assets. But when you deploy it to production, you receive reports from your most fervent supporters that the app &ldquo;looks weird.&rdquo; It&rsquo;s a Frankenstein&rsquo;s monster mismatch of static assets from the old and new versions and behaves unpredictably.</p>
<p>Before enough understanding of what has happened filters through to the development team, all of the stale caches expire and the dev marks the JIRA ticket closed. The issue repeats again when you release the next minor redesign.</p>
<p>Due to the nature of CDNs and prod websites, there’s a category of people for whom this is a persistent problem and who <em>should</em> be able to fix it… and yet can’t. The entirely avoidable fuckup is a formidable beast.</p>
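<p>A common mitigation (a sketch of the general idea, not necessarily what this team did) is content-hashed asset names: every build gets new URLs, so stale HTML can never silently mix old and new assets.</p>
<pre><code class="language-python"># a minimal sketch of content-hashed ("fingerprinted") asset filenames
import hashlib
from pathlib import Path


def fingerprinted_name(path):
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()[:8]
    stem, suffix = path.rsplit(".", 1)
    return f"{stem}.{digest}.{suffix}"

# e.g. "static/app.css" becomes "static/app.3f2a9c1b.css";
# the HTML then references the hashed name, which changes with every release
</code></pre>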
<h4 id="43-a-hurricane-of-reconnections-foments-a-flash-flood">43. A hurricane of reconnections foments a flash flood.</h4>
<p>You disconnect clients simultaneously during your deployment, leading to them <a href="https://aws.amazon.com/message/65648/">all trying to reconnect simultaneously</a> shortly thereafter. Your system was never designed to handle <a href="https://discordstatus.com/incidents/dj3l6lw926kl">a flash flood of connections</a>, so it stays down until <a href="https://web.archive.org/web/20181208123409/https://slackhq.com/this-was-not-normal-really">it&rsquo;s scaled manually</a> well beyond what it was originally budgeted for.</p>
<p>Someone throws a ticket to add exponential backoff with randomization to the bottom of the client team&rsquo;s backlog. Years pass and it happens again as their backlog only grows.</p>
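<p>That perpetually-deferred ticket is small, too &ndash; a minimal sketch of exponential backoff with jitter (the function and parameters here are illustrative, not any particular client library):</p>
<pre><code class="language-python"># a minimal sketch: exponential backoff with "full jitter" so reconnects
# spread out over time instead of stampeding the backend all at once
import random
import time


def reconnect_with_backoff(connect, max_attempts=8, base=0.5, cap=60.0):
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            # sleep a random amount up to the exponential cap before retrying
            time.sleep(random.uniform(0, min(cap, base * (2 ** attempt))))
    raise ConnectionError("gave up after max_attempts reconnect attempts")
</code></pre>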
<h4 id="44-dos-yourself-via-cdn-purge">44. DoS yourself via CDN purge.</h4>
<p>Purging a CDN with a cache hit ratio of 90% results in an immediate 10x throughput increase to the origin. Did you deploy the required additional capacity?</p>
<p>It&rsquo;s such an easy button to press, too. Some CDNs don&rsquo;t put a glass case around the button nor require administrator permission to press it. Pressing it immediately grants you the rank of &ldquo;rogue developer&rdquo; and now you&rsquo;ve given your security team a reason to require ten more hours of annual security awareness training. Your access to the secret cool kids Slack channel is purged, too.</p>
<h4 id="45-accidental-network-isolation">45. Accidental network isolation.</h4>
<p>Adjust some network config you read about on Stack Overflow and suddenly the site is down and no one has access to the systems that can bring it back up and ahhhhh. You frantically call your AWS or colo account rep to see what they can do as your mobile device buzzes incessantly.</p>
<p>The essence of this fuckup is that the outage locks you out of the systems which need to be accessed to resolve the outage. This can be something as simple as firewall rules or as complex as <a href="https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/">unicast BGP configurations</a> across complicated multi-vendor networks that locks everyone out of your data centers.</p>
<h4 id="46-the-orchestrator-goes-down-with-the-ship">46. The orchestrator goes down with the ship.</h4>
<p>A core service on which your orchestrator depends is down. You would normally use the orchestrator to deploy the service, but since the service is down, the orchestrator no longer functions. Now someone must dig out the dusty documentation on the old manual way to do this as the clock is ticking. Does the manual way even still work? Who even has access?</p>
<p>Elsewhere, you put Consul into the deploy path six months into its lifespan and it packet-storms itself into oblivion, taking down not only service discovery but also your ability to deploy anything or even log into nodes.</p>
<h2 id="rolls-and-reboots">Rolls and Reboots</h2>
<p><em>“No plan of operations reaches with any certainty beyond the first encounter with production” – Helmchart von Faultke<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup></em></p>
<h4 id="47-no-rollback-plan">47. No rollback plan.</h4>
<p>It&rsquo;s truly shocking how often orgs don&rsquo;t have a rollback plan. But just like your mom told you about jumping off bridges, just because everyone is doing it doesn&rsquo;t mean it isn&rsquo;t dangerous.</p>
<p>There&rsquo;s more than one way to handle this properly, like CD with canaries, blue / green deploys, full rollback of everything&hellip; but to not have a strategy for this at all and YOLO it? If only we gatekept less against liberal arts majors to fill this chasm of critical thinking.</p>
<p>A special mention goes to the untested rollback plan, too. &ldquo;We have a complicated deployment that went smoothly in staging, pre-prod, and every other environment, so why would we ever need to rollback?&rdquo; you say. &ldquo;It can&rsquo;t possibly fail in production,&rdquo; you say.</p>
<p>You&rsquo;d be correct 9 out of every 10 times&hellip; but how many times do you deploy a year again? So, you painstakingly craft a rollback plan for your deployments, but never test it since it&rsquo;s unlikely to be used. And how little confidence you have in your rollback plans leads to this next fuckup.</p>
<h4 id="48-forward-fixing">48. Forward &ldquo;fixing.&rdquo;</h4>
<p>Something didn’t go as planned, so you decide to roll forward with some new plan you came up with on the spot instead of rolling back – and then something fails in the roll forward.</p>
<p>This is a fuckup sprouting from the “developers are optimistic by nature” problem. A deployment fails on what you believe is some minor technicality. And then you fail to resist the temptation of making a &ldquo;quick fix&rdquo; to patch it while on the call and build a new version of the software so your team can ship&hellip;</p>
<p>&hellip;But it might not be a quick fix and you&rsquo;re proposing deploying something completely untested straight to production. Somehow the SRE team is okay with this, or maybe they&rsquo;re hesitant but let it slide since there are already too many hills on which they must die.</p>
<p>Either way, you&rsquo;re risking your uptime and piling on stress just to deploy a little earlier than you otherwise would. A worthy heuristic for this might be: because developers appear to be optimistic by nature, assume even the &ldquo;tiniest&rdquo; of hotfixes are incomplete and require more testing.</p>
<h4 id="49-scheduled-tasks-build-up-while-the-system-is-down-for-maintenance-and-dos-the-system-upon-startup">49. Scheduled tasks build up while the system is down for maintenance and DoS the system upon startup.</h4>
<p>Your system has a job queue with workers that are carefully tuned not to consume too much money and still complete their work. While the maintenance page is up, the workers are shut off. Deploying the app takes longer than expected and scheduled tasks pile up. The original pool of workers is no longer sufficient to process the backlog of scheduled tasks and people waiting on their results find your team to be insufficient.</p>
<h4 id="50-circular-dependencies-in-infrastructure">50. Circular dependencies in infrastructure.</h4>
<p>Circular infra dependencies result in a particularly nefarious failure pattern. If anything in the chain ever goes down completely, it&rsquo;s impossible to stand the system back up without yolo-rushing a new version of a component to break the chain. For instance, perhaps you store the latest deployed revision <a href="https://buildkite.com/blog/outage-post-mortem-for-august-22nd">on your own host</a>, which means you can&rsquo;t access it when something goes wrong.</p>
<p>You may design your system nicely, but <a href="https://aws.amazon.com/message/41926/">time inexorably marches forward</a> without regard for your intentions. This failure is an emergent property of all the changes people make over time. It’s an iceberg failure that only emerges when another failure has already emerged and is plaguing you. That is to say, circular infra dependencies result in a particularly nefarious failure pattern&hellip;</p>
<h2 id="disorganized-organization">Disorganized Organization</h2>
<p><em>No amount of fancy automation can truly save you from disorganized organizational processes.</em></p>
<h4 id="51-no-one-wants-to-write-docs">51. No one wants to write docs.</h4>
<p>Raise your hand if you&rsquo;ve ever worked at a company with great internal documentation. Try to recall when you&rsquo;ve ever read truly complete and up to date deployment documentation. For many of you (most of you, even), nothing comes to mind, right?</p>
<p>The closest might be a well-commented deployment script and some associated high level description. Perhaps it&rsquo;s a design doc that you trust to be sort of right but cannot assuage your suspicion that the implemented system has drifted away from it. If you trust your documentation to be 100% accurate when deploying software, you&rsquo;re going to have a bad time because it&rsquo;s inevitable that there will be errors in it.</p>
<p>And because no one wants to write docs, numerous fuckups occur. You followed outdated <a href="http://web.archive.org/web/20140831011650/https://stackstatus.net/post/96025967369/outage-post-mortem-august-25th-2014">or misleading</a> docs on how to make the release, which fucked up the deploy. You forgot to update customer-facing docs and they configured something incorrectly and now all your other customers are suffering from the outage.</p>
<p>You forgot to send release notes which, wait, how is that a fuckup? Oh right, the account manager for your largest customer added in terms about releasing their requested feature by a certain date (without telling anyone in product or engineering of this, naturally) and now you&rsquo;re re-negotiating their multi-year contract and giving them a serious discount to stay which is going to be difficult for your CEO to explain on the next earnings call.</p>
<h4 id="52-people-only-get-rewarded-for-diving-saves">52. People only get rewarded for diving saves.</h4>
<p>People are congratulated for resolving the downtime or for catching a failure as it&rsquo;s happening, but no one is rewarded for anticipating failures ahead of time.</p>
<p>The CEO wants things to be shipped now so everything is a rush to get half-baked features out the door quickly. But that causes quality problems elsewhere. At least half the deploys have an emergency “oh shit something is borked” follow-up deploy. And either you roll forward or the app limps along and languishes in a janky existence for the next five days until someone builds the fix and ships it.</p>
<p>Whoever ships the fix is lauded for restoring sanity, but it never should have been broken in the first place. And everyone knows if they had chosen to roll back, the CEO would’ve been angry because his little gamification feature wouldn’t have been there for five days. You suffer, your team suffers, your customers suffer, but bossman is happy and the bleary-eyed engineer who spent days on the recovery gets a pat on the back. Well done, naive salaryman.</p>
<p>Then a conference is coming up; your CEO and CMO demand a splashy announcement for it. That means your Q3 deploys are now beginning-of-Q2 deploys… which is in two weeks. You ship a ton of stuff that is half-baked and barely strung together, but the press release goes out (along with the press releases of all your competitors in an unnavigable sea of babblespeak that the market largely ignores).</p>
<p>The team is congratulated while the architect cries in the bathroom grieving their multiple quarters of work of carefully planned releases as support tickets now pile up with customer complaints about how features are broken. By end of year, half the features are still being &ldquo;stabilized&rdquo; and the other half are mothballed.</p>
<h4 id="53-no-process-for-rarely-performed-tasks">53. No process for rarely-performed tasks.</h4>
<p>A task is rarely performed, so there&rsquo;s no documentation on it. Regrettably, someone must perform the task now and today the universe has decided for that person to be you. You go to look for documentation and find nothing. You look at the code for the systems involved and it&rsquo;s unintelligible. You <code>git log</code> the associated files and discover that everyone involved with the system has already moved on. You wonder if you should move on, too.</p>
<p>When disparate teams <a href="https://www.atlassian.com/engineering/post-incident-review-april-2022-outage">try to coordinate on rarely-performed tasks</a>, a special sort of confusion emerges.</p>
<h4 id="54-have-to-build-a-replica-for-noobs-who-cant-write-queries">54. Have to build a replica for noobs who can’t write queries.</h4>
<p>It&rsquo;s deemed necessary for internal data analysts to be able to run queries against production data so they can serve customers and forecast future business (or other such violations of linear time). They&rsquo;re granted read-only credentials to the production database because that should be sufficient. Later, you are paged because the service is down and the database is wedged.</p>
<p>You discover that one of the data analyst&rsquo;s queries is taking up way too much memory and has locked a critical table. You kill the query, sever access, and prepare for hell in the morning. In the end, you deploy a replica so the internal teams can query production data without killing the production database. Leaders considered it too expensive to set up originally, but how expensive was the outage and all the effort which went into restoring service?</p>
<h4 id="55-layer-8-denial-of-service">55. Layer 8 denial of service.</h4>
<p>Once upon a time, you and your team decided to rewrite an app because your company&rsquo;s business model changed and thus very little of it was still useful. You also didn’t like Ruby, so you decided to rewrite it in Scala because Scala was hot and everyone on the team wanted to learn Scala. Great, let’s trust our important business function to people learning a new language!</p>
<p>The first version of the app was supposed to be deployed alongside the Ruby version and coexist with it. That deployment failed and also caused the Ruby app to fail. Repairing that took 8 hours of downtime. Naturally, the sysadmin didn’t particularly appreciate having to stay for an extra 8 hours on a Friday because your team wanted to deploy outside of business hours.</p>
<p>A month later, you try again. It deployed successfully! &hellip;But the migration for the user accounts fucked up. You could use the new app, but no one had accounts for it other than the root account. A week later, you try again with a script to deploy all the user accounts &ndash; and that was successful.</p>
<p>Later, your team discovers the v1 of the app is very slow when actual work is done in it. So, you switch to using Cloudsearch to “optimize” part of the app. And it does! &hellip;Except Cloudsearch is <em>eventually</em> consistent and now users complain that when they add something to the app and click refresh, it doesn’t show up until 30 seconds later.</p>
<p>Your team rushes a hotfix to undo the Cloudsearch integration and restore the previous functionality. The sysadmin says no. You gave them less than a day’s notice to deploy this new version, even though your team knew about it for a week while you worked on undoing the integration. You will be lucky if you ship anything else the rest of the year now.</p>
<p>tl;dr the sysadmin is fed up and doesn’t trust anything your team deploys now.</p>
<h4 id="56-engineers-take-key-bumps-of-yolo-in-prod">56. Engineers take key bumps of YOLO in prod.</h4>
<p>Your company prides itself on being a meritocracy with a flat hierarchy, which is why senior leaders (like your boss) can disregard deploy processes &ndash; like fixing a bug directly on the production node and recompiling, then re-introducing the bug on the subsequent deploy because the fix never landed in tree.</p>
<p>This travesty is an argument in favor of making manual deployments impossible or difficult (<a href="#69-manual-deploys">see #69</a>), but there&rsquo;s no guarantee that any proposed safeguards would avoid veto by the Director of YOLO Engineering who is responsible for the fuckup in the first place. Because it&rsquo;s never their fault, is it?</p>
<p>There&rsquo;s also a coding variant to this fuckup: someone yolo-typing new code into a live virtual machine. They hot patch at the Erlang console because they relish living in sin. It might be called performance art if it wasn&rsquo;t fated to desecrate service performance.</p>
<p><img src="/blog/img/hot-patch-erlang-console-meme.png" alt="An AI render of the pope wearing a stylish puffy coat, giving him the appearance of a hip hop artist with ample swagger in juxtaposition with his role as pope. The image is captioned: your manager on their way to hot patch at the Erlang console."></p>
<p>That anyone would be allowed to do this assuredly reflects organizational dysfunction. It is so bonkers to be able to just like, write code on a production box and expect that it works. It is a pathological level of optimism. It is suspiciously reminiscent of <a href="https://www.youtube.com/watch?v=WUhOnX8qt3I">the Pyro in TF2</a> who runs around burning everyone to a crisp with a flamethrower while, from their deranged vantage, they are showering the world in glittering rainbows and bubbles and whimsy.</p>
<p>&ldquo;Well, I&rsquo;d never do <em>that</em>!&rdquo; you say, thinking this doesn&rsquo;t apply to you. And then you&rsquo;d proceed to attach VisualVM to the JMX port and yolo some gc tuning. Or you&rsquo;d run some exploratory bash or SQL on the prod instance to get some data without having tested it fully in a test environment. Maybe you aren&rsquo;t debugging in prod, but using tracing or performance analysis tools in prod to debug problems or tune settings without having tried first in QA at the very least makes you a co-conspirator and likely a Staff YOLO Engineer (maybe even Senior Staff if you continue to do it after reading this! Don&rsquo;t let your dreams be memes).</p>
<h4 id="57-cloud-credits-are-about-to-run-out-so-you-rush-deploys-to-reduce-your-aws-bill">57. Cloud credits are about to run out so you rush deploys to reduce your AWS bill.</h4>
<p>You have to scale down really quickly because your cloud credits ran out and you can no longer afford your infra&hellip; which means you were spending money you didn’t have for a long time because Papa Bezos was your sugar daddy for a bit. As you scale down in a panic, you <a href="https://buildkite.com/blog/outage-post-mortem-for-august-22nd">fail to load test the new database</a> and regret not just selling out at one of the tech giants. Now your organization has successfully reduced costs… but also revenue.</p>
<h4 id="58-behavior-in-your-dependents-fucks-up-your-deploy">58. Behavior in your dependents fucks up your deploy.</h4>
<p>It&rsquo;s trivial to mentally model your service in isolation; the rest of the world is immutable and your deployment is the only change in motion. In reality, other teams are hurling themselves at their OKRs, your sales team is onboarding new accounts, and your data integrations are pipelines haphazardly built with popsicle sticks and glue. Like nature, the system is in constant flux, and no matter how confident you are in your deploy, <a href="https://code.google.com/p/chromium/issues/detail?id=165171#c27">an unexpected shift in the system elsewhere</a> can result in your system failing.</p>
<p>Maybe another team has worse deployment hygiene than you do and they yolo&rsquo;d a version straight to prod without giving you a chance to integrate with it. Maybe they&rsquo;re hotfixing an incident themselves and your service is collateral damage. Maybe a data partner changes their data format without announcing it (see <a href="#51-no-one-wants-to-write-docs">#51</a>) and every system in the path falls flat on its face.</p>
<p>It&rsquo;s not your fault, but it <em>is</em> your problem. Scream into a pillow and sing lamentations to your pet or whatever you need to do to process your grief and move on to acceptance. Because if you want to prevail, you must be nimble and maintain <a href="https://gotopia.tech/bookclub/episodes/security-chaos-engineering">the capacity to recover from unexpected failure</a>.</p>
<h2 id="business-illogic">Business Illogic</h2>
<p><em>The deployment may pass your tests but it can still break your business logic.</em></p>
<h4 id="59-breaking-api-change-for-a-partner">59. Breaking API change for a partner.</h4>
<p>Your team finally tackles tech debt and deploys the new, shiny, streamlined version of the API. A few hours later, a partner is screaming at your CTO because they were using the API in a way you never fathomed was even possible and their integration no longer works due to your change.</p>
<p>Another time, you&rsquo;re celebrating the successful update of the auth method in your SaaS app. It passed all tests, got approval from the security team, and nothing broke after deployment&hellip; but, as you&rsquo;ll soon realize upon wading into a shit show the next morning, you forgot to tell customers about the auth method update. Everyone built access using a certain type of token and switching the service to use a new method completely broke customer access. Guess who will be blamed for lower renewal numbers this quarter?</p>
<p>The &ldquo;funny&rdquo; thing about breaking API changes is devs will often argue what is or isn’t breaking. Semver this, semver that. It still takes the same signature and they only fixed a “bug” in the behavior of the other parameters… but what if the other software was relying on that behavior? Now it’s different and different is bad when customers rely on things staying the same.</p>
<h4 id="60-compliance-calamity">60. Compliance calamity.</h4>
<p>Compliance stuff is boring but it matters. Some subtle design, layout, wording, or data retention change in a highly regulated part of the system causes it to no longer be in compliance with one of the onerous compliance regimes it <em>must</em> be a part of for the business to remain viable.</p>
<p>For instance, your payment flow changed and now you&rsquo;re no longer in compliance with PCI. This remains undiscovered until much later, as most failures of this type are. If you&rsquo;re unlucky it&rsquo;s the auditor who discovers it and you&rsquo;re now buried in paperwork. Or you erode trust by violating user expectations about <a href="https://meta.stackoverflow.com/questions/340960/a-post-mortem-on-the-recent-developer-story-information-leak">how you handle their data</a>.</p>
<h4 id="61-robotstxt-that-inhibits-search-engine-indexing-and-traffic-plummets-as-a-result">61. robots.txt that inhibits search engine indexing and traffic plummets as a result.</h4>
<p>You change something in a way that results in search engines or other traffic sources deranking or delisting you. Maybe it&rsquo;s as subtle as borking the preview cards; sure, the links still work, but it&rsquo;s no longer as clickbaity to the ever-shortening attention spans of the plebeian spectators. Congratulations, you just killed your traffic source and meal ticket!</p>
<p>Everyone frantically tries to figure out what is going wrong as bank accounts drain. It might not even be something you changed — sometimes giants simply roll over in their sleep and crush smaller players. But it could also be that you messed up the robots.txt and are now poor.</p>
<h2 id="the-audacity-of-spacetime">The Audacity of Spacetime</h2>
<p><em>Deploying the system at scale is different than deploying the little test sandbox version of it.</em></p>
<h4 id="62-deployment-assumes-all-servers-are-updated-at-the-same-time-but-theyre-not">62. Deployment assumes all servers are updated at the same time, but they’re not.</h4>
<p>This fuckup is so, so common. It breaks the simplified, but wrong, mental model that users talk to one of your servers and only to that one server. It’s a useful model because it simplifies a bunch of things and is mostly true; when it’s not true, it&rsquo;s often fine to overlook the effects. But, occasionally, the effects are catastrophic and nothing behaves properly until reality settles.</p>
<h4 id="63-a-new-deployment-begins-while-a-previous-one-is-still-in-progress">63. A new deployment begins while a previous one is still in progress.</h4>
<p>Canaries and staged multi-region deploys can, by design, take a while &ndash; so your upgrade is only partially tested and deployed, <a href="https://web.archive.org/web/20201214102416/https://blog.etsy.com/news/2012/demystifying-site-outages/">resulting in an outage</a>.</p>
<p>Most of the fuckups on this list are due to immature processes. But this one emerges as your processes begin to mature. Observing how your failures transform over time can elucidate your progress, a kind of mindfulness that is admittedly difficult to cultivate when feeling the crushing weight of disappointment.</p>
<h4 id="64-multi-stage-deploys-of-unrelated-components">64. Multi-stage deploys of unrelated components.</h4>
<p>You&rsquo;ve had so many deployment failures in the past and every deployment has been painful. Some well-meaning person has decided that deployments need to be surveilled with hawkish intensity. Deployment frequency plummets accordingly and every deployment is a potpourri of changes that various stakeholders demand go live.</p>
<p>Good ol&rsquo; batch deploys take forever. People get burned out or fatigued and then naturally make mistakes. Or it&rsquo;s not their component and they don&rsquo;t have skin in the game<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup> and consequently are careless when handling it.</p>
<p>When failure does transpire, everyone&rsquo;s frustration inflames. It&rsquo;s either their component that failed and they&rsquo;re frustrated at the lack of care by their peers, or it&rsquo;s not their component and they&rsquo;re frustrated that they have to be on this stupid Zoom call until 04:00.</p>
<p>The answer is probably splitting the deploys out; the only reason not to do separate deploys is likely organizational process or dysfunction (see also: <a href="#disorganized-organization">Disorganized Organization</a>).</p>
<h4 id="65-accidentally-deploy-more-than-you-thought-you-did">65. Accidentally deploy more than you thought you did.</h4>
<p>You&rsquo;ve put a ton of work into automating your deployments. The automated tooling is effective and deploys exactly what you asked of it &ndash; but what you asked of it didn&rsquo;t match your expectations.</p>
<p>Perhaps you thought you were deploying a branch containing only a hotfix, but it was started from the wrong base branch. Or maybe you thought you were asking it to target only a few canary nodes, but accidentally rolled the whole fleet. Perhaps the automation <a href="https://web.archive.org/web/20201214102416/https://blog.etsy.com/news/2012/demystifying-site-outages/">tries its best to make all of the servers consistent by ensuring changes must be deployed in the same sequence</a>. Whatever it was, automation ruthlessly executed your command and now you&rsquo;re scrambling to recover.</p>
<p>In many organizations, it&rsquo;s difficult to justify improving the safety and user experience of internal tools since it doesn&rsquo;t directly affect customers and &ldquo;just&rdquo; leaves the system confusing for the engineers working with it. The silver lining is this outage will at least make the case that developer experience is important.</p>
<h4 id="66-zombie-hosts">66. Zombie hosts.</h4>
<p>Your new version operates under the assumption that the fleet is only running the new version and all instances speak the same protocol. But in reality, some hosts came back from the dead (i.e. maintenance) running an old version of the software after the deployment completed.</p>
<p>Now you have a zombie apocalypse on your hands with nothing to defend yourself but your laptop. You now regret choosing the ultraportable version rather than the hefty tank boi. And just like zombies, zombie hosts can sneak up on you when you least expect it, long after your deployment is complete when the post-apocalyptic landscape that is your prod environment seems almost serene.</p>
<h4 id="67-running-out-of-cloud-resources">67. Running out of cloud resources.</h4>
<p>One fine morning, you discover you&rsquo;ve run out of the specific instance type your service needs. Like, there are <em>literally</em> no more i3.16xlarge instances that exist for you to purchase in this universe (or possibly just the availability zone).</p>
<p>It turns out you are their largest customer, which, of course, the vendor never made clear for strategic reasons. Scaling beyond the capabilities of a vendor inevitably results in downtime. Either you convince the vendor to <a href="https://en.wiktionary.org/wiki/git_gud">git gud</a> or you patch to make the app creak along as you frantically build a migration path to a substitute, disrupting the roadmap in the process.</p>
<p>Or, on a Zoom meeting with a bloated attendee list, a dev notes that the app is slower: &ldquo;I refactored the code to make it easier to read, but now it&rsquo;s slower, so we need 3x the servers to run it.&rdquo; You swallow bile. <a href="https://www.youtube.com/watch?v=Nl_Qyk9DSUw">Lucille Bluth asks</a> in your head, “How much could one server cost, Michael?”</p>
<p>If you have rollbacks, you should be fine. If you have autoscaling, you can just pay to address this problem. But nothing can help you automatically scale your tolerance to bullshit or rollback your life choices.</p>
<h4 id="68-proactively-overloading-your-systems">68. Proactively overloading your systems.</h4>
<p>Scaling one part of the system puts pressure on other parts&hellip; and now they&rsquo;re failing. You now must deal with an outage somewhere you weren&rsquo;t expecting, all because you were proactive in anticipating capacity you&rsquo;d need in the future. Worse, if that capacity is required right this millisecond, you face the dilemma of choosing which part of the system to sacrifice temporarily while you figure out how to fix the bottleneck.</p>
<h2 id="manual-deploys">Manual Deploys</h2>
<h4 id="69-manual-deploys">69. Manual deploys.</h4>
<p>Manual deploys are truly terrible. If there is a villain in the story of DevOps, it is manual deploys. They are not the serpent in the garden promising forbidden knowledge. Manual deploys are the Diablo boss that probably smells like rotten onions and toe fungus IRL and whose only purpose is to destroy any and all life.</p>
<p>Not convinced yet? Here are reasons A through Z to stop living in Clown Town. Each should be enough to convince you to automate at least the tedious parts of your deploys. Please, we beg you on behalf of humanity and reason, automate all the repetitive tasks you can, even if your org has an aversion to it. Humans are not meant for executing the same thing the same way every time.</p>
<p><strong>A</strong>n engineer walks into a bar, has two beers, and now is deploying to the entire cluster as they order a third. The bartender says, “You know, if you used an orchestrator, you could order something stronger.” That bartender’s name? Q. Burr-Netty.</p>
<p><strong>B</strong>ackups of the database probably don’t work. Every time you take a snapshot, it’s someone reading the docs off a DigitalOcean post on how to back up MySQL.</p>
<p><strong>C</strong>opy pasta is always served with failsauce. Copying a config from an existing build to a new one, then forgetting to change the version number. Copying SSH authorized keys between machines… and if you’re managing them like that, it’s probably append-only which means your old ops people still have access to your prod servers.</p>
<p><strong>D</strong>isk management as a matryoshka doll of disasters: capacity management, failing to provision enough space<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>, IOPS management, SAN management and all the babysitting required for distributed disks, we probably don’t need to go on.</p>
<p><strong>E</strong>xpiration of certificates or domains, the tech tragicomedy. You know this will happen again in a year. You see the rhino charging towards you in the distance but there&rsquo;s always something more urgent to do until it&rsquo;s too late.</p>
<p><strong>F</strong>orget to smoke test the whole environment. You perform manual tests but they <a href="https://www.bungie.net/en/News/Article/48723">only hit the &ldquo;good&rdquo; servers</a>. Luck favors the automated.</p>
<p><strong>G</strong>eoDNS routing with manual region switching so you can take down a data center and update it without any traffic… but actually DNS takes a while to propagate so you still have a trickle of traffic coming in (does anyone care that much about those lost requests?).</p>
<p><strong>H</strong>andling hardware failures is nigh on impossible. Are your systems even failing over?</p>
<p><strong>I</strong>mproper sequence when deploying components. Just like your dance moves, the order of your deploy steps is all wrong.</p>
<p><strong>J</strong>umpbox that people use as a dumping ground for random assets they need in prod, like random JAR files or Debian packages, movies they torrent at the office that they want to get on their home machine, random database dumps that people need for various purposes…</p>
<p><strong>K</strong>een to have the deploy done, you do not wait for changes to propagate, the cache to become warm, nor the system to become healthy. “No, sir, the engineer really worth having won&rsquo;t wait for anybody.” ~ F. Scoff Gitzgerald<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup></p>
<p><strong>L</strong>onesome server runs the wrong version because you forgot to update all the servers. Or you forgot one region when doing multi-region updates.</p>
<p><strong>M</strong>ismatched component versions. It&rsquo;s very easy to do when you&rsquo;re slinging deploys manually and how many database servers do we have again? Is Tantalum down or decommissioned? This IP naming scheme makes no sense. Is it even a database server?</p>
<p><strong>N</strong>ot copying code to all of the servers and not removing the old code from them, leading to conflicts worse than the tantrums on your executive team.</p>
<p><strong>O</strong>verlook which environment you’re in. If it happens, it’s probably a process failure. It’s <a href="https://about.gitlab.com/blog/2017/02/10/postmortem-of-database-outage-of-january-31/">an easy thing to overlook</a>, so there should be a lot more processes in place to stop someone from accidentally farting about in prod. Ideally, you shouldn’t even be able to make this mistake.</p>
<p><strong>P</strong>rovision users manually. Not only is it a pain in the ass, it is also fraught with peril.</p>
<p><strong>Q</strong>uarrels between IP addresses and hostnames that rival a Real Housewives reunion special.</p>
<p><strong>R</strong>otate the password or keys, but forget to update <a href="https://www.traviscistatus.com/incidents/khzk8bg4p9sy">the service config with the new password</a>. You rotate the password, so of course you have to update the config, but there may be numerous configs and <a href="https://www.traviscistatus.com/incidents/khzk8bg4p9sy">it can be easy to miss one</a> if it’s not documented or automated.</p>
<p><strong>S</strong>moke tests aren’t performed after manual production deploy. If you’re doing deploys the wrong way (i.e. the manual way), smoke tests are a way to mitigate some of the issues &ndash; but you must remember to actually conduct them.</p>
<p><strong>T</strong>rusting that your on-call team will be paged despite never testing the paging plan.</p>
<p><strong>U</strong>pdating the monitoring system is overlooked. If you autoscale, the system managing the autoscaling will self-monitor the hosts. If you add a host manually to a system that doesn&rsquo;t autoscale, you probably want the system to register with the agent that’s supposed to do the monitoring.</p>
<p><strong>V</strong>PN that is a single-point-of-failure and held together with duct tape and twine. The VPN is required to access the network to do the deploys but apparently making it not suck is not required.</p>
<p><strong>W</strong>ait for DNS propagation? Who has time for that?</p>
<p><strong>X</strong>11 and RDP-based deploys where a tired sysadmin remotely logs into the virtual desktop of a system that shouldn&rsquo;t even have a graphical environment and haphazardly drags files around until the new release is live. The commands can&rsquo;t even be audited because there were no commands, only mouse movements.</p>
<p><strong>Y</strong>our sysadmin does maintenance on the database so that it can stay up, but in the morning you discover the settings they’ve changed cause the database to no longer run its background maintenance processes and you’ve just deferred your downtime until later.</p>
<p><strong>Z</strong>IP or JAR file is copied from the developer’s laptop and now you have no record of what was deployed.</p>
<hr>
<p>Thank you to the following co-conspirators for their contributions to this list: C. Scott Andreas, Matthew Baltrusitis, Zac Duncan, Dr. Nicole Forsgren, Bea Hughes, Kyle Kingsbury, Toby Kohlenberg, Ben Linsay, Caitie McCaffrey, Mikhail Panchenko, Alex Rasmussen, Leif Walsh, Jordan West, and Vladimir Wolstencroft.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>This brings to mind Vonnegut’s advice of <em>“Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.”</em>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>As “Duskin” rightly noted <a href="https://kellyshortridge.com/blog/posts/when-something-disappears-from-the-internet/">in an investigation of a fire at an ammonia plant back in 1979</a>: “If you depend only on well-trained operators, you may fail.”&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>These sorts of seldom-used libraries are much less likely to be poisoned than the mainstream libraries which occasionally have CVEs, but infosec folks ambulance chase off them until our sanity is flattened and bloodied like roadkill.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>And why should traffic in pre-prod be as high as prod? Replaying all traffic to pre-production all the time is expensive af! So it&rsquo;s a reasonable assumption, in isolation.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>but oh honey why are you performance testing an option that&rsquo;s faster than what you&rsquo;ll actually deploy??&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p><a href="https://www.baeldung.com/linux/docker-build-cache">The options</a> are even more misleading than you might expect. <code>--no-cache</code> only inhibits the cache for layers created by the Dockerfile and does not skip the image cache. You need <code>--pull</code> to skip the image cache.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Usually linkers order the objects they&rsquo;re instructed to link by the order they&rsquo;re presented. If you specify the order, you&rsquo;ll always get the same order. If you have Make or whatever build system you&rsquo;re using send the linker all the .o files in the directory, it will send them in the order the filesystem lists them, which can change depending on some internal filesystem properties (usually what order their metadata was last written). Usually it doesn&rsquo;t matter, but maybe the code has some undefined behavior based on the layout of the code itself. Maybe there are static initializers that get run in a different order and some data structure is corrupted before the program even starts doing anything useful.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>Action bias is a bitch. See also a recent paper I co-authored: <a href="https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf">Opportunity Cost of Action Bias in Cybersecurity Incident Response</a>&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>I did not have to come at myself this hard. (That&rsquo;s what they said).&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>The original quote by Helmuth von Moltke is &ldquo;No plan of operations reaches with any certainty beyond the first encounter with the enemy&rsquo;s main force.&rdquo; from <em>Kriegsgeschichtliche Einzelschriften</em> (1880). <a href="https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00007547">It is commonly quoted</a> as &ldquo;No plan survives first contact with the enemy.&rdquo;&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>&ldquo;Skin in the game&rdquo; is such a strange idiom. It makes me think of skeletonless fleshlings flailing around on a football pitch trying to flop wobbling meatflaps at the ball. Neurotypical lingo never ceases to amaze.&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>You might think that with the growth of data collection, machine learning, and other <del>flagrant privacy violations</del> business intelligence practices, data storage is the primary dimension of capacity planning. This is often not the case. In the last decade or so, capacity has grown phenomenally but throughput and latency have not kept pace. As a result, IOPS and throughput are more commonly the bottleneck that needs planning while storage capacity is overprovisioned. On the cloud, allocated throughput and IOPS are assigned based on volume size, so it&rsquo;s common to see vast overprovisioning of volume size to realize sufficient IOPS. It also occurs on storage SANs, where the number and capacity of disks are selected to match the required sustained read and write rates. All of this is phenomenally complicated but as a first approximation, IOPS and throughput matter more than storage capacity for many use cases.&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>Paraphrased from Chapter 2 of <em>This Side of Paradise</em> by F. Scott Fitzgerald: <a href="https://www.bartleby.com/115/22.html">https://www.bartleby.com/115/22.html</a>&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Attackers have better things to do than corrupt your builds</title>
            <link>https://kellyshortridge.com/blog/posts/attackers-have-better-things-to-do-than-corrupt-your-builds/</link>
            <pubDate>Thu, 30 Mar 2023 08:00:22 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/attackers-have-better-things-to-do-than-corrupt-your-builds/</guid>
            <description>
The cybersecurity discourse is, of late, festooned with fear mongering about vulnerabilities in build pipelines. If an attacker exploits a vuln in our build pipeline, are we doomed? No, because it’s pointless for them to do so. But there is a real problem revealed by this clucking and clamoring: many security professionals (and vendors) don’t know how build pipelines work.
The twisted security tale they’ve spun is: One horrible day, our build infrastructure reads attacker-controlled data that triggers exploitation of a vulnerability. Yet, to achieve this, the attacker must gain access to our build system; if they can access the build system, they can change what it does and what gets built. Why do they need to exploit a vulnerability when they’ve already cinched their victory? Even male peacocks aren’t this wasteful.
Here’s how the real story unfolds. I, a nefarious attacker, want to corrupt the software builds coming out of BlandCorp’s GitHub Actions build farm. I’m already versed in how most build processes work at modern enterprises because attacking them is part of my job1.
BlandCorp, like many enterprises, runs the Actions runner inside numerous pods on Kubernetes (or another build runner inside build infrastructure). These pods receive builds from BlandCorp’s GitHub server and then run the build steps that are specified in the target repository, such as:
checkout the code; install the language toolchain; fetch the dependencies; build the software; run the automated tests; upload the resulting artifact2.
Build pipelines are not like a web application where there’s pervasive interactivity or it takes input by design. Build infrastructure is designed to grab the source artifacts, perform some work to verify and transform those source artifacts, and then deploy the results of that work somewhere.
The main interaction with the outside world is grabbing the artifacts. Build pipelines don’t ingest form fields or input from a command line; a build pipeline does its thing very well but its thing, in the grand scheme of things, is limited.
If there’s a vulnerability in the build pipeline I want to exploit as an attacker, I must find a way to interact with it. This interactivity is designed to be impossible – a testament to the efficacy of design-based solutions for security and reliability. Security vendors will not tell you this for somewhat obvious reasons. Vendors want to scare you about build pipeline vulnerabilities because if it were possible to exploit them, it would be dire, and they want you to pay them to soothe your fears.
If not through exploitation, then how does the story unfold? Imagine I have a code execution vuln I want to exploit. If I can change the data, I can already commit code – so I may as well write code that does what I want. As Raymond Chen said decades ago, “You shouldn’t be surprised that allowing people to run code lets them run code.”
Or, if I can change the software that runs the build runners, I can replace it with a malicious version. I don’t need to exploit a vulnerability at all because I already have the access I want to gain control over BlandCorp’s build infrastructure.
So, as an attacker, I can ferret my way into BlandCorp’s build infrastructure through three primary paths:
tampering the source code; substituting the dependencies or language toolchain; corrupting the underlying runner that performs the work.
How do I reason about these paths as an attacker?
Path #1: Tampering the source code The most direct and obvious way for me to tamper the source code is to commit new code to whatever I want to tamper, like a component that will be built by BlandCorp’s build runner. This is also likely the least stealthy way to compromise a build pipeline.
I, as an attacker, cannot simply submit a pull request (PR) with my malicious modification; or maybe I can, but it’s very unlikely to be approved by a human involved with the project. BlandCorp likely has branch protection, too, which prevents me from force pushing my malicious code. This displeases me as an attacker.
Path #2: Substituting the dependencies or language toolchain Next is the most expensive path. The dependencies and language toolchain are where, as an attacker, I can inject data in the build process (like substituting or replacing the dependencies). But BlandCorp’s runner, like any runner, will fetch dependencies from their upstream locations on the internet and cryptographically verify them to ensure they match what developers expect.
Thus, to interlope with this software, I must incinerate tens of millions of dollars of CPU time to find a hash collision and meddle-in-the-middle the build workers. As an attacker, this also displeases me.
Path #3: Corrupting the underlying runner that performs the work The runner that performs all this work in the build process is not cryptographically verified. But if we trust GitHub (or an equivalent vendor) to store our source code, we should trust them to be able to run it, too. Verifying the underlying infrastructure and keeping it safe is a lot of work. If we do it ourselves instead, we don’t gain any additional assurance that tampering hasn’t occurred.
To corrupt the underlying runner, I (the attacker) must invest ample time, money, and cognitive effort to either:
1. Compromise BlandCorp, who maintains a stuffy “no SaaS allowed” policy and thus self-hosts; if I compromise BlandCorp to gain enough access to their self-hosted build infra to tamper with it, I’m already deep inside BlandCorp (so no need to exploit a vuln unless I want to flirt with future incident responders). 2. Compromise GitHub itself (or an equivalent vendor), specifically in a way that allows me to successfully modify the GitHub Actions code or infrastructure as befits my devious schemes.
For either option, I can social engineer a developer or admin to poach their credentials or gain access to their machine, from which I can pivot (with varying degrees of difficulty depending on their IAM architecture). In the CircleCI compromise, attackers stole customers’ keys in this fashion (by pwning a CircleCI dev’s laptop) – a terrifying scenario for customers. But, for the purposes of this post, it’s worth noting the attackers didn’t corrupt the underlying runner because they already accessed the resources they wanted and why pursue something harder?3
I’m not spending tens of millions of dollars in either case, but this option likely leaves me wanting something easier as an attacker.
The caveat But ay, here’s the rub4. BlandCorp might take shortcuts or they might use vendors that take shortcuts. One of those shortcuts is disrupting the verification steps – or not applying them to some component included in the build.
What do these verification steps involve? The worker (the tool in the build step that downloads dependencies) verifies a cryptographic hash provided in the application’s source code, usually right after the asset is downloaded and before it’s extracted or used (see path #1). The cryptographic hash is stored in a lock file that is versioned alongside the application source code.5
So far, so good. Let’s zoom out to the build steps themselves. Each CI system has its own little language for describing how to build a project. These CI systems want to bequeath us the freedom to build whatever we want, like building something custom with a bash script. But this freedom allows us to do things we shouldn’t do, like download unverified files from the internet and run them.
Thus, the special trust you must maintain is that whoever writes the build steps doesn’t include randomly fetching data from the internet in those steps. Generally, they don’t – it’s very uncommon to do that because it’s frowned upon by all parties but also because verifying things is the default. You’d have to go out of your way as a developer to write build steps that download data unverified from a remote location. So, you know, don’t.
It only takes a single step of “download data and install it, without verifying it” to poison your builds – as we witnessed in the CodeCov compromise. CodeCov offered an install process of “copy this line of code into your build pipeline” – specifically bash &lt;(curl -s https://codecov.io/bash) – and that line of code (now deprecated, but in use at the time) downloads a file from their website and runs it. No one likes this.
The reason why security professionals detest this installation process is obvious; they are paranoid, even when unwarranted, so they distrust most code downloads. But software engineers also dislike this form of installation process because it destabilizes and jeopardizes reliability.
Security is a subset of software quality I mention reliability because reliability is the reason why build systems are designed this way – a way that frustrates attackers by design. Engineers want to ensure that when they make a build, test it and deem it correct, then deploy another build to production, that the second build won’t be meaningfully different from the first one. Security may not serve as the primary motivator, but it benefits from this stringent reliability requirement.
Much of what we seek from a security perspective is enveloped by reliability. Security is ultimately a subset of software quality. This is a lesson that more security professionals should heed, especially those that protest that software engineers “don’t care about security.”
Reliability is also why many software engineers feel like the less security teams meddle in the build process (and other parts of software delivery), the better – the higher quality, more secure – it would be. Many of the things I read about “securing” build pipelines are half-baked and result in less reliable software, which means less secure software.
Is adding more things with opaque and unverified steps in your build pipelines a good thing? Check your security vendors’ install processes, too; how many used (or still use) CodeCov’s same approach to shove their scanners and wares into your pipelines? Glass houses, pots and kettles, etc.
Instead of barking up errant trees, security professionals should seek opportunities to invest in reliability with auxiliary security benefits so everyone wins. When we propose security “solutions” that destabilize reliability – like some newer security solutions requiring you to completely renovate your build pipelines to accommodate them – our colleagues will be baffled by our audacity. Understand the thing you are trying to “secure” before you thrust yourself in it.
If a security professional isn’t familiar with reliability in the context of software, that’s an urgent problem. If we hope to “secure” the software delivery process, we need to understand the innovations around reliability – whether site reliability engineering or software quality – that enable all this high-falootin’ modern software stuff in the first place.
Start with the wiki on reproducible builds. Build reproducibility is something we care about for reliability purposes, but most cybersecurity teams today aren’t equipped to assess it. That must change.
Tech leaders and software engineers should consider where reliability investments may impart security benefits. We should teach our security colleagues about our software reliability efforts – especially how these investments exasperate attackers and impede their objectives (by skyrocketing the effort attackers must invest). Brainstorm how to further exacerbate attacker frustrations through these innovations.
Create a decision tree of how attackers might compromise your build infrastructure and capture existing design-based mitigations (as described above). It may spare you from unreasonable demands to fix CriTiCaL SuPeR uRgEnT bugs that can’t be exploited through any reasonable means.
My hope is that both communities can find common ground by thinking more about security solutions by design – but that starts with understanding our organizations’ systems and what purpose they serve. Otherwise, I fear the entrenched “vuln scan all the things” monomania will deepen and waste our precious time and effort on tilting at windmills.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems, available at Amazon, Bookshop, and other major retailers online.
Thanks to Alex Rasmussen, C. Scott Andreas, Camille Fournier, Leif Walsh, and Ryan Petrich for feedback.
If only security people took this understanding as seriously as attackers. ↩︎
If BlandCorp integrates security cruft into their build pipelines, there might be steps like: run the vuln scanner or generate the SBOM ticket (alas). ↩︎
This is why I’ve been saying for quite a few years now that IAM is the hardest security problem related to modern infra. Most security vendors in that area are not very helpful (especially the Identity Posture Hygiene Surface ones). Solutions like time-based access feel more promising. ↩︎
Of course a nod to Hamlet, Act III, Scene I https://poets.org/poem/hamlet-act-iii-scene-i-be-or-not-be ↩︎
If you want to update a dependency on your local machine, your local build tooling (like npm install) automates the process for you. The engineer selects the new version they want and asks their local build tooling to install it for them. The local build tooling downloads the requested version from the upstream repository then records the new version and its cryptographic hash in the project’s lock file. The engineer commits this lock file – as well as any other changes needed to use the new version of the dependency – in their PR, which their peers will review before merging it into the codebase. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/hacking-ur-build-pipelines.png" alt="A cyberpunk painting of a cat with goggles tampering with a pipeline in a data center. Everything is bathed in neon in shades of electric pink, vivid purple, shocking cyan, and lime green. The hacker cat looks intent, one paw on the pipeline to ensure all the fluid bits are pilfered into its canister."></p>
<p>The cybersecurity discourse is, of late, festooned with fear mongering about vulnerabilities in build pipelines. If an attacker exploits a vuln in our build pipeline, are we doomed? No, because it’s pointless for them to do so. But there is a real problem revealed by this clucking and clamoring: many security professionals (and vendors) don’t know how build pipelines work.</p>
<p>The twisted security tale they’ve spun is: One horrible day, our build infrastructure reads attacker-controlled data that triggers exploitation of a vulnerability. Yet, to achieve this, the attacker must gain access to our build system; if they can access the build system, they can change what it does and what gets built. Why do they need to exploit a vulnerability when they’ve already cinched their victory? Even male peacocks aren’t this wasteful.</p>
<p>Here’s how the real story unfolds. I, a nefarious attacker, want to corrupt the software builds coming out of BlandCorp’s GitHub Actions build farm. I’m already versed in how most build processes work at modern enterprises because attacking them is part of my job<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>BlandCorp, like many enterprises, runs the Actions runner inside numerous pods on Kubernetes (or another build runner inside build infrastructure). These pods receive builds from BlandCorp’s GitHub server and then run the build steps that are specified in the target repository, such as:</p>
<ul>
<li>checkout the code</li>
<li>install the language toolchain</li>
<li>fetch the dependencies</li>
<li>build the software</li>
<li>run the automated tests</li>
<li>upload the resulting artifact<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></li>
</ul>
<p>Build pipelines are not like a web application where there’s pervasive interactivity or it takes input by design. Build infrastructure is designed to grab the source artifacts, perform some work to verify and transform those source artifacts, and then deploy the results of that work somewhere.</p>
<p>The main interaction with the outside world is grabbing the artifacts. Build pipelines don’t ingest form fields or input from a command line; a build pipeline does its thing very well but its thing, in the grand scheme of things, is limited.</p>
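<p>To make the shape of this concrete, here is a minimal sketch of a runner&rsquo;s entire job &ndash; in Python with hypothetical commands and paths rather than any real runner&rsquo;s configuration format. Note what is absent: there is no request handler, no form field, no prompt. The only external data it touches is the source it was told to fetch.</p>
<pre><code class="language-python"># A toy model of a build runner's control flow (hypothetical commands/paths).
# The pipeline fetches source, transforms it, and ships the result -- there is
# no input surface for an outsider to poke at.
import subprocess

def run(step: list[str], cwd: str = ".") -> None:
    print("::", " ".join(step))
    subprocess.run(step, cwd=cwd, check=True)   # any failure stops the build

def build(repo_url: str, ref: str) -> None:
    run(["git", "clone", "--depth=1", "--branch", ref, repo_url, "src"])
    run(["npm", "ci"], cwd="src")               # deps pinned by the lock file
    run(["npm", "run", "build"], cwd="src")
    run(["npm", "test"], cwd="src")
    run(["aws", "s3", "cp", "dist/app.tgz",
         "s3://blandcorp-artifacts/"], cwd="src")
</code></pre>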
<p>If there’s a vulnerability in the build pipeline I want to exploit as an attacker, I must find a way to interact with it. This interactivity is designed to be impossible – a testament to the efficacy of design-based solutions for security and reliability. Security vendors will not tell you this for somewhat obvious reasons. Vendors want to scare you about build pipeline vulnerabilities because if it were possible to exploit them, it would be dire, and they want you to pay them to soothe your fears.</p>
<p>If not through exploitation, then how does the story unfold? Imagine I have a code execution vuln I want to exploit. If I can change the data, I can already commit code – so I may as well write code that does what I want. As <a href="https://devblogs.microsoft.com/oldnewthing/20070706-00/?p=26123">Raymond Chen said</a> decades ago, “You shouldn’t be surprised that allowing people to run code lets them run code.”</p>
<p>Or, if I can change the software that runs the build runners, I can replace it with a malicious version. I don’t need to exploit a vulnerability at all because I already have the access I want to gain control over BlandCorp’s build infrastructure.</p>
<p>So, as an attacker, I can ferret my way into BlandCorp’s build infrastructure through three primary paths:</p>
<ol>
<li>tampering the source code</li>
<li>substituting the dependencies or language toolchain</li>
<li>corrupting the underlying runner that performs the work</li>
</ol>
<p>How do I reason about these paths as an attacker?</p>
<h4 id="path-1-tampering-the-source-code">Path #1: Tampering the source code</h4>
<p>The most direct and obvious way for me to tamper the source code is to commit new code to whatever I want to tamper, like a component that will be built by BlandCorp’s build runner. This is also likely the least stealthy way to compromise a build pipeline.</p>
<p>I, as an attacker, cannot simply submit a pull request (PR) with my malicious modification; or maybe I can, but it’s very unlikely to be approved by a human involved with the project. BlandCorp likely has <a href="https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches">branch protection</a>, too, which prevents me from force pushing my malicious code. This displeases me as an attacker.</p>
<h4 id="path-2-substituting-the-dependencies-or-language-toolchain">Path #2: Substituting the dependencies or language toolchain</h4>
<p>Next is the most expensive path. The dependencies and language toolchain are where, as an attacker, I can inject data in the build process (like substituting or replacing the dependencies). But BlandCorp’s runner, like any runner, will fetch dependencies from their upstream locations on the internet and cryptographically verify them to ensure they match what developers expect.</p>
<p>Thus, to interlope with this software, I must incinerate tens of millions of dollars of CPU time to find a hash collision <em>and</em> meddle-in-the-middle the build workers. As an attacker, this also displeases me.</p>
<h4 id="path-3-corrupting-the-underlying-runner-that-performs-the-work">Path #3: Corrupting the underlying runner that performs the work</h4>
<p>The runner that performs all this work in the build process is not cryptographically verified. But if we trust GitHub (or an equivalent vendor) to store our source code, we should trust them to be able to run it, too. Verifying the underlying infrastructure and keeping it safe is a lot of work. If we do it ourselves instead, we don’t gain any additional assurance that tampering hasn’t occurred.</p>
<p>To corrupt the underlying runner, I (the attacker) must invest ample time, money, and cognitive effort to either:</p>
<ol>
<li>Compromise BlandCorp, who maintains a stuffy “no SaaS allowed” policy and thus self-hosts; if I compromise BlandCorp to gain enough access to their self-hosted build infra to tamper with it, I’m already deep inside BlandCorp (so no need to exploit a vuln unless I want to flirt with future incident responders)</li>
<li>Compromise GitHub itself (or an equivalent vendor), specifically in a way that allows me to successfully modify the GitHub Actions code or infrastructure as befits my devious schemes.</li>
</ol>
<p>For either option, I can social engineer a developer or admin to poach their credentials or gain access to their machine, from which I can pivot (with varying degrees of difficulty depending on their IAM architecture). In the <a href="https://circleci.com/blog/jan-4-2023-incident-report/">CircleCI compromise</a>, attackers stole customers&rsquo; keys in this fashion (by pwning a CircleCI dev&rsquo;s laptop) &ndash; a terrifying scenario for customers. But, for the purposes of this post, it&rsquo;s worth noting the attackers didn&rsquo;t corrupt the underlying runner because they already accessed the resources they wanted and why pursue something harder?<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup></p>
<p>I’m not spending tens of millions of dollars in either case, but this option likely leaves me wanting something easier as an attacker.</p>
<h2 id="the-caveat">The caveat</h2>
<p>But ay, here’s the rub<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>. BlandCorp might take shortcuts or they might use vendors that take shortcuts. One of those shortcuts is disrupting the verification steps – or not applying them to some component included in the build.</p>
<p>What do these verification steps involve? The worker (the tool in the build step that downloads dependencies) verifies a cryptographic hash provided in the application’s source code, usually right after the asset is downloaded and before it’s extracted or used (see path #1). The cryptographic hash is stored in <a href="https://blog.shalvah.me/posts/understanding-lockfiles">a lock file</a> that is versioned alongside the application source code.<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup></p>
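<p>Stripped to its essentials, the verification step looks something like the sketch below (a toy lock file format and hypothetical helper names; real package managers do this for you automatically):</p>
<pre><code class="language-python"># What "verify against the lock file" boils down to: hash what you downloaded,
# compare it to the hash recorded alongside the source code, and refuse to use
# the artifact on any mismatch. (Toy lock file format, for illustration only.)
import hashlib
import json
import urllib.request

def fetch_verified(name: str, lockfile_path: str = "lock.json") -> bytes:
    with open(lockfile_path) as f:
        entry = json.load(f)[name]          # e.g. {"url": ..., "sha256": ...}
    data = urllib.request.urlopen(entry["url"]).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != entry["sha256"]:
        raise RuntimeError(f"{name}: hash mismatch, refusing to use artifact")
    return data                             # only now is it safe to extract
</code></pre>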
<p>So far, so good. Let’s zoom out to the build steps themselves. Each CI system has its own little language for describing how to build a project. These CI systems want to bequeath us the freedom to build whatever we want, like building something custom with a bash script. But this freedom allows us to do things we shouldn’t do, like download unverified files from the internet and run them.</p>
<p>Thus, the special trust you must maintain is that whoever writes the build steps <strong>doesn’t include randomly fetching data from the internet</strong> in those steps. Generally, they don’t – it’s very uncommon to do that because it’s frowned upon by all parties but also because verifying things is the default. You’d have to go out of your way as a developer to write build steps that download data unverified from a remote location. So, you know, don’t.</p>
<p>It only takes a single step of “download data and install it, without verifying it” to poison your builds – as we witnessed in <a href="https://about.codecov.io/security-update/">the CodeCov compromise</a>. CodeCov offered an install process of “copy this line of code into your build pipeline” – specifically <code>bash &lt;(curl -s https://codecov.io/bash)</code> – and that line of code (now deprecated, but in use at the time) downloads a file from their website and runs it. No one likes this.</p>
<p>The reason why security professionals detest this installation process is obvious; they are paranoid, even when unwarranted, so they distrust most code downloads. But software engineers also dislike this form of installation process because it destabilizes and jeopardizes reliability.</p>
<h2 id="security-is-a-subset-of-software-quality">Security is a subset of software quality</h2>
<p>I mention reliability because reliability is the reason why build systems are designed this way – a way that frustrates attackers by design. Engineers want to ensure that when they make a build, test it and deem it correct, then deploy another build to production, that the second build won’t be meaningfully different from the first one. Security may not serve as the primary motivator, but it benefits from this stringent reliability requirement.</p>
<p>Much of what we seek from a security perspective is enveloped by reliability. <strong>Security is ultimately a subset of software quality.</strong> This is a lesson that more security professionals should heed, especially those that protest that software engineers “don’t care about security.”</p>
<p>Reliability is also why many software engineers feel like the less security teams meddle in the build process (and other parts of software delivery), the better – the higher quality, more secure – it would be. Many of the things I read about “securing” build pipelines are half-baked and result in less reliable software, which means less secure software.</p>
<p>Is adding more things with opaque and unverified steps in your build pipelines a <em>good</em> thing? Check your security vendors’ install processes, too; how many used (or still use) CodeCov’s same approach to shove their scanners and wares into your pipelines? Glass houses, pots and kettles, etc.</p>
<p>Instead of barking up errant trees, security professionals should seek opportunities to invest in reliability with auxiliary security benefits so everyone wins. When we propose security “solutions” that destabilize reliability – like some newer security solutions requiring you to completely renovate your build pipelines to accommodate them – our colleagues will be baffled by our audacity. Understand the thing you are trying to “secure” before you thrust yourself in it.</p>
<p>If a security professional isn’t familiar with reliability in the context of software, that’s an urgent problem. If we hope to “secure” the software delivery process, we need to understand the innovations around reliability – whether site reliability engineering or software quality – that enable all this high-falootin’ modern software stuff in the first place.</p>
<p>Start with the wiki on <a href="https://en.wikipedia.org/wiki/Reproducible_builds">reproducible builds</a>. Build reproducibility is something we care about for reliability purposes, but most cybersecurity teams today aren’t equipped to assess it. That must change.</p>
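<p>A crude smoke test for reproducibility is within reach of any team: build twice, hash the artifact each time, and compare. The sketch below assumes your build is wrapped in a single command and emits one artifact (hypothetical command and path); real reproducible-builds work also normalizes timestamps, paths, and other sources of nondeterminism.</p>
<pre><code class="language-python"># Build twice, hash both artifacts, compare. If the hashes differ, something
# nondeterministic (timestamps, paths, unpinned deps) snuck into the build --
# a reliability problem with security side effects.
import hashlib
import subprocess

def build_and_hash(cmd: str = "make build", artifact: str = "dist/app.tgz") -> str:
    subprocess.run(cmd.split(), check=True)
    with open(artifact, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

first, second = build_and_hash(), build_and_hash()
print("reproducible" if first == second else "NOT reproducible", first, second)
</code></pre>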
<p>Tech leaders and software engineers should consider where reliability investments may impart security benefits. We should teach our security colleagues about our software reliability efforts – especially how these investments exasperate attackers and impede their objectives (by skyrocketing the effort attackers must invest). Brainstorm how to further exacerbate attacker frustrations through these innovations.</p>
<p>Create a <a href="https://github.com/rpetrich/deciduous">decision tree</a> of how attackers might compromise your build infrastructure and capture existing design-based mitigations (as described above). It may spare you from unreasonable demands to fix CriTiCaL SuPeR uRgEnT bugs that can’t be exploited through any reasonable means.</p>
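<p>The tree does not need to be fancy to be useful. Even a nested structure that pairs each attacker path with the design-based mitigations already in place (a sketch of the idea only &ndash; not Deciduous&rsquo;s actual input format) forces the right conversation:</p>
<pre><code class="language-python"># A back-of-napkin attack tree for build infrastructure: each attacker path is
# paired with the design-based mitigations already standing in the way.
# (Illustrative structure only -- not the Deciduous file format.)
build_attack_tree = {
    "goal": "corrupt BlandCorp's builds",
    "paths": [
        {"attack": "commit malicious code",
         "mitigations": ["human PR review", "branch protection"]},
        {"attack": "substitute a dependency or toolchain",
         "mitigations": ["lock file hash verification", "upstream TLS"]},
        {"attack": "corrupt the runner itself",
         "mitigations": ["vendor-hosted runner", "scoped IAM", "time-based access"]},
    ],
}

for path in build_attack_tree["paths"]:
    print(f'{path["attack"]:40} -> {", ".join(path["mitigations"])}')
</code></pre>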
<p>My hope is that both communities can find common ground by thinking more about security solutions by design – but that starts with understanding our organizations’ systems and what purpose they serve. Otherwise, I fear the entrenched “vuln scan all the things” monomania will deepen and waste our precious time and effort on tilting at windmills.</p>
<hr>
<p><em>Enjoy this post? You might like <a href="https://securitychaoseng.com/">my book</a>, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong>, available at <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<hr>
<p>Thanks to Alex Rasmussen, C. Scott Andreas, Camille Fournier, Leif Walsh, and Ryan Petrich for feedback.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>If only security people took this understanding as seriously as attackers.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>If BlandCorp integrates security cruft into their build pipelines, there might be steps like: run the vuln scanner or generate the SBOM ticket (alas).&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>This is why I&rsquo;ve been saying for quite a few years now that IAM is the hardest security problem related to modern infra. Most security vendors in that area are not very helpful (especially the Identity Posture Hygiene Surface ones). Solutions like <a href="https://segment.com/blog/access-service/">time-based access</a> feel more promising.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Of course a nod to <em>Hamlet, Act III, Scene I</em> <a href="https://poets.org/poem/hamlet-act-iii-scene-i-be-or-not-be">https://poets.org/poem/hamlet-act-iii-scene-i-be-or-not-be</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>If you want to update a dependency on your local machine, your local build tooling (like <code>npm install</code>) automates the process for you. The engineer selects the new version they want and asks their local build tooling to install it for them. The local build tooling downloads the requested version from the upstream repository then records the new version and its cryptographic hash in the project’s lock file. The engineer commits this lock file – as well as any other changes needed to use the new version of the dependency – in their PR, which their peers will review before merging it into the codebase.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Cyber Startup Buzzword Bingo: 2023 Edition</title>
            <link>https://kellyshortridge.com/blog/posts/cyber-startup-buzzword-bingo-2023/</link>
            <pubDate>Mon, 13 Mar 2023 12:00:00 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/cyber-startup-buzzword-bingo-2023/</guid>
            <description>In 2023, cybersecurity is chasing the chimera of software doomsday devices. We can contemplate this windmill-tilting through the language cybersecurity startups use to describe their shelfware products.
You see, rather than reflecting on our inability to mitigate untrustworthy code by design – how wayward we strayed from the dreams of the first security thought leaders in the 70s and 80s1 – in 2023 we’ve decided that we must purchase at least seven tools shoved into developer workflows (with one more for each developer that procrastinates completing the 5 hour security awareness training); design regulatory obstacle courses navigable only by the most digitally transformed of golden retrievers (by mortals who will never have to traverse it themselves)2; and that to really shift left, we must strap dolphins with vulnerability scanners to echolocate bugs in undersea cables (we wanted to try with bats in data centers but general counsel shot us down).
We have decided that all of this performative pomp led by Captain Ahabs everywhere will definitely Fix Things This Time – and it shows in our buzzwords.
This edition of my annual Cyber Startup Buzzword Bingo elucidates the current zeitgeist through which buzzwords are most popular among cybersecurity startups.
I surveyed 100 infosec companies’ websites3, the vast majority of which are startups who raised VC funding in the past nine to twelve months or else are notable (like having booths in RSAC’s Early Stage Expo). The idea behind the bingo card is to take it with you on journeys through vendor halls, sales pitches, or startup websites and see whether you can replace your eyerolls and abyss-gazing with the surprise and delight of “Bingo!”.
Without further introduction, below is the 2023 Cyber Startup Buzzword Bingo card – read on if you want more analysis:
What words are growing in influence? All bolded buzzwords are on the rise (unless otherwise indicated).
We grew more sensitive this year, perhaps due to the intensifying focus on posture. Indeed, I’ve often marveled at the similarities between cybersecurity vendors and chiropractors.
We care more about CI/CD and APIs this year; even Kubernetes is mentioned more than endpoints, although I would waste an unreasonable amount of time watching security people try to explain what Kubernetes is.
The security industry is finally aware that developers exist, although they seem less enthused than Steve Ballmer was once upon a time (but maybe as sweaty about it).
Infosec also realized that workflows exist, which isn’t shocking given they seemingly remain unaware of the existence of UX. Does this mean cybersecurity is creeping its way into the modern era? Software engineers strongly suspect security vendors are still full of shit.
In 2023, there are three simple words cybersecurity vendors want to hear from buyers: “You complete me.” They want you to discover all the insights they have to share and really wish you would prioritize them over all the other vendors you could take to prom; they are the fabled local single eager to meet you.
Vendors believe we feel the need for speed, our security engine antsy to go faster. Given how disruptive most DevSecOps tools are to software velocity, I suspect these vendors might be yanking our supply chain.
I don’t quite know what to make of the fact that effective soared in popularity while accurate plummeted. Perhaps these security tools are less like documentary coverage and more like reality TV – and the commonality of indecent exposure strengthens the case.
Finally, the world will perish in the battle between zero trust and trusted. Hopefully a mushroom cloud won’t follow, else we must resume our society beneath the Earth’s surface, Fallout-style. Or maybe ChatGPT will take control first. Remember, the goal is to convince the AI overlords that you’re a pet, not cattle!
Which words are falling out of favor? All bolded buzzwords are on the decline.
Vendors suspect we care less about advanced, sophisticated, and zero-day attacks, which I can only hope is true because it’s about time we focus on the less sexy activity. Yet, I worry a new targeted buzzword threat is conspiring to rise…
While AI may be all the rage among VCs desperate to feel like they’re on the thoughtleading edge rather than the awkward outskirts of the dance floor, both it and ML are less popular in cyber startup product messaging this year. I guess it wasn’t as effortless as vendors assumed.
Vendors are now less unparalleled, unmatched, world-class, best-in-class, and enterprise-grade. How else were we supposed to understand their differentiators without those filler adjectives?? Jokes aside, I suspect this ultimately enhanced their messaging.
It is not a deep insight to recognize that some buzzy verbs seemingly flew too close to the sun last year: empower, enforce, optimize, orchestrate, and enrich declined considerably. Were they not powerful enough? Or did buyers’ eyeballs not find these verbs as seamless to digest as marketers hoped?
The term blindspots is thankfully falling out of favor, too; only 3 startups included it. If they aren’t going to be inclusive, then hopefully buyers won’t include them in their security stacks, either.
In general, I, for one, hope the decline of filler and fluff accelerates – but purpose-built buzzwords lurk on the horizon, as we shall see in the next section. Marketing pros are remarkably agile, despite the protestations by our sanity.
What words should we fear becoming A Thing? All bolded buzzwords seem to be emerging.
Trying to manifest their ideal buyer persona into the universe, vendors started to use the term no-brainer in 2023. We talk a lot about attack surface, but less about the ever-growing buzzword surface and how its sprawl has a corrosive effect on our cognition.
Perhaps late to the Marie Kondo hype, cyber vendors are leaning into the word minimal. Is it a sign of maturity or growth? Or is it the first rumblings of an ominous shadow creature from the depths of Buzzoria? Will we cling to the guardrails as we hear its thunderous steps approaching or shall we perish?
2023 also budded the buzzword no-code, which is a pithy way of summarizing what CISOs wish they could tell their software engineering teams.
Finally, cybersecurity vendors are simultaneously becoming more tailored and more holistic, perhaps belying their lack of coherent collective strategy – of course, other than to milk security buyers’ wallets as their foremost mission.
What will mortals on social media yell at me for not including? Every year, mortals are mad at reality and take it out on me. As always, what buzzwords vendors spew to you in meetings are not captured by my scraper and I can only shudder to imagine what the results would be if I scraped #Security #ThoughtLeader Linkedin posts.
I know current consultants and future consultants regulators are trying to make SBOMs happen but, like fetch, they have not yet happened. MITRE grew yet again but has never hit critical mass.
I was actually surprised that observability is flat year-over-year, given there’s a push on that buzzword by Big Buzzword Agriculture research analyst firms. Perhaps there are smudges on the single-pane-of-glass that occlude vendors’ vision.
Speaking of cloudy, multi-cloud is relatively flat from last year and still slim in usage. You might find this to be a remote possibility, but I assure you it’s true.
And, finally, as evidenced when getting too close to the crowds at cybersecurity conferences, hygiene remains niche.
MacKenzie, Donald. Mechanizing proof: computing, risk, and trust. MIT Press, 2004. ↩︎
Because all the other compliance standards have totally not distorted incentives and wasted countless time, attention, and effort… ↩︎
I did not scrape their entire website, only the main page and, if present, product/platform page. If buzzwords appear in blogs, for instance, that isn’t captured. The goal is to hone in on how cybersecurity startups presently present themselves to the market. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>In 2023, cybersecurity is chasing the chimera of software doomsday devices. We can contemplate this windmill-tilting through the language cybersecurity startups use to describe their <del>shelfware</del> products.</p>
<p>You see, rather than reflecting on our inability to mitigate untrustworthy code by design &ndash; how wayward we strayed from the dreams of the first security thought leaders in the 70s and 80s<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> &ndash; in 2023 we&rsquo;ve decided that we must purchase at least seven tools shoved into developer workflows (with one more for each developer that procrastinates completing the 5 hour security awareness training); design regulatory obstacle courses navigable only by the most digitally transformed of golden retrievers (by mortals who will never have to traverse it themselves)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>; and that to <em>really</em> shift left, we must strap dolphins with vulnerability scanners to echolocate bugs in undersea cables (we wanted to try with bats in data centers but general counsel shot us down).</p>
<p>We have decided that all of this performative pomp led by Captain Ahabs everywhere will definitely Fix Things This Time &ndash; and it shows in our buzzwords.</p>
<p>This edition of <a href="/posts/buzzword-bingo-all-editions/">my annual Cyber Startup Buzzword Bingo</a> elucidates the current zeitgeist through which buzzwords are most popular among cybersecurity startups.</p>
<p>I surveyed 100 infosec companies’ websites<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>, the vast majority of which are startups who raised VC funding in the past nine to twelve months or else are notable (like having booths in RSAC&rsquo;s Early Stage Expo). The idea behind the bingo card is to take it with you on journeys through vendor halls, sales pitches, or startup websites and see whether you can replace your eyerolls and abyss-gazing with the surprise and delight of “Bingo!”.</p>
<p>Without further introduction, below is the 2023 Cyber Startup Buzzword Bingo card – read on if you want more analysis:</p>
<p><img src="/blog/img/cyber-startup-buzzword-bingo-2023.png" alt="The 2023 edition of the cyber buzzword bingo card. The background is terrible cyber art that is roughly a head made up of code with glowing digital bits swirling around it. The head hovers over a purple and pink checkered floor; it&amp;rsquo;s all very cyber. In order from left to right, starting on the upper left side, the buzzwords included on the card are as follows. Posture. Engine. Coverage. DevSecOps. Sensitive. Faster. Discover. Full. API. Control. Modern. Complete. Platform, which is the center word of the bingo card. Workflows. Trusted. Prioritize. Zero Trust. Exposure. CI/CD. Surface. Developers. Single. Cloud. Insights. Supply Chain."></p>
<h2 id="what-words-are-growing-in-influence">What words are growing in influence?</h2>
<p><em>All bolded buzzwords are on the rise (unless otherwise indicated).</em></p>
<p>We grew more <strong>sensitive</strong> this year, perhaps due to the intensifying focus on <strong>posture</strong>. Indeed, I&rsquo;ve often marveled at the similarities between cybersecurity vendors and chiropractors.</p>
<p>We care more about <strong>CI/CD</strong> and <strong>APIs</strong> this year; even <strong>Kubernetes</strong> is mentioned more than <strong>endpoints</strong>, although I would waste an unreasonable amount of time watching security people try to explain what Kubernetes is.</p>
<p>The security industry is finally aware that <strong>developers</strong> exist, although they seem less enthused than <a href="https://www.youtube.com/watch?v=Vhh_GeBPOhs">Steve Ballmer was once upon a time</a> (but maybe as sweaty about it).</p>
<p>Infosec also realized that <strong>workflows</strong> exist, which isn&rsquo;t shocking given they seemingly remain unaware of the existence of UX. Does this mean cybersecurity is creeping its way into the <strong>modern</strong> era? Software engineers strongly suspect security vendors are still <strong>full</strong> of shit.</p>
<p>In 2023, there are three simple words cybersecurity vendors want to hear from buyers: &ldquo;You <strong>complete</strong> me.&rdquo; They want you to <strong>discover</strong> all the <strong>insights</strong> they have to share and really wish you would <strong>prioritize</strong> them over all the other vendors you could take to prom; they are the fabled local <strong>single</strong> eager to meet you.</p>
<p>Vendors believe we feel the need for <strong>speed</strong>, our security <strong>engine</strong> antsy to go <strong>faster</strong>. Given how disruptive most <strong>DevSecOps</strong> tools are to software velocity, I suspect these vendors might be yanking our <strong>supply chain</strong>.</p>
<p>I don&rsquo;t quite know what to make of the fact that <strong>effective</strong> soared in popularity while <strong>accurate</strong> plummeted. Perhaps these security tools are less like documentary <strong>coverage</strong> and more like reality TV &ndash; and the commonality of indecent <strong>exposure</strong> strengthens the case.</p>
<p>Finally, the world will perish in the battle between <strong>zero trust</strong> and <strong>trusted</strong>. Hopefully a mushroom <strong>cloud</strong> won&rsquo;t follow, else we must resume our society beneath the Earth&rsquo;s <strong>surface</strong>, Fallout-style. Or maybe ChatGPT will take <strong>control</strong> first. Remember, the goal is to convince the AI overlords that you&rsquo;re a pet, not cattle!</p>
<h2 id="which-words-are-falling-out-of-favor">Which words are falling out of favor?</h2>
<p><em>All bolded buzzwords are on the decline.</em></p>
<p>Vendors suspect we care less about <strong>advanced</strong>, <strong>sophisticated</strong>, and <strong>zero-day</strong> attacks, which I can only hope is true because it&rsquo;s about time we focus on the less sexy activity. Yet, I worry a new <strong>targeted</strong> buzzword <strong>threat</strong> is conspiring to rise&hellip;</p>
<p>While <strong>AI</strong> may be all the rage among VCs desperate to feel like they&rsquo;re on the thoughtleading edge rather than the awkward outskirts of the dance floor, both it and <strong>ML</strong> are less popular in cyber startup product messaging this year. I guess it wasn&rsquo;t as <strong>effortless</strong> as vendors assumed.</p>
<p>Vendors are now less <strong>unparalleled</strong>, <strong>unmatched</strong>, <strong>world-class</strong>, <strong>best-in-class</strong>, and <strong>enterprise-grade</strong>. How else were we supposed to understand their differentiators without those filler adjectives?? Jokes aside, I suspect this ultimately <strong>enhanced</strong> their messaging.</p>
<p>It is not a <strong>deep</strong> insight to recognize that some buzzy verbs seemingly flew too close to the sun last year: <strong>empower</strong>, <strong>enforce</strong>, <strong>optimize</strong>, <strong>orchestrate</strong>, and <strong>enrich</strong> declined considerably. Were they not <strong>powerful</strong> enough? Or did buyers&rsquo; eyeballs not find these verbs as <strong>seamless</strong> to digest as marketers hoped?</p>
<p>The term <strong>blindspots</strong> is thankfully falling out of favor, too; only 3 startups included it. If they aren&rsquo;t going to be inclusive, then hopefully buyers won&rsquo;t include them in their security stacks, either.</p>
<p>In general, I, for one, hope the decline of filler and fluff <strong>accelerates</strong> &ndash; but <strong>purpose-built</strong> buzzwords lurk on the horizon, as we shall see in the next section. Marketing pros are remarkably <strong>agile</strong>, despite the protestations by our sanity.</p>
<h2 id="what-words-should-we-fear-becoming-a-thing">What words should we fear becoming A Thing?</h2>
<p><em>All bolded buzzwords seem to be emerging.</em></p>
<p>Trying to manifest their ideal buyer persona into the universe, vendors started to use the term <strong>no-brainer</strong> in 2023. We talk a lot about attack surface, but less about the <strong>ever-growing</strong> buzzword surface and how its <strong>sprawl</strong> has a corrosive effect on our cognition.</p>
<p>Perhaps late to the Marie Kondo hype, cyber vendors are leaning into the word <strong>minimal</strong>. Is it a sign of <strong>maturity</strong> or <strong>growth</strong>? Or is it the first rumblings of an ominous <strong>shadow</strong> creature from the depths of Buzzoria? Will we cling to the <strong>guardrails</strong> as we hear its thunderous steps approaching or shall we perish?</p>
<p>2023 also budded the buzzword <strong>no-code</strong>, which is a pithy way of summarizing what CISOs wish they could tell their software engineering teams.</p>
<p><img src="/blog/img/nocode-onlysecure.jpg" alt="A variation of the no take, only throw meme. It is a three panel comic of a dog holding a frisbee. In the first panel, the dog has an earnest expression and asks, please secure? In the second panel, a human hand reaches out towards the frisbee; the dog is angered by this and exclaims, no code!! In the third panel, we zoom in on the dogs irate face as it says, in all caps, only secure."></p>
<p>Finally, cybersecurity vendors are simultaneously becoming more <strong>tailored</strong> and more <strong>holistic</strong>, perhaps belying their lack of coherent collective <strong>strategy</strong> &ndash; of course, other than to milk security buyers&rsquo; wallets as their foremost <strong>mission</strong>.</p>
<h2 id="what-will-mortals-on-social-media-yell-at-me-for-not-including">What will mortals on social media yell at me for not including?</h2>
<p>Every year, mortals are mad at reality and take it out on me. As always, what buzzwords vendors spew to you in meetings are not captured by my scraper and I can only shudder to imagine what the results would be if I scraped #Security #ThoughtLeader Linkedin posts.</p>
<p>I know current consultants and <del>future consultants</del> regulators are trying to make <strong>SBOM</strong>s happen but, like fetch, they have not yet happened. <strong>MITRE</strong> grew yet again but has never hit critical mass.</p>
<p>I was actually surprised that <strong>observability</strong> is flat year-over-year, given there&rsquo;s a push on that buzzword by <del>Big Buzzword Agriculture</del> research analyst firms. Perhaps there are smudges on the <strong>single-pane-of-glass</strong> that occlude vendors&rsquo; vision.</p>
<p>Speaking of cloudy, <strong>multi-cloud</strong> is relatively flat from last year and still slim in usage. You might find this to be a <strong>remote</strong> possibility, but I assure you it&rsquo;s true.</p>
<p>And, finally, as evidenced when getting too close to the crowds at cybersecurity conferences, <strong>hygiene</strong> remains niche.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>MacKenzie, Donald. <em>Mechanizing proof: computing, risk, and trust.</em> MIT Press, 2004.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Because all the other compliance standards have totally not distorted incentives and wasted countless time, attention, and effort&hellip;&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>I did not scrape their entire website, only the main page and, if present, product/platform page. If buzzwords appear in blogs, for instance, that isn’t captured. The goal is to hone in on how cybersecurity startups presently present themselves to the market.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
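<p>For the curious, the tallying itself is unglamorous; it amounts to something like the sketch below (a simplification with a hypothetical term list, not my actual scraper).</p>
<pre><code class="language-python"># Roughly how tallies like these get made: fetch each vendor's main page,
# strip the tags, and count which sites mention each term.
# (Sketch only -- hypothetical term list, not the actual scraper.)
import re
from collections import Counter
from urllib.request import urlopen

TERMS = ["posture", "zero trust", "supply chain", "developers", "platform"]

def tally(urls: list[str]) -> Counter:
    counts: Counter = Counter()
    for url in urls:
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        text = re.sub(r"&lt;[^>]+>", " ", html).lower()  # crude tag stripping
        for term in TERMS:
            if term in text:
                counts[term] += 1   # count sites mentioning the term, not hits
    return counts
</code></pre>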
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>When Something Disappears From the Internet</title>
            <link>https://kellyshortridge.com/blog/posts/when-something-disappears-from-the-internet/</link>
            <pubDate>Tue, 28 Feb 2023 08:00:30 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/when-something-disappears-from-the-internet/</guid>
            <description> Something I desperately needed disappeared from the internet.
I was working through copy edits for my upcoming book and saw my copy editor’s comment that a link to a source wasn’t working for her. Strange, but okay. I clicked and – it wasn’t working for me, either.
Fine, I thought, I’ll just find the link on the Wayback machine. Nope. Okay, let’s google the link to get the cached version in the little hamburger menu. Nope. What about googling the quote? There were citations of the quote, but not the source document I read. I tried changing where I placed the quotation marks thinking perhaps I had quoted it incorrectly. Nope.
“How hard could an article about an ammonia plant incident from 1979 be to find?” I wondered aloud to my cat. He trilled at me with the blithe optimism of youth.
I moved on to a different branch of action, going back to the source I found in my first search that cites the quote. I checked its references for the full citation, pasting it into Google only to find more materials citing the specific quote. Alas, none were the source.
The next obvious move to try was searching for the article title in Google Scholar. Nothing. I tried pasting just the title of the article (“synthesis start-up heater failures”) in quotation marks in Google Scholar – nothing – then Google – also nothing.
I tried without quotation marks – lots of irrelevant results to sift through, none fruitful. I tried with quotation marks in Bing, finding a single, but different, citation of the quote (with a slightly different title, curiously; “heater failure” vs. “heater failures”) but not the source document. I tried it in Baidu, and yet again found only another citation.
I distinctly remember thinking, “Well, fuck.” In vain hope, knowing my frenetic research patterns of devouring sources in rapid succession and, at best, bookmarking them, I checked my Downloads folder for the article. It was not there. I had relied on the purportedly immortal memory of the internet; but memory, like the moon, is a harsh mistress.
I stared indistinctly at my monitor in disbelief for a few moments before venturing down yet a new path: tracking down volume 22 of “Ammonia plant safety (and related facilities)” to see if the article was digitized and just OCR’d poorly – explaining why it might not show up in any search results.
This was more promising. I discovered the name of the organization who publishes it – the American Institute of Chemical Engineers (AIChE) – as well as a used book seller in the UK who appeared to be the only one selling a physical copy of it, but I worried it wouldn’t arrive in time for the extremely tight turnaround for copyediting (or perhaps even for the book going to the printers). And AIChE didn’t seem to have any archives available online.
At this point I was flummoxed. My mental model of reality was showing cracks, as it never occurred to me that something might completely and utterly disappear from the internet. After all, adults admonished us as children that whatever we did on the internet would be a permanent, indelible mark on our lives (or, at least, how others perceived us).
Yet, I am notoriously stubborn and refused to wallow in a deluge of existential panic; “get your shit together, Shortridge,” my brain scolded my brain into a brainstorming. The best course of action, I felt, was calling AIChE to ask if they had a version of volume 22 I could peruse or buy. I am allergic to phone calls but such an arduous quest calls for courage and heroism, so I steeled myself for the endeavor.
Someone quickly answered and asked me if I was fine being put on hold; I acquiesced. But instead of hold music, I received a “this number is not connected” message and listened to the busy signal until the line, and a small morsel of my soul, died.
I called them again. This time, the AIChE representative informed me that they do not keep archives and suggested I call or email the Linda Hall Library instead. Having just subjected myself to a phone call and believing that an email with a generic request was too slow for my now incandescent inquiry, I visited the library’s website to do a catalog search.
I discovered that my precious volume 22 exists! Kind of! It specifically exists in CD-ROM form (three of them total!) for fifty years of proceedings. I was ecstatic – I could check out the CD-ROMs, so there was hope yet! But wait, are they also located in NYC like AIChE? No… Kansas City.
Well, fuck. Alright. I popped an antihistamine and called the Linda Hall Library. They answered with a voice recording indicating that the library was closed due to inclement weather in the Kansas City area; I searched to verify this claim and it was a legitimate excuse. Fine, I sighed, I’ll call tomorrow, which felt forever away in the midst of my epic quest.
In the meantime, a few additional avenues had occurred to me. I searched for the article on the Internet Archive and received the response, “invalid or no response from Elasticsearch.” It, too, was experiencing inclement weather.
But, I mused, maybe the Internet Archive does have it and I’m searching poorly. I searched for “ammonia plant safety” on the main page instead of the Books section and found that the Internet Archive has volumes 23, 24, 27, 28, 30, 35, 36, 37, and 42 – but not 22.
Well, fuck. I stared at my computer for a minute, indulging in a bit of renewed self-pity over this abrupt disruption of my mental model of the internet. Then I decided that fuck it, I want a souvenir of this experience, so I went back to the used bookseller and bought the physical copy anyway, timelines be damned.
After this pyrrhic victory, I finally went back to reviewing copy edits… but the mystery kept itching in my brain so in less than ten minutes I was back to hunting for the evanescent article. I went to the Google Books page for volume 22 for like the tenth time and finally clicked “Find in a library.” This took me to worldcat.org, which displayed a list of libraries and their contents. The New York Public Library, my obvious favorite, only had volumes going back to 1995; I needed 1980. I felt slightly betrayed.
I discovered Stanford’s library not only has volume 22 in their archives but will even scan it to PDF… but only if you’re affiliated with Stanford. I wondered how quickly I could become a visiting scholar or sign up for a PhD. Both, as you might suspect but I still verified just in case, are lengthy processes. Instead, I sent out a plea for help on Mastodon – but no one cared and I can’t blame them.
I looked up other libraries that possess my precious article, wracking my brain for academics I know. Princeton seems to have it, but I don’t know anyone there. The MIT library – where I do know people – lists seven other libraries that they know have it and I know no one in any of those, so… Stony Brook also has it; but I don’t know anyone there and going out to Long Island is Absolutely Certain A Burden (as any New Yorker can attest, ACAB).
At this point I had spent at least an hour, if not two, in pursuit of this volume, and I realized I had better things to do with my life. I decided to just call the Linda Hall Library again the next day and see what they could do.
I called the next morning. The soft-spoken gentleman on the phone was very helpful in tracking down that the Linda Hall Library possesses not just the CD-ROM copy but a physical copy as well and, even better, they can provide a scanned version for a total fee of $49.50. I rejoiced. I submitted the request with the 6-hour premium delivery option so that, if everything went to plan, I would receive a scanned copy by day’s end.
I received the scanned copy far sooner than day’s end, much to my delight, and nearly wept with joy when I saw the quote in question on the fifth page of the document1. There, at a tilted angle, read:
“If you only depend on well-trained operators, you may fail. I think you really must depend on the design approach and don’t depend much at all on the operation.”
The parallels to software engineering are likely obvious just from reading the quote. There are other quotes in this article – specifically in the discussion section regarding the ammonia plant accident – that are highly pertinent, too, like:
“The point that makes this operation the most dangerous is the fact that the heater is not used daily. You may only use it a couple of times a year, and most of the time it’s just sitting idle, so that you don’t really pay too much attention to it.”
It makes me want to collect an exhaustive list of similar publications from other niche domains that publicize their incident reviews. What else might we glean?
My forthcoming book is, in part, a bold attempt at pattern matching across domains like chemical engineering but also healthcare, air transportation, forestry, ecology, aerospace, urban infrastructure, natural disasters, and other complex systems domains to borrow opportunities to sustain resilience in our complex software systems.
Yet, I know I am only scratching the surface since so much of this knowledge is siloed; what other industry publications with valuable lessons learned might have vanished from the internet as well, or never made it there in the first place?
My physical copy from the UK arrived in under two weeks (but after the completion of the copy edit, as predicted), so I received a second dopamine spike when feeling the volume in my hands and reading the quote directly from its worn, weathered pages. I never thought I would feel such joy from owning a publication about the agonies and ecstasies of ammonia production from many years before I was born, but in another sense, that feels entirely on brand for me.
Conclusion
What is the lesson of this tale? Libraries and esoteric bookstores are marvelous and we should fight to preserve them however we can. Also, if you find a source that is quite important to your project, download a local version just in case (or ensure it makes it onto the Wayback Machine).
It isn’t every day that something disappears from the internet. But, like any complex system, we must appreciate that surprises are natural and prepare accordingly.
Enjoy this post? You might like my book, Security Chaos Engineering: Sustaining Resilience in Software and Systems (with Aaron Rinehart), available for preorder on Amazon, Bookshop, and other major retailers online.
Even the fabulous Linda Hall Library had the article listed as “Synthesis Start-up Heater Failures” (plural), even though, as evident from the physical copy, it’s “Synthesis Start-up Heater Failure” (singular). I find that kind of discrepancy a bit charming in the digital age. ↩︎
</description>
            <atom:content type="html"><![CDATA[<hr>
<p>Something I desperately needed disappeared from the internet.</p>
<p>I was working through copy edits for <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">my upcoming book</a> and saw my copy editor’s comment that a link to a source wasn’t working for her. Strange, but okay. I clicked and – it wasn’t working for me, either.</p>
<p>Fine, I thought, I’ll just find the link on the Wayback machine. Nope. Okay, let’s google the link to get the cached version in the little hamburger menu. Nope. What about googling the quote? There were citations of the quote, but not the source document I read. I tried changing where I placed the quotation marks thinking perhaps I had quoted it incorrectly. Nope.</p>
<p>“How hard could an article about an ammonia plant incident from 1979 be to find?” I wondered aloud to my cat. He trilled at me with the blithe optimism of youth.</p>
<p>I moved on to a different branch of action, going back to the source I found in my first search that cites the quote. I checked its references for the full citation, pasting it into Google only to find more materials citing the specific quote. Alas, none were the source.</p>
<p>The next obvious move to try was searching for the article title in Google Scholar. Nothing. I tried pasting just the title of the article (“synthesis start-up heater failures”) in quotation marks in Google Scholar – nothing – then Google – also nothing.</p>
<p>I tried without quotation marks – lots of irrelevant results to sift through, none fruitful. I tried with quotation marks in Bing, finding a single, but different, citation of the quote (with a slightly different title, curiously; “heater failure” vs. “heater failures”) but not the source document. I tried it in Baidu, and yet again found only another citation.</p>
<p>I distinctly remember thinking, “Well, fuck.” In vain hope, knowing my frenetic research patterns of devouring sources in rapid succession and, at best, bookmarking them, I checked my Downloads folder for the article. It was not there. I had relied on the purportedly immortal memory of the internet; but memory, like the moon, is a harsh mistress.</p>
<p>I stared indistinctly at my monitor in disbelief for a few moments before venturing down yet a new path: tracking down volume 22 of “Ammonia plant safety (and related facilities)” to see if the article was digitized and just OCR’d poorly – explaining why it might not show up in any search results.</p>
<p>This was more promising. I discovered the name of the organization that publishes it – the American Institute of Chemical Engineers (AIChE) – as well as a used bookseller in the UK who appeared to be the only one selling a physical copy of it, but I worried it wouldn’t arrive in time for the extremely tight turnaround for copyediting (or perhaps even for the book going to the printers). And AIChE didn’t seem to have any archives available online.</p>
<p>At this point I was flummoxed. My mental model of reality was showing cracks, as it never occurred to me that something might completely and utterly disappear from the internet. After all, adults admonished us as children that whatever we did on the internet would be a permanent, indelible mark on our lives (or, at least, how others perceived us).</p>
<p>Yet, I am notoriously stubborn and refused to wallow in a deluge of existential panic; “get your shit together, Shortridge,” my brain scolded my brain into a brainstorming. The best course of action, I felt, was calling AIChE to ask if they had a version of volume 22 I could peruse or buy. I am allergic to phone calls but such an arduous quest calls for courage and heroism, so I steeled myself for the endeavor.</p>
<p>Someone quickly answered and asked me if I was fine being put on hold; I acquiesced. But instead of hold music, I received a “this number is not connected” message and listened to the busy signal until the line, and a small morsel of my soul, died.</p>
<p>I called them again. This time, the AIChE representative informed me that they do not keep archives and suggested I call or email the Linda Hall Library instead. Having just subjected myself to a phone call and believing that an email with a generic request was too slow for my now incandescent inquiry, I visited the library’s website to do a catalog search.</p>
<p>I discovered that my precious volume 22 exists! Kind of! It specifically exists in CD-ROM form (three of them total!) for fifty years of proceedings. I was ecstatic – I could check out the CD-ROMs, so there was hope yet! But wait, are they also located in NYC like AIChE? No… Kansas City.</p>
<p>Well, fuck. Alright. I popped an antihistamine and called the Linda Hall Library. They answered with a voice recording indicating that the library was closed due to inclement weather in the Kansas City area; I searched to verify this claim and it was a legitimate excuse. Fine, I sighed, I’ll call tomorrow, which felt forever away in the midst of my epic quest.</p>
<p>In the meantime, a few additional avenues had occurred to me. I searched for the article on the Internet Archive and received the response, “invalid or no response from Elasticsearch.” It, too, was experiencing inclement weather.</p>
<p><img src="/blog/img/disappearing-article/elasticsearch-message.png" alt="A screenshot of the Internet Archive showing the response “No results matched your criteria; invalid or no response from Elasticsearch.”"></p>
<p>But, I mused, maybe the Internet Archive does have it and I’m searching poorly. I searched for “ammonia plant safety” on the main page instead of the Books section and found that the Internet Archive has volumes 23, 24, 27, 28, 30, 35, 36, 37, and 42 – but not 22.</p>
<p>Well, fuck. I stared at my computer for a minute, indulging in a bit of renewed self-pity over this abrupt disruption of my mental model of the internet. Then I decided that fuck it, I want a souvenir of this experience, so I went back to the used bookseller and bought the physical copy anyway, timelines be damned.</p>
<p>After this pyrrhic victory, I finally went back to reviewing copy edits… but the mystery kept itching in my brain so in less than ten minutes I was back to hunting for the evanescent article. I went to the Google Books page for volume 22 for like the tenth time and finally clicked “Find in a library.” This took me to worldcat.org, which displayed a list of libraries and their contents. The New York Public Library, my obvious favorite, only had volumes going back to 1995; I needed 1980. I felt slightly betrayed.</p>
<p>I discovered Stanford’s library not only has volume 22 in their archives but will even scan it to PDF… but only if you’re affiliated with Stanford. I wondered how quickly I could become a visiting scholar or sign up for a PhD. Both, as you might suspect but I still verified just in case, are lengthy processes. Instead, I sent out a plea for help <a href="https://hachyderm.io/@shortridge">on Mastodon</a> – but no one cared and I can’t blame them.</p>
<p>I looked up other libraries that possess my precious article, wracking my brain for academics I know. Princeton seems to have it, but I don’t know anyone there. The MIT library – where I do know people – lists seven other libraries that they know have it and I know no one in any of those, so&hellip; Stony Brook also has it; but I don’t know anyone there and going out to Long Island is Absolutely Certain A Burden (as any New Yorker can attest, ACAB).</p>
<p>At this point I had spent at least an hour, if not two, in pursuit of this volume, and I realized I had better things to do with my life. I decided to just call the Linda Hall Library again the next day and see what they could do.</p>
<p>I called the next morning. The soft-spoken gentleman on the phone was very helpful in tracking down that the Linda Hall Library possesses not just the CD-ROM copy but a physical copy as well and, even better, they can provide a scanned version for a total fee of $49.50. I rejoiced. I submitted the request with the 6-hour premium delivery option so that, if everything went to plan, I would receive a scanned copy by day&rsquo;s end.</p>
<p><img src="/blog/img/disappearing-article/transaction-status.png" alt="A screenshot of my Linda Hall Library transaction for the title “Ammonia Plant Safety: Synthesis Start-up Heater Failures.” The status reads: Awaiting Document Delivery Processing."></p>
<p>I received the scanned copy far sooner than day’s end, much to my delight, and nearly wept with joy when I saw the quote in question on the fifth page of the document<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. There, at a tilted angle, read:</p>
<blockquote>
<p>“If you only depend on well-trained operators, you may fail. I think you really must depend on the design approach and don’t depend much at all on the operation.”</p>
</blockquote>
<p><img src="/blog/img/disappearing-article/duskin-quote.png" alt="A screenshot of the quote from the article in question, attributed to “Duskin, ICI.” It reads: “We’ve had two failures, one of which was very similar to yours, and our analysis also was surprisingly similar to yours. I have two comments to make and one question. There are two approaches you can take on this: the design approach and the operational approach. In the design approach you must check out the instrumentation and trip and alarm parameters, and you’ve got to watch the metallurgy as well. The operational approach is the one your paper discusses. Now, what we have been finding is that the best approach to take is really the design one, and I’m sure you’ve gone for the correct approach on flow. I think if you consider only metallurgy, and make sure you put in stainless steel, then no matter that [sic] temperatures you reach, you should be safe. If you depend only on well-trained operators, you may fail. I think you really must depend on the design approach and don’t depend much at all on the operation. You also said you had temperature trips checked two weeks previously, but you didn’t mention them again. Presumably that evidence was destroyed in the fire?”"></p>
<p>The parallels to software engineering are likely obvious just from reading the quote. There are other quotes in this article – specifically in the discussion section regarding the ammonia plant accident – that are highly pertinent, too, like:</p>
<blockquote>
<p>“The point that makes this operation the most dangerous is the fact that the heater is not used daily. You may only use it a couple of times a year, and most of the time it’s just sitting idle, so that you don’t really pay too much attention to it.”</p>
</blockquote>
<p>It makes me want to collect an exhaustive list of similar publications from other niche domains that publicize their incident reviews. What else might we glean?</p>
<p>My forthcoming book is, in part, a bold attempt at pattern matching across domains like chemical engineering but also healthcare, air transportation, forestry, ecology, aerospace, urban infrastructure, natural disasters, and other complex systems domains to borrow opportunities to sustain resilience in our complex software systems.</p>
<p>Yet, I know I am only scratching the surface since so much of this knowledge is siloed; what other industry publications with valuable lessons learned might have vanished from the internet as well, or never made it there in the first place?</p>
<p>My physical copy from the UK arrived in under two weeks (but after the completion of the copy edit, as predicted), so I received a second dopamine spike when feeling the volume in my hands and reading the quote directly from its worn, weathered pages. I never thought I would feel such joy from owning a publication about the agonies and ecstasies of ammonia production from many years before I was born, but in another sense, that feels entirely on brand for me.</p>
<p><img src="/blog/img/disappearing-article/geralt-ammonia-book.jpg" alt="A photograph of a fluffy tuxedo cat sniffing an old, weathered copy of “Ammonia Plant Safety (and related facilities)” prepared by the editors of Chemical Engineering Progress and published by the American Institute of Chemical Engineers. Its cover is various shades of tan and beige and features an architecture diagram of a subsystem related to ammonia plants. In the background of the photograph, there are string lights in shades of pink, purple, and blue. The cat, in this case, is Chekov’s cat given he was mentioned earlier in this post.">)</p>
<p><img src="/blog/img/disappearing-article/synthesis-startup-heater-failure-article.jpg" alt="A photograph of the article “Synthesis Start-Up Heater Failure” on top of my custom desk by Chassie, which features a circuit-board meets magic runes design in pastel shades of pink, purple, and blue. My matching keyboard is barely visible in the top part of the photograph. The synopsis of the article reads: “The fire at Monsanto’s ammonia plant resulted from a rupture in one of the two synthesis start-up heater coils. The failure was caused by the localized overheating because of insufficient flow through the heater. There were no injuries to personnel.”"></p>
<h2 id="conclusion">Conclusion</h2>
<p>What is the lesson of this tale? Libraries and esoteric bookstores are marvelous and we should fight to preserve them however we can. Also, if you find a source that is quite important to your project, download a local version just in case (or ensure it makes it onto the Wayback Machine).</p>
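<p>If you want to make that belt-and-suspenders habit automatic, here is a minimal sketch – assuming Python with the <code>requests</code> package installed and using the Wayback Machine’s public “Save Page Now” endpoint at <code>web.archive.org/save/</code> – that asks the archive to capture a URL and also stashes a local copy. Treat the filename handling and the lack of retries as illustrative, not prescriptive:</p>
<pre><code class="language-python"># save_source.py – a rough sketch: ask the Wayback Machine to capture a URL
# and keep a local copy, too. Assumes the `requests` package is installed and
# that the public "Save Page Now" endpoint (https://web.archive.org/save/ plus
# the URL) is reachable; it can rate-limit, so treat a failure as a nudge to retry.
import sys
import requests


def save_source(url: str, local_path: str) -> None:
    # Best-effort request for the Wayback Machine to archive the page.
    wayback = requests.get("https://web.archive.org/save/" + url, timeout=60)
    print("Wayback save request returned HTTP", wayback.status_code)

    # Belt and suspenders: download and keep a local copy as well.
    page = requests.get(url, timeout=60)
    page.raise_for_status()
    with open(local_path, "wb") as f:
        f.write(page.content)
    print("Saved local copy to", local_path)


if __name__ == "__main__":
    save_source(sys.argv[1], sys.argv[2])
</code></pre>
<p>Running it as, say, <code>python save_source.py https://example.com/article article.html</code> would, in theory, cover both failure modes I hit above: the page vanishing from the live web, and the page never making it into the archive in the first place.</p>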
<p>It isn’t every day that something disappears from the internet. But, like any complex system, we must appreciate that surprises are natural and prepare accordingly.</p>
<br>
<p><em>Enjoy this post? You might like my book, <strong>Security Chaos Engineering: Sustaining Resilience in Software and Systems</strong> (with Aaron Rinehart), available for preorder on <a href="https://www.amazon.com/Security-Chaos-Engineering-Sustaining-Resilience/dp/1098113829">Amazon</a>, <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Bookshop</a>, and other major retailers online.</em></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Even the fabulous Linda Hall Library had the article listed as “Synthesis Start-up Heater Failures” (plural), even though, as evident from the physical copy, it’s “Synthesis Start-up Heater Failure” (singular). I find that kind of discrepancy a bit charming in the digital age.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2022 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2022-reading-list/</link>
            <pubDate>Wed, 21 Dec 2022 08:00:00 +0000</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2022-reading-list/</guid>
            <description>We’ve reached the winter solstice once more, so it’s time for my reading list. In 2022, I was steeped in writing the upcoming Security Chaos Engineering book – a tome dedicated to resilience in software – but still managed to squeeze reading into the nooks and crannies of my life.
I averaged exactly 3 books per month in 2022, lower than both 2021 and 2020 – but the number of books I wrote skyrocketed (the aforementioned O’Reilly book and the second draft of my fiction novel). So, seems reasonable all in all.
In last year’s post, I noted I read nearly 10 papers per month but this year I lost track of how many I crammed into my brain, so an exact tally shall remain a mystery. The number of research papers I wrote with esteemed collaborators is far easier to count (3); they cover action bias in incident response; the ROI of security chaos engineering; and the “sludge” strategy for systems defense. Our research article on Deception Environments was also published in the June 2022 edition of Communications of the ACM, although we wrote it last year.
As always, I am not rating or recommending any specific works in the list below. With that said, I dedicate considerable effort to screening books beforehand since time is the most precious, fleeting resource we possess and we must protect it like a dragon hoards gold.
If you’re looking for more science fiction, speculative fiction, pretentious literary fiction, philosophical navel-gazing, or non-fiction recommendations, check out my reading lists from prior years:
2021 reading list
2020 reading list
2019 reading list
2018 reading list
2017 reading list
2016 reading list
Fiction
All the Birds in the Sky by Charlie Jane Anders
Babel: Or the Necessity of Violence: An Arcane History of the Oxford Translators’ Revolution by R.F. Kuang
The City We Became by N.K. Jemisin
Dark Matter: A Century of Speculative Fiction from the African Diaspora by Sheree R. Thomas
The Deep by Rivers Solomon
Downbelow Station by C.J. Cherryh
Fathers and Sons by Ivan Turgenev
Faust by Johann Wolfgang von Goethe
Fevered Star by Rebecca Roanhorse
Figuring by Maria Popova
Fledgling by Octavia E. Butler
The Man Without Qualities by Robert Musil
A Master of Djinn by P. Djèlí Clark
Moby Dick: Or, the Whale by Herman Melville (re-read)
The Night Watchman by Louise Erdrich
Orlando by Virginia Woolf
Piranesi by Susanna Clarke
The Recognitions by William Gaddis
The Remains of the Day by Kazuo Ishiguro
The Tempest by William Shakespeare (re-read)
Things Fall Apart by Chinua Achebe
Too Loud a Solitude by Bohumil Hrabal
Winter in the Blood by James Welch
Non-fiction
The Artist’s Way by Julia Cameron
The Black Agenda: Bold Solutions for a Broken System edited by Anna Gifty Opoku-Agyeman
Bolivar: American Liberator by Marie Arana
The Death and Life of Great American Cities by Jane Jacobs
Influence Is Your Superpower: The Science of Winning Hearts, Sparking Change, and Making Good Things Happen by Zoe Chance
The Life of the Mind by Hannah Arendt
Normal Accidents: Living with High Risk Technologies by Charles Perrow
Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World by Nina Kraus
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed by James C. Scott (re-read)
The Science of Can and Can’t: A Physicist’s Journey Through the Land of Counterfactuals by Chiara Marletto
What is Life? by Lynn Margulis and Dorion Sagan
Wired for Love: A Neuroscientist’s Journey Through Romance, Loss, and the Essence of Human Connection by Stephanie Cacioppo
Your Brain Is a Time Machine: The Neuroscience and Physics of Time by Dean Buonomano
</description>
            <atom:content type="html"><![CDATA[<p>We&rsquo;ve reached the winter solstice once more, so it&rsquo;s time for my reading list. In 2022, I was steeped in writing the upcoming <a href="https://bookshop.org/p/books/security-chaos-engineering-developing-resilience-and-safety-at-speed-and-scale-aaron-rinehart/18793471">Security Chaos Engineering book</a> &ndash; a tome dedicated to resilience in software &ndash; but still managed to squeeze reading into the nooks and crannies of my life.</p>
<p>I averaged exactly 3 books per month in 2022, lower than both 2021 and 2020 &ndash; but the number of books I wrote skyrocketed (the <a href="https://www.amazon.com/Security-Chaos-Engineering-Developing-Resilience/dp/1098113829">aforementioned O&rsquo;Reilly book</a> and the second draft of my fiction novel). So, seems reasonable all in all.</p>
<p>In last year&rsquo;s post, I noted I read nearly 10 papers per month but this year I lost track of how many I crammed into my brain, so an exact tally shall remain a mystery. The number of research papers I wrote with esteemed collaborators is far easier to count (3); they cover <a href="https://kellyshortridge.com/blog/posts/opportunity-cost-action-bias-cybersecurity-incident-response/">action bias in incident response</a>; <a href="https://ieeexplore.ieee.org/document/9973042/">the ROI of security chaos engineering</a>; and <a href="https://arxiv.org/abs/2211.16626">the &ldquo;sludge&rdquo; strategy for systems defense</a>. Our <a href="https://cacm.acm.org/magazines/2022/6/261170-lamboozling-attackers/fulltext">research article on Deception Environments</a> was also published in the June 2022 edition of <em>Communications of the ACM</em>, although we wrote it last year.</p>
<p>As always, I am not rating or recommending any specific works in the list below. With that said, I dedicate considerable effort to screening books beforehand since time is the most precious, fleeting resource we possess and we must protect it like a dragon hoards gold.</p>
<p>If you’re looking for more science fiction, speculative fiction, pretentious literary fiction, philosophical navel-gazing, or non-fiction recommendations, check out my reading lists from prior years:</p>
<ul>
<li><a href="/blog/posts/2021-reading-list">2021 reading list</a></li>
<li><a href="/blog/posts/2020-reading-list">2020 reading list</a></li>
<li><a href="/blog/posts/2019-reading-list">2019 reading list</a></li>
<li><a href="/blog/posts/2018-reading-list">2018 reading list</a></li>
<li><a href="/blog/posts/2017-reading-list">2017 reading list</a></li>
<li><a href="/blog/posts/2016-reading-list">2016 reading list</a></li>
</ul>
<h2 id="fiction">Fiction</h2>
<p><a href="https://bookshop.org/p/books/all-the-birds-in-the-sky-charlie-jane-anders/7103510">All the Birds in the Sky</a> by Charlie Jane Anders</p>
<p><a href="https://bookshop.org/p/books/babel-or-the-necessity-of-violence-an-arcane-history-of-the-oxford-translators-revolution-r-f-kuang/18269577">Babel: Or the Necessity of Violence: An Arcane History of the Oxford Translators&rsquo; Revolution</a> by R.F Kuang</p>
<p><a href="https://bookshop.org/p/books/the-city-we-became-n-k-jemisin/113989">The City We Became</a> by N.K. Jemisin</p>
<p><a href="https://bookshop.org/p/books/dark-matter-a-century-of-speculative-fiction-from-the-african-diaspora-sheree-r-thomas/16427055">Dark Matter: A Century of Speculative Fiction from the African Diaspora</a> by Sheree R. Thomas</p>
<p><a href="https://bookshop.org/p/books/the-deep-rivers-solomon/6706728">The Deep</a> by Rivers Solomon</p>
<p><a href="https://bookshop.org/p/books/downbelow-station-c-j-cherryh/6783184">Downbelow Station</a> by C.J. Cherryh</p>
<p><a href="https://bookshop.org/p/books/fathers-and-sons-ivan-sergeevich-turgenev/16637191">Fathers and Sons</a> by Ivan Turgenev</p>
<p><a href="https://bookshop.org/p/books/faust-part-one-part-one-j-w-von-goethe/12039342">Faust</a> by Johann Wolfgang von Goethe</p>
<p><a href="https://bookshop.org/p/books/fevered-star-volume-2-rebecca-roanhorse/18573858">Fevered Star</a> by Rebecca Roanhorse</p>
<p><a href="https://bookshop.org/p/books/figuring-maria-popova/10226406">Figuring</a> by Maria Popova</p>
<p><a href="https://bookshop.org/p/books/fledgling-octavia-e-butler/18331847">Fledgling</a> by Octavia E. Butler</p>
<p><a href="https://bookshop.org/p/books/man-without-qualities-robert-musil/18761811">The Man Without Qualities</a> by Robert Musil</p>
<p><a href="https://bookshop.org/p/books/a-master-of-djinn-p-djeli-clark/15126050">A Master of Djinn</a> by P. Djèlí Clark</p>
<p><a href="https://bookshop.org/p/books/moby-dick-or-the-whale-herman-melville/18595562">Moby Dick: Or, the Whale</a> by Herman Melville (re-read)</p>
<p><a href="https://bookshop.org/p/books/the-night-watchman-louise-erdrich/7326842">The Night Watchman</a> by Louise Erdrich</p>
<p><a href="https://bookshop.org/p/books/orlando-a-biography/18876942">Orlando</a> by Virgninia Woolf</p>
<p><a href="https://bookshop.org/p/books/piranesi-susanna-clarke/15861178">Piranesi</a> by Susanna Clarke</p>
<p><a href="https://bookshop.org/p/books/the-recognitions-william-gaddis/14141581">The Recognitions</a> by William Gaddis</p>
<p><a href="https://bookshop.org/p/books/the-remains-of-the-day-kazuo-ishiguro/6713500">The Remains of the Day</a> by Kazuo Ishiguro</p>
<p><a href="https://bookshop.org/p/books/the-tempest-william-shakespeare/17484052">The Tempest</a> by William Shakespeare (re-read)</p>
<p><a href="https://bookshop.org/p/books/things-fall-apart-chinua-achebe/6698050">Things Fall Apart</a> by Chinua Achebe</p>
<p><a href="https://bookshop.org/p/books/too-loud-a-solitude-bohumil-hrabal/6688762">Too Loud a Solitude</a> by Bohumil Hrabal</p>
<p><a href="https://bookshop.org/p/books/winter-in-the-blood-james-welch/11699390">Winter in the Blood</a> by James Welch</p>
<h2 id="non-fiction">Non-fiction</h2>
<p><a href="https://bookshop.org/p/books/the-artist-s-way-30th-anniversary-edition-julia-cameron/6665657">The Artist&rsquo;s Way</a> by Julia Cameron</p>
<p><a href="https://bookshop.org/p/books/the-black-agenda-bold-solutions-for-a-broken-system-anna-gifty-opoku-agyeman/16721778">The Black Agenda: Bold Solutions for a Broken System</a> edited by Anna Gifty Opoku-Agyeman</p>
<p><a href="https://bookshop.org/p/books/bolivar-american-liberator-marie-arana/10560827">Bolivar: American Liberator</a> by Marie Arana</p>
<p><a href="https://bookshop.org/p/books/the-death-and-life-of-great-american-cities-jane-jacobs/6717374">The Death and Life of Great American Cities</a> by Jane Jacobs</p>
<p><a href="https://bookshop.org/p/books/influence-is-your-superpower-the-science-of-winning-hearts-sparking-change-and-making-good-things-happen-zoe-chance/16987804">Influence Is Your Superpower: The Science of Winning Hearts, Sparking Change, and Making Good Things Happen</a> by Zoe Chance</p>
<p><a href="https://bookshop.org/p/books/life-of-the-mind-one-thinking-two-willing-hannah-arendt/6681282">The Life of the Mind</a> by Hannah Arendt</p>
<p><a href="https://bookshop.org/p/books/normal-accidents-living-with-high-risk-technologies-updated-edition-revised-charles-perrow/10369279">Normal Accidents: Living with High Risk Technologies</a> by Charles Perrow</p>
<p><a href="https://bookshop.org/p/books/of-sound-mind-how-our-brain-constructs-a-meaningful-sonic-world-nina-kraus/18317252">Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World</a> by Nina Kraus</p>
<p><a href="https://bookshop.org/p/books/seeing-like-a-state-how-certain-schemes-to-improve-the-human-condition-have-failed-james-c-scott/8526763">Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed</a> by James C. Scott (re-read)</p>
<p><a href="https://bookshop.org/p/books/the-science-of-can-and-can-t-a-physicist-s-journey-through-the-land-of-counterfactuals-chiara-marletto/15119570">The Science of Can and Can&rsquo;t: A Physicist&rsquo;s Journey Through the Land of Counterfactuals</a> by Chiara Marletto</p>
<p><a href="https://www.goodreads.com/book/show/91262.What_Is_Life_">What is Life?</a> by Lynn Margulis and Dorion Sagan</p>
<p><a href="https://bookshop.org/p/books/wired-for-love-a-neuroscientist-s-journey-through-romance-loss-and-the-essence-of-human-connection-stephanie-cacioppo/16871097">Wired for Love: A Neuroscientist&rsquo;s Journey Through Romance, Loss, and the Essence of Human Connection</a> by Stephanie Cacioppo</p>
<p><a href="https://bookshop.org/p/books/your-brain-is-a-time-machine-the-neuroscience-and-physics-of-time-dean-buonomano/8773332">Your Brain Is a Time Machine: The Neuroscience and Physics of Time</a> by Dean Buonomano</p>
]]></atom:content>
        </item>
        
        <item>
            <title>What &#34;Security&#34; Means in the Information Society (Track VI)</title>
            <link>https://kellyshortridge.com/blog/posts/what-security-means-in-the-information-society-part-6/</link>
            <pubDate>Thu, 20 Oct 2022 08:24:15 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/what-security-means-in-the-information-society-part-6/</guid>
            <description>This essay concludes our etymological journey of the word “security,” examining the meaning of security in the information society then summarizing what we’ve discovered on our semantic safari. It is Track VI of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
The Meaning of “Security” in the Information Society
We end our journey in the modern era, the information society1. The best way to summarize the concept of security in modern times is that it’s controversial af and dependent on context2. But, Wolfers’ two-part definition of “security” from 1962 is widely cited3:
In an objective sense, security measures the absence of threats to acquired values
In a subjective sense, security reflects the absence of fear that such values will be attacked
The term “values”, here, of course, is ambiguous and open-ended. But let’s think about what this means for the cybersecurity context.
A realist would say security is achieved when “the dangers posed by manifold threats, challenges, vulnerabilities and risks” in the digital realm are “avoided, prevented, managed, coped with, mitigated and adapted to” by individuals, groups, or organizations4.
A social constructivist would say security is achieved “once the perception and fears of security ‘threats’, ‘challenges’, ‘vulnerabilities’ and ‘risks’ are allayed and overcome.” That is, objective security is not enough; the subjective will always wield considerable influence in the cybersecurity context.
In my experience, tech bros really do not like the idea that emotions or subjectivity come into play in tech stuff at all. They tend to describe their emotions as “logic” and subjective experience as “facts.” We don’t have enough time to unpack all of that in this post (and if only more of them would go to therapy). But it’s a very real problem when traditional cybersecurity folk wisdom was often woven by people who think the objective is all that matters.
What’s worse about the cybersecurity status quo is that the subjective is dismissed but the objective isn’t even really measured. Again, objective security measures the absence of threats to acquired values. We do not have objective security in traditional infosec and, by that definition, it’s not even really what’s being pursued. Even when fleshing out the realist interpretation of objective security, traditional infosec mostly focuses on the “avoided” or “prevented” part rather than the managed, coped with, adapted to part5.
From the perspective of modern scholars, security is meant to lead to more goal-oriented behavior while insecurity leads to threat-oriented behavior. As anyone who’s walked the RSAC vendor hall knows all too well, basically everything in cybersecurity today is about THREATS. Everything is a potential THREAT: your API, your CI/CD pipelines, your laptop, your phone, your fridge, your colleagues, your loved ones, even your own BRAIN is a THREAT because what if you make a MISTAKE and become the very INSIDER THREAT you swore to destroy!?!?!
Everything about the infosec status quo today reflects threat-oriented behavior, therefore implying insecurity rather than security. Traditional infosec isn’t about preserving and upholding values – like prosperity or productivity or an inclusive work environment. Traditional infosec is about preventing and avoiding threats, aiming for the impossible standard of attacks never successfully happening.
The cybersecurity status quo forgets the whole point of stopping the threats is to preserve certain values.
This fetishization of threats and elimination of them as an aim in itself is how we end up with infosec programs which cause so much grief and anxiety and friction for everyone else in the organization. If the infosec industry actually focused on preservation of values, then UX would probably be one of the most important skills in the discipline (but how would incumbent cybersecurity vendors milk that for cash?).
After all, what’s the point of protecting the cherished organizational value of productivity from potential attacks – which likely only happen sometimes rather than continuously, from an impact perspective – if you’re going to erode that value daily through security policies that seem divorced from real goals, constraints, and workflows?
What’s the point in protecting the organization against a potential financial loss due to attack when you’re not only spending its money on security (which could be spent elsewhere), but also slowing down its ability to grow revenue due to security procedures? For an organization with $100 million in revenue wanting to gain market share, shipping 20% fewer features per year due to friction created by the security program has more material impact short, medium, and long term than a ransomware operator demanding $1 million, $5 million, or even $10 million.
Waldron’s quite recent definition of the word security summarizes this all nicely and is worth repeating here:
“… security now comprises protection against harm to one’s basic mode of life and economic values, as well as reasonable protection against fear and terror, and the presence of a positive assurance that these values will continue to be maintained in the future.”6
Cybersecurity as it stands today flunks this definition. It is impossible to provide assurance that basic and economic values will be maintained in the future if you do not know what they are, which the infosec status quo does not know because they do not care because all of that is irrelevant to their noble need to sacrifice everyone’s time, energy, and money at the altar of the FUD gods to gain more budget, more headcount, more influence and they shroud this ritual in a lab coat of “rational” paranoia.
Before architecting a security program or allocating cybersecurity budget, we should understand the organization’s basic mode of life and economic values, including at the level of any teams who will be especially subject to security procedures (like software engineers). From there, we should aim to provide reasonable protection against fear and terror – that is, to provide subjective security, that ancient-school version of securitas which meant freedom from anxiety, fear, or care.
Our job as defenders should be to reduce the complexity of the security problem to such an extent that the rest of the organization is free from care about it (in fact, the systems theorist Niklas Luhmann argued that security efforts explicitly aim to reduce the complexity of the world7). And cybersecurity’s job should be to provide positive assurance that the organization’s values (like prosperity, productivity, inclusion, whatever) can be maintained going forward.
But all of the above requires user research and empathy and curiosity about things beyond infosec’s viewing frustum. This modern definition of security means the organization must treat security as an interactive discipline, not a prescriptive one. The existence of a security program cannot be justified with “there is a risk here and it will never go away,”8 multiplied across all identified “risks,” which thereby implies a security organization that can only grow in scope and authority.
If those who provide security are the rulers and the users the ruled, what security really requires is the rulers respecting the ruled and the rulers earning the respect of the ruled rather than extracting it. This reflects a radical departure from traditional infosec and thus there is and will be resistance from the entrenched9.
Security, in practice, is supposed to reside at the beneficial balance between two evils: absolute fear and absolute security – and absolute security, per Kant, can only be found at the cemetery.10
Summarizing what we mean when we say “security”
Security is now one of those big words like justice and freedom and liberty which serve more as symbols with fuzzy flavors of feeling – that is, as concepts – rather than as words with straightforward definitions. As we’ve seen, asking the titular question, “When We Say Security, What Do We Mean?” is an exploratory exercise rather than an excavation. There is no ground truth we shall hit with enough sweat and shoveling.
We traversed a tapestry of meanings throughout this concept album. We finish it with a rough sense of, like, “Security is about preserving chill vibes in the presence of threats to those vibes.”
But more usefully (yet less concretely), we have a better mouthfeel for what “security” means. The threats aren’t the point; the poignant part is the potential absence of a valuable good or state of being which we very much wish to preserve.
An absence of threats is only worthwhile if it guarantees the presence of serenity and prosperity. In the word “security” is also a promise – that you hold onto something of value and that this value might grow in the future.
Perhaps most of all, this semantic journey of ours today reveals how wayward traditional cybersecurity is from these notions; it resembles a nemesis of the security concept rather than its descendant. I hope you join me on the quest to finally realize the full potential of the security concept, to grow peaches rather than lemons11, to build a sweeter future for ourselves and all the other stakeholders in this strange system we call society.
Conclusion
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
It also brings us to one of my favorite book quotes: “In the information society, nobody thinks. We expected to banish paper, but we actually banished thought.” (said by Ian Malcolm in Jurassic Park by Michael Crichton). ↩︎
“Security is ambiguous and elastic in its meaning.” – Art, 1993 ↩︎
Wolfers, A. (1962). Discord and collaboration: essays on international politics. Baltimore: Johns Hopkins Press. ↩︎
Brauch, H. G. (2011). Concepts of security threats, challenges, vulnerabilities and risks. In Coping with global environmental change, disasters and security (pp. 61-106). Springer, Berlin, Heidelberg. https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf ↩︎
Yet again, this is a dynamic Security Chaos Engineering (SCE) is seeking to change. ↩︎
Waldron, J. (2006). Safety and security. Neb. L. Rev., 85, 454. ↩︎
Luhmann, N. (2018). Trust and power. John Wiley &amp; Sons. ↩︎
I have much, much more to say on this topic (inspired by this paper: Power, M. (2009). The risk management of nothing. Accounting, organizations and society, 34(6-7), 849-855.) ↩︎
Surprise, surprise, Security Chaos Engineering (SCE) is aligned with the vibe of earning respect. ↩︎
Arenas, J. F. M. (2008). From Homer to Hobbes and Beyond—Aspects of ’security’ in the European Tradition. In Globalization and environmental challenges (pp. 263-277). Springer, Berlin, Heidelberg. ↩︎
Shortridge, Kelly (2022). From Lemons to Peaches: Improving Security ROI through Security Chaos Engineering. IEEE SecDev 2022, forthcoming. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay concludes our etymological journey of the word &ldquo;security,&rdquo; examining the meaning of security in the information society then summarizing what we&rsquo;ve discovered on our semantic safari. It is Track VI of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<h2 id="the-meaning-of-security-in-the-information-society">The Meaning of &ldquo;Security&rdquo; in the Information Society</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-enabling-tranquility.png" alt="A marble goddess sits on a gilded throne in pastel clouds. She is cleansing a laptop which is beaming with iridescent light."></p>
<p>We end our journey in the modern era, the information society<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. The best way to summarize the concept of security in modern times is that it’s controversial af and dependent on context<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. But, Wolfers’ two-part definition of “security” from 1962 is widely cited<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>:</p>
<ol>
<li>In an objective sense, security measures the absence of threats to acquired values</li>
<li>In a subjective sense, security reflects the absence of fear that such values will be attacked</li>
</ol>
<p>The term “values”, here, of course, is ambiguous and open-ended. But let’s think about what this means for the cybersecurity context.</p>
<p>A realist would say security is achieved when “the dangers posed by manifold threats, challenges, vulnerabilities and risks” in the digital realm are “avoided, prevented, managed, coped with, mitigated and adapted to” by individuals, groups, or organizations<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<p>A social constructivist would say security is achieved “once the perception and fears of security ‘threats’, ‘challenges’, ‘vulnerabilities’ and ‘risks’ are allayed and overcome.” That is, objective security is not enough; the subjective will always wield considerable influence in the cybersecurity context.</p>
<p>In my experience, tech bros really do not like the idea that emotions or subjectivity come into play in tech stuff at all. They tend to describe their emotions as “logic” and subjective experience as “facts.” We don’t have enough time to unpack all of that in this post (and if only more of them would go to therapy). But it’s a very real problem when traditional cybersecurity folk wisdom was often woven by people who think the <em>objective</em> is all that matters.</p>
<p>What’s worse about the cybersecurity status quo is that the subjective is dismissed but the objective isn’t even really measured. Again, objective security <strong>measures</strong> the absence of threats to acquired values. We do not have objective security in traditional infosec and, by that definition, it’s not even really what’s being pursued. Even when fleshing out the realist interpretation of objective security, traditional infosec mostly focuses on the “avoided” or “prevented” part rather than the managed, coped with, adapted to part<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<p>From the perspective of modern scholars, security is meant to lead to more <strong>goal-oriented</strong> behavior while insecurity leads to <strong>threat-oriented</strong> behavior. As anyone who&rsquo;s walked the RSAC vendor hall knows all too well, basically everything in cybersecurity today is about THREATS. Everything is a potential THREAT: your API, your CI/CD pipelines, your laptop, your phone, your fridge, your colleagues, your loved ones, even your own BRAIN is a THREAT because what if you make a MISTAKE and become the very INSIDER THREAT you swore to destroy!?!?!</p>
<p>Everything about the infosec status quo today reflects threat-oriented behavior, therefore implying <em>insecurity</em> rather than <em>security</em>. Traditional infosec isn’t about preserving and upholding values – like prosperity or productivity or an inclusive work environment. Traditional infosec is about preventing and avoiding threats, aiming for the impossible standard of attacks never successfully happening.</p>
<p>The cybersecurity status quo forgets the whole point of stopping the threats is to preserve certain <em>values</em>.</p>
<p>This fetishization of threats and elimination of them as an aim in itself is how we end up with infosec programs which cause so much grief and anxiety and friction for everyone else in the organization. If the infosec industry actually focused on preservation of values, then UX would probably be one of the most important skills in the discipline (but how would incumbent cybersecurity vendors milk that for cash?).</p>
<p>After all, what’s the point of protecting the cherished organizational value of productivity from potential attacks – which likely only happen sometimes rather than continuously, from an impact perspective – if you’re going to erode that value daily through security policies that seem divorced from real goals, constraints, and workflows?</p>
<p>What’s the point in protecting the organization against a potential financial loss due to attack when you’re not only spending its money on security (which could be spent elsewhere), but also slowing down its ability to grow revenue due to security procedures? For an organization with $100 million in revenue wanting to gain market share, shipping 20% fewer features per year due to friction created by the security program has more material impact short, medium, and long term than a ransomware operator demanding $1 million, $5 million, or even $10 million.</p>
<p>Waldron’s quite recent definition of the word security summarizes this all nicely and is worth repeating here:</p>
<blockquote>
<p>“… security now comprises protection against harm to one’s basic mode of life and economic values, as well as reasonable protection against fear and terror, and the presence of a positive assurance that these values will continue to be maintained in the future.”<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup></p>
</blockquote>
<p>Cybersecurity as it stands today flunks this definition. It is impossible to provide assurance that basic and economic values will be maintained in the future if you do not know what they are, which the infosec status quo does not know because they do not care because all of that is irrelevant to their noble need to sacrifice everyone’s time, energy, and money at the altar of the FUD gods to gain more budget, more headcount, more influence and they shroud this ritual in a lab coat of “rational” paranoia.</p>
<p>Before architecting a security program or allocating cybersecurity budget, we should understand the organization’s <strong>basic mode of life and economic values</strong>, including at the level of any teams who will be especially subject to security procedures (like software engineers). From there, we should aim to provide <strong>reasonable protection against fear and terror</strong> – that is, to provide <em>subjective</em> security, that <a href="/the-dawn-of-security-the-noun-securitas-part-4/">ancient-school version</a> of <em>securitas</em> which meant freedom from anxiety, fear, or care.</p>
<p>Our job as defenders <em>should</em> be to reduce the complexity of the security problem to such an extent that the rest of the organization <em>is</em> free from care about it (in fact, the systems theorist Niklas Luhmann argued that security efforts explicitly aim to reduce the complexity of the world<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>). And cybersecurity&rsquo;s job <em>should</em> be to provide positive assurance that the organization’s values (like prosperity, productivity, inclusion, whatever) can be maintained going forward.</p>
<p>But all of the above requires user research and empathy and curiosity about things beyond infosec’s viewing frustum. This modern definition of security means the organization must treat security as an <em>interactive</em> discipline, not a prescriptive one. The existence of a security program cannot be justified with “there is a risk here and it will never go away,”<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> multiplied across all identified “risks,” which thereby implies a security organization that can only grow in scope and authority.</p>
<p>If those who provide security are the rulers and the users the ruled, what security really requires is the rulers respecting the ruled and the rulers <em>earning</em> the respect of the ruled rather than extracting it. This reflects a radical departure from traditional infosec and thus there is and will be resistance from the entrenched<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>.</p>
<p>Security, in practice, is supposed to reside at the beneficial balance between two evils: absolute fear and absolute security – and absolute security, per Kant, can only be found at the cemetery.<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup></p>
<h2 id="summarizing-what-we-mean-when-we-say-security">Summarizing what we mean when we say &ldquo;security&rdquo;</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-guardian-cat.png" alt="A magical cat curled upon a planet, protecting it."></p>
<p>Security is now one of those big words like justice and freedom and liberty which serve more as symbols with fuzzy flavors of feeling &ndash; that is, as concepts &ndash; rather than as words with straightforward definitions. As we’ve seen, asking the titular question, &ldquo;When We Say Security, What Do We Mean?&rdquo; is an exploratory exercise rather than an excavation. There is no ground truth we shall hit with enough sweat and shoveling.</p>
<p>We traversed a tapestry of meanings throughout this concept album. We finish it with a rough sense of, like, &ldquo;Security is about preserving chill vibes in the presence of threats to those vibes.”</p>
<p>But more usefully (yet less concretely), we have a better mouthfeel for what “security” means. The threats aren’t the point; the poignant part is the potential <em>absence</em> of a valuable good or state of being which we very much wish to preserve.</p>
<p>An absence of threats is only worthwhile if it guarantees the presence of serenity and prosperity. In the word “security” is also a promise – that you hold onto something of value and that this value might grow in the future.</p>
<p>Perhaps most of all, this semantic journey of ours today reveals how wayward traditional cybersecurity is from these notions; it resembles a nemesis of the security concept rather than its descendant. I hope you join me on the quest to finally realize the full potential of the security concept, to grow peaches rather than <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">lemons</a><sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup>, to build a sweeter future for ourselves and all the other stakeholders in this strange system we call society.</p>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>It also brings us to one of my favorite book quotes: “In the information society, nobody thinks. We expected to banish paper, but we actually banished thought.” (said by Ian Malcolm in <em>Jurassic Park</em> by Michael Crichton).&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p><em>&ldquo;Security is ambiguous and elastic in its meaning.”</em> – Art, 1993&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Wolfers, A. (1962). <em>Discord and collaboration: essays on international politics</em>. Baltimore: Johns Hopkins Press.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Brauch, H. G. (2011). Concepts of security threats, challenges, vulnerabilities and risks. In <em>Coping with global environmental change, disasters and security</em> (pp. 61-106). Springer, Berlin, Heidelberg. <a href="https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf">https://link.springer.com/content/pdf/10.1007/978-3-642-17776-7_2.pdf</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Yet again, this is a dynamic <a href="https://kellyshortridge.com/book.html">Security Chaos Engineering (SCE)</a> is seeking to change.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Waldron, J. (2006). Safety and security. <em>Neb. L. Rev., 85</em>, 454.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Luhmann, N. (2018). <em>Trust and power</em>. John Wiley &amp; Sons.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>I have much, much more to say on this topic (inspired by this paper: Power, M. (2009). The risk management of nothing. <em>Accounting, organizations and society, 34</em>(6-7), 849-855.)&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>Surprise, surprise, <a href="https://kellyshortridge.com/book.html">Security Chaos Engineering (SCE)</a> is aligned with the vibe of earning respect.&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Arenas, J. F. M. (2008). From Homer to Hobbes and Beyond—Aspects of ’security’ in the European Tradition. In <em>Globalization and environmental challenges</em> (pp. 263-277). Springer, Berlin, Heidelberg.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>Shortridge, Kelly (2022). From Lemons to Peaches: Improving Security ROI through Security Chaos Engineering. <em>IEEE SecDev 2022, forthcoming</em>.&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>The Evolving Meaning of Security as &#39;Securitas&#39; in the Early Modern Era (Track V)</title>
            <link>https://kellyshortridge.com/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5/</link>
            <pubDate>Thu, 20 Oct 2022 08:14:21 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5/</guid>
            <description>This essay examines the evolving meaning of security as the word “securitas” in the Early Modern Era, from the Enlightenment through to where it intersects with the concepts of welfare and dignity. It is Track V of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
The State and “Security” (Securitas) Centuries passed and the relevance of the word securitas faded. Thomas Hobbes, one of the founders of modern political philosophy in the 17th century, was really the hype man for securitas to keep it from dissolving into disuse1.
Hobbes depicts the goal of securitas as the genesis and maintenance of peace, which, as we’ve already discussed, is quite unlike the cybersecurity status quo. Securitas is cultivated through alliances to make it dangerous for the remaining “all” to attack. Samuel, baron von Pufendorf2 emphasized the need for allies with a less cynical angle, arguing that an individual human needs companions to aid them in order to realize securitas (which perhaps foreshadows the concept of “social security”).
Are cybersecurity professionals today known for gathering allies? Quite the opposite. For instance, the relationship between developers and security pros seems to only be getting worse3. Traditional infosec strategy does not enforce security policy through cooperation, but through coercion.
To keep a long journey into Hobbes’ rather paranoid – and exceptionally cynical – perspective short, he ultimately proposes that a sovereign should be the one to guarantee securitas by doling out punishments for violating agreements, which requires subjugation of the ruled by the ruler.
Punishing humans who step out of line and requiring obedience to their rules – for the ruled to subjugate their other wants as secondary to the needs of the sovereign… is this not the playbook of traditional cybersecurity? It is the easiest option to pursue because eliminating or reducing hazards by design requires far more effort than demanding obedience. And if there’s one thing Homo sapiens love above all else, it is cognitive efficiency.
It is quite interesting that securitas was used as imperial propaganda during the Roman era to insist that the state was necessary and by Hobbes to insist that the state must subjugate its citizens. Does this tell us something about status quo cybersecurity? Or should we instead deem it “security imperialism”?
Security, Welfare, Dignity, and the Early Modern Era Around the same time Hobbes was slandering humanity’s nature and proposing the need for a strong-armed state (the 16th century), securitas also started to absorb a financial meaning: something pledged as a guarantee that an obligation would be fulfilled – that the debtor has no need to worry because something has been pledged against the debt.
In this colloquial meaning (which persisted for centuries), securitas is rooted in a feeling – that the lender doesn’t need to worry. And, similarly, we see a theme throughout the Enlightenment that the state should assure citizens that they do not have to fear violence, not just ensure that they are free from violence in their everyday lives. Basically, that the state has a duty to consider the feelings of citizens, not just protect them.
It is in this era and through the Industrial era that security starts to be seen as a human right, as an essential requirement for humans to enjoy all of the other rights. After all, if you’re the victim of violence (particularly a violent death) – or in a perpetual state of worry about it – it’s pretty hard to pursue liberty or prosperity.
Thus, over time, security evolved to mean a guarantee or assurance that certain things would be accessible to an entity – like “water security” reflecting the assurance that a human individual will have access to clean water on an ongoing basis4.
The temporal implication of this meaning is important: it is not just about having access to a thing (whether a physical good or an intangible value) now, but about the guarantee that you will have access to it in the future, too. Not just that you do not have to fear a violent death now, but that you do not have to fear a violent death in the multitude of possible futures on the horizon, either.
We can trace this notion through to the more recent “social security.” The term was coined on a whim because “pension” carried too much baggage to be palatable to a wide audience. So, they defined social security as a “type of security which would… promote the welfare of society as a whole.”5 (emphasis mine)
Thus, the purpose of security is to promote the welfare of a particular entity. Extending this, the purpose of information security is to promote the welfare of information, the purpose of computer security is to promote the welfare of computers, the purpose of cyber security is to promote the welfare of cyber things. While the last one may feel silly, there’s something important here: promoting welfare is not just about stopping threats.
What else is embedded in this purpose of promoting welfare? As we explored, dignity was tightly coupled with security during the Roman period and this association resurged with the concept of “human security,” which arose from the rejection of Hobbesian state-centric security.
While the term’s precise meaning is still subject to ample debate6, a foundational facet of “human security” is respect: that a critical part of ensuring a human is secure is ensuring their humanity is respected. Because dehumanizing certain populations and stripping them of dignity is one of the ways authoritarianism cultivates power; it is how a society slips into fascism.
What, then, should we make of the fact that the infosec industry sneakily strips users – whether the accountant clicking on a link to wire money, the marketing professional who downloads a PDF, the developer who makes a mistake when writing code – of their dignity?
The disrespectful sneer is palpable in the designation of “human error” as the cause of incidents. Security awareness training requires users to remember dozens of rules that ignore the realities of their work on thing-clicking machines and implies that it will be their fault if something bad happens. There is no respect for their time, attention, intelligence, or autonomy.
To quote the legendary James Mickens, “This is uncivilized and I demand more from life.”
But imagine a world in which infosec programs prioritized respect as a core value of security! Respect for users’ private data; respect for users’ time; respect for users’ cognitive and emotional energy; respect for users’ pursuit of their priorities; respect for the organization’s pursuit of its priorities as a collection of users serving other users.
In fact, the term “users” may even be part of the problem. Users are abstract, faceless, behind a screen. It makes it easier to disrespect them and resent them for not supporting our own goals. It makes it easier to not see them as people, but as exploitable resources that either we control or attackers do. It’s perhaps harder to blame a sleep-deprived caretaker of a lover or child or parent who, just trying to do their job well enough to keep their health insurance, clicks on something designed to look urgent and important.
Blaming a “user” for being so careless as to click on an obfuscated link and enter in their VPN credentials on the malicious site makes it a more antiseptic affair. It makes us feel like it’s a more just world rather than a chaotic one – like the problem is a user stepping out of line rather than complexities conspiring towards compromise. This dehumanization makes it easier to absolve the ruler and deride the ruled – these “users” – who are simply resources towards our ends, ever holy, ever noble.
Continue with Track VI: What Security Means in the Information Society
Conclusion You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
Hobbes, with the benefit of hindsight and historical documentation, viewed the Peloponnesian War as a civil war among the Greek people. It seems at the time it was not perceived that way by Athens, its allies, or its enemies. The Persians were the starkest “other” throughout much of ancient Greek history, but by the time of the Peloponnesian War, the Persian “threat” was more like a distant, hazy shadow. Thus, the “other” from Athens’ perspective was other city-states, including its own allies who they feared would betray them (which they did, although “betray” perhaps is not the best characterization of the affair). ↩︎
I promise I did not make this name up. ↩︎
Bridging the Developer and Security Divide, VMWare, Forrester Research (2021) ↩︎
UN-Water, 2013. Water security and the global water agenda—A UN-Water analytical brief. Hamilton: United Nations University. See also: https://www.unwater.org/publications/water-security-infographic/ ↩︎
Social Security: Origin of the Term at https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/ ↩︎
Christie, R., &amp; Amitav, A. (2008). Human security research: progress, limitations and new directions (pp. 11-08). Working Paper. Centre for Governance and International Affairs. http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay examines the evolving meaning of security as the word &ldquo;securitas&rdquo; in the Early Modern Era, from the Enlightenment through to where it intersects with the concepts of welfare and dignity. It is Track V of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5/">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-is-security-knight-servers.png" alt="A painting of a knight in shining armor holding freshly conquered servers. He is standing in pastel clouds."></p>
<h2 id="the-state-and-security-securitas">The State and &ldquo;Security&rdquo; (Securitas)</h2>
<p><a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">Centuries passed</a> and the relevance of the word <em>securitas</em> faded. Thomas Hobbes, one of the founders of modern political philosophy in the 17th century, was really the hype man for <em>securitas</em> to keep it from dissolving into disuse<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>Hobbes depicts the goal of <em>securitas</em> as the genesis and maintenance of peace, which, as we’ve <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">already discussed</a>, is quite unlike the cybersecurity status quo. Securitas is cultivated through alliances to make it dangerous for the remaining “all” to attack. Samuel, baron von Pufendorf<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> emphasized the need for allies with a less cynical angle, arguing that an individual human needs companions to aid them in order to realize <em>securitas</em> (which perhaps <a href="/blog/posts/security-welfare-dignity-and-the-early-modern-era-securitas-part-7/">foreshadows</a> the concept of “social security”).</p>
<p>Are cybersecurity professionals today known for gathering allies? Quite the opposite. For instance, the relationship between developers and security pros seems to only be getting worse<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. Traditional infosec strategy does not enforce security policy through cooperation, but through coercion.</p>
<p>To keep a long journey into Hobbes’ rather paranoid – and exceptionally cynical – perspective short, he ultimately proposes that a sovereign should be the one to guarantee <em>securitas</em> by doling out punishments for violating agreements, which requires subjugation of the ruled by the ruler.</p>
<p>Punishing humans who step out of line and requiring obedience to their rules – for the ruled to subjugate their other wants as secondary to the needs of the sovereign… is this not the playbook of traditional cybersecurity? It is the easiest option to pursue because eliminating or reducing hazards by design requires far more effort than demanding obedience. And if there’s one thing <em>Homo sapiens</em> love above all else, it is cognitive efficiency.</p>
<p>It is quite interesting that <em>securitas</em> was used as imperial propaganda during the Roman era to insist that the state was necessary and by Hobbes to insist that the state must subjugate its citizens. Does this tell us something about status quo cybersecurity? Or should we instead deem it &ldquo;security imperialism&rdquo;?</p>
<h2 id="security-welfare-dignity-and-the-early-modern-era">Security, Welfare, Dignity, and the Early Modern Era</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-melting-time.png" alt="A painting of a padlock clock that is exploding."></p>
<p>Around the same time Hobbes was slandering humanity’s nature and proposing the need for a strong-armed state (the 16th century), <em>securitas</em> also started to absorb a financial meaning: something pledged as a guarantee that an obligation would be fulfilled – that the debtor has no need to worry because something has been pledged against the debt.</p>
<p>In this colloquial meaning (which persisted for centuries), <em>securitas</em> is rooted in a feeling – that the lender doesn’t need to worry. And, similarly, we see a theme throughout the Enlightenment that the state should <em>assure</em> citizens that they do not have to fear violence, not just <em>ensure</em> that they are free from violence in their everyday lives. Basically, that the state has a duty to consider the feelings of citizens, not just protect them.</p>
<p>It is in this era and through the Industrial era that security starts to be seen as a human right, as an essential requirement for humans to enjoy all of the other rights. After all, if you’re the victim of violence (particularly a violent death) – or in a perpetual state of worry about it – it’s pretty hard to pursue liberty or prosperity.</p>
<p>Thus, over time, security evolved to mean a guarantee or assurance that certain things would be accessible to an entity – like “water security” reflecting the assurance that a human individual will have access to clean water on an ongoing basis<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<p>The temporal implication of this meaning is important: it is not just about having access to a thing (whether a physical good or an intangible value) now, but about the guarantee that you will have access to it in the <em>future</em>, too. Not just that you do not have to fear a violent death <em>now</em>, but that you do not have to fear a violent death in the multitude of possible futures on the horizon, either.</p>
<p>We can trace this notion through to the more recent “social security.” The term was <a href="https://www.ssa.gov/policy/docs/ssb/v55n1/v55n1p63.pdf">coined on a whim</a> because “pension” carried too much baggage to be palatable to a wide audience. So, they defined social security as a “type of security which would… <strong>promote the welfare</strong> of society as a whole.”<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> (emphasis mine)</p>
<p>Thus, the purpose of security is to promote the welfare of a particular entity. Extending this, the purpose of information security is to promote the welfare of information, the purpose of computer security is to promote the welfare of computers, the purpose of cyber security is to promote the welfare of cyber things. While the last one may feel silly, there’s something important here: <strong>promoting welfare is not just about stopping threats</strong>.</p>
<p>What else is embedded in this purpose of promoting welfare? As we explored, dignity was tightly coupled with security <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">during the Roman period</a> and this association resurged with the concept of “human security,&rdquo; which arose from the rejection of Hobbesian state-centric security.</p>
<p>While the term’s precise meaning is still subject to ample debate<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>, a foundational facet of &ldquo;human security&rdquo; is respect: that a critical part of ensuring a human is secure is ensuring their humanity is respected. Because dehumanizing certain populations and stripping them of dignity is <a href="https://www.npr.org/2011/03/29/134956180/criminals-see-their-victims-as-less-than-human">one of the ways</a> authoritarianism cultivates power; it is how a society slips into fascism.</p>
<p>What, then, should we make of the fact that the infosec industry sneakily strips users – whether the accountant clicking on a link to wire money, the marketing professional who downloads a PDF, the developer who makes a mistake when writing code – of their dignity?</p>
<p>The disrespectful sneer is palpable in the designation of “human error” as the cause of incidents. Security awareness training requires users to remember dozens of rules that ignore the realities of their work on <a href="https://twitter.com/swagitda_/status/1451203420673740800?s=20&amp;t=8yiYulSDFV_Hdb7iV6pQ-g">thing-clicking machines</a> and implies that it will be their fault if something bad happens. There is no respect for their time, attention, intelligence, or autonomy.</p>
<p>To quote <a href="https://www.usenix.org/system/files/1401_08-12_mickens.pdf">the legendary James Mickens</a>, “This is uncivilized and I demand more from life.”</p>
<p>But imagine a world in which infosec programs prioritized respect as a core value of security! Respect for users&rsquo; private data; respect for users&rsquo; time; respect for users&rsquo; cognitive and emotional energy; respect for users&rsquo; pursuit of their priorities; respect for the organization’s pursuit of its priorities as a collection of users serving other users.</p>
<p>In fact, the term “users” may even be part of the problem. Users are abstract, faceless, behind a screen. It makes it easier to disrespect them and resent them for not supporting our own goals. It makes it easier to not see them as people, but as exploitable resources that either we control or attackers do. It’s perhaps harder to blame a sleep-deprived caretaker of a lover or child or parent who, just trying to do their job well enough to keep their health insurance, clicks on something designed to look urgent and important.</p>
<p>Blaming a “user” for being so careless as to click on an obfuscated link and enter in their VPN credentials on the malicious site makes it a more antiseptic affair. It makes us feel like it’s a more just world rather than a chaotic one – like the problem is a user stepping out of line rather than complexities conspiring towards compromise. This dehumanization makes it easier to absolve the ruler and deride the ruled – these “users” – who are simply resources towards our ends, ever holy, ever noble.</p>
<blockquote>
<p>Continue with <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">Track VI: What Security Means in the Information Society</a></p>
</blockquote>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Hobbes, with the benefit of hindsight and historical documentation, viewed the Peloponnesian War as a civil war among the Greek people. It seems at the time it was not perceived that way by Athens, its allies, or its enemies. The Persians were the starkest “other” throughout much of ancient Greek history, but by the time of the Peloponnesian War, the Persian “threat” was more like a distant, hazy shadow. Thus, the “other” from Athens’ perspective was other city-states, including its own allies who they feared would betray them (which they did, although “betray” perhaps is not the best characterization of the affair).&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I promise I did not make this name up.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Bridging the Developer and Security Divide, VMWare, Forrester Research (2021)&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>UN-Water, 2013. Water security and the global water agenda—A UN-Water analytical brief. Hamilton: United Nations University. See also: <a href="https://www.unwater.org/publications/water-security-infographic/">https://www.unwater.org/publications/water-security-infographic/</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Social Security: Origin of the Term at <a href="https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/">https://socialwelfare.library.vcu.edu/social-security/social-security-origin-of-the-term/</a>&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Christie, R., &amp; Amitav, A. (2008). <em>Human security research: progress, limitations and new directions</em> (pp. 11-08). Working Paper. Centre for Governance and International Affairs. <a href="http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf">http://www.bris.ac.uk/media-library/sites/spais/migrated/documents/christiearcharya1108.pdf</a>&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>The Multifaceted Meaning of &#34;Security&#34; as &#39;Securitas&#39; in the Roman Era (Track IV)</title>
            <link>https://kellyshortridge.com/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/</link>
            <pubDate>Thu, 20 Oct 2022 07:49:34 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/</guid>
            <description>This essay examines the multi-faceted meaning of the word “security” (as the word “securitas”) during the Roman era. It is Track IV of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
While a sense of dignity is not the vibe most of us feel when clicking through mandatory cybersecurity awareness training courses, dignity and security were seen as closely coupled concepts in the Roman era.
Cicero, living during the last century B.C., noted that whoever has tranquillitate animi (a tranquil mind) and securitas will have dignitas (dignity)1. Cicero’s meaning of securitas here involves the absence of care or fear as well – and he saw this tranquility of mind as a prerequisite for an individual’s personal happiness and prestige in society. (We will talk about respect and dignity in the context of security more once we time travel to more modern definitions of the word).
A century later, Seneca framed securitas as a mindset, a lovely extension of the existing notion of security as a bundle of emotions. Inspired by Socrates, Seneca viewed securitas – and the absence of the fear emotion – as how the wise can come closer to god because only a god has no reason to fear death.
Securitas as this nearly-divine mindset quickly morphed into an association with divinity itself during the reign of Nero (the 1st century AD), specifically reflecting the divinity of the Emperor on coinage. It also started to reflect an environmental vibe rather than emotions or mindset; the surrounding world, the genesis of a subject’s freedom from care, could also be securitas, possessing a peaceful and tranquil atmosphere.
But Seneca also laid the groundwork for coupling the security of the state with security of individuals, in the specific sense of public securitas contributing to the capacity to live according to virtue. In this framing, securitas was explicitly based on mutual trust between the ruler and the ruled.
This reflects yet another semantic deviation with cybersecurity, which is generally mistrustful of any parties outside infosec, including those who should be allies (like software engineers). How else should we characterize the common refrain that any employee is a potential “insider threat”? And the cybersecurity status quo certainly does not seem to foster mutual trust by helping potential allies live well, offering minimal proof that they have the best interests of the collective in mind; if anything, they often prove the opposite.
In fact, Seneca highlights that it is a mistake to think that “a ruler is [only] safe when nothing is safe from the ruler.”2 The pox of “shadow” assets – whether shadow IT, shadow SaaS, shadow containers, shadow APIs – shows that the infosec establishment succumbs to this mistake readily. In fact, infosec’s general fetishization of control – vitalized by vendors – is a continuous realization of this mistake.
As anyone familiar with the motifs of history could predict, the subsequent rulers did not listen to Seneca’s admonition, which eventually led to an explicit rejection of hereditary rulership in the Nerva-Antonine dynasty. (Take that for what you will in the context of our current cybersecurity rulership). This led Tacitus to express the new securitas publica (public security) as the confidence of the citizens that the state will no longer threaten them. That mutual trust was the core ingredient of securitas during that phase and reflected a check on authority.
It is interesting to ponder the notion of securitas publica in the organizational context; an organization’s citizens would be confident that the enforcers of security policy can no longer disrupt their way of life or erode their pursuit of fulfillment. How many cybersecurity programs might be characterized as such today? How many programs instead feel disruptive, corrosive to productivity, and fostering anything but a “peaceful and tranquil atmosphere”?
Securitas’ confident spirit evolved into the meaning of “assurance of faith” (as opposed to doubt) during Roman Antiquity, as espoused by Tertullian and, later, Saint Augustine. This “opposition to doubt” again is at odds with one of the letters of the acronym which defines traditional cybersecurity: F.U.D. (fear, uncertainty, and doubt). As we’ve seen time and again throughout this series (and we’re only into the 2nd - 4th century A.D. here!), the earlier and variegated meanings of securitas fly in the face of traditional infosec. Traditional infosec wants to doubt everything. It takes pride in doubting everything. Assurance of faith is seen as a security sin!3
Speaking of faith, early Roman Antiquity also saw the creation of Securitas as a deity. While the mythos surrounding Securitas, the goddess, is lamentably shallow, it’s worth noting that she was the goddess of security and stability4. Given the evidence from the DORA research and metrics that stability and speed work in tandem and are complementary, this suggests that if we worship at the altar of security, then we must also worship at the altar of speed.
The Roman god of speed, Mercurius (aka Mercury), was also the god of shopkeepers, merchants, travelers, transporters of goods, thieves, and tricksters. It’s worth noting that the coupling of commerce and prosperity with security is quite common throughout its history (more on this once we get to the 16th century). Traditional cybersecurity, in contrast, often pretends like prosperity is not the primary goal or, worse, views prosperity as a foe to security.
“Why don’t companies prioritize security? Don’t they know THE THINGS can be HACKED??” Well, dear security people, what do you think allows the companies to pay for your six or seven figure salary? It is because they prioritize money that they can afford to spend it on security endeavors that do not remunerate them and often cannot even be tied to tangible success outcomes beyond “we saw these malware samples or known bad IPs this month” spoonfed from vendor dashboards in symbiotic self-perpetuation.
The infosec industry forgets that security, even in its more modern meaning, is not just about stopping threats; it’s about protecting against threats to something. In the business context, it’s about protecting against threats to prosperity. Through this lens, is it not a victory if a security program waters the seeds of revenue growth? And is the security program not a tragic failure if it chokes and cages this material growth because of a “risk” that exists only as an incorporeal counterfactual?
Between the profligate spending on ineffectual security tools and the obstructionism imposed by security programs, it’s quite possible that the threat to enterprise prosperity by traditionalist information security rivals that posed by actual attackers.
This distinction is also emphasized in the term “national security,” even as we mean it today: national security is about defending against threats to what? Liberty, prosperity, the pursuit of happiness… and we rightly dislike security measures that get in the way of these goals (often labeling them as “Security Theater”5).
Thus, we must ask, information security defends against threats to what? Largely the same things, but in businessy and computery contexts. If liberty or prosperity or the pursuit of happiness is choked out by security measures, then security is the threat in itself and the subjects are left in need of security against security6. Indeed, this is where we find ourselves with cybersecurity today.
But we are not yet done with this era. A few centuries after the deification of securitas, its meaning as “carefree” was twisted by religious leaders into an undesirable form: the state of being careless, reckless, heedless, and negligent7.
This notion of security is perhaps closest to the status quo in infosec today, which is quite careless with human (user, developer, colleague) time and attention, reckless with organizational budget, and negligent of design-based security solutions that are more reliable than attempting to control human behavior. The cybersecurity industry is heedless with its FUD-fueled zealotry, fretting about irrelevantia while pretending nothing can be done about the grey rhinos charging into our systems.
Securitas was also relevant in the context of “Roman security” and specifically meant the Roman Empire’s peaceful and orderly domination of the world. Would we characterize traditional infosec programs as peaceful and orderly today? Even diehard zealots of the cybersecurity status quo readily admit that much of infosec in practice is firefighting and disorder. A worthy question is: who benefits from this paradigm?
Alas, the Roman Empire declined, as did securitas, whose meanings were largely stolen by the word certitudo. Thus, we must go to the provincial stables of the Middle Ages to continue our semantic safari. The two meanings of securitas not consumed by certitudo included pax (peace) and religious indifference.
The latter meaning persisted (albeit without nearly as much popularity as before) through to Martin Luther in the 16th century, who labeled “die Sicheren” as the people he was fighting against – people who did not truly trust the Holy Spirit and substituted religious rituals and conspicuous, performative acts for true faith. In his time, spiritual unity was preserved “through coercion and violence… dissent from orthodoxy was outlawed, heresy was rooted out and punished by fire and sword.” Luther was excommunicated for his “errors” about the Holy Spirit, including the “error” of believing the Christian god wouldn’t want heretics burned alive.
In our era, the traditionalist Security People put quite a bit of trust in their folk wisdom and rituals, despite their unclear success. It is still counterculture to suggest that humans shouldn’t be punished for security “errors.”8 And does it not benefit the vendors and research analysts to continue spoon feeding this advice to security leaders?
Just as Martin Luther felt centuries past about religious belief, is it wrong to want to reconstruct our entire approach to cybersecurity? Just because power structures are in place, incumbents entrenched, money flowing, does not mean something new, bold, and based on real acts of security rather than displays of it – on outcomes vs. outputs – could not supplant the status quo. Fatalism is not true to our nature as humans and certainly not true to the spirit of the “security” concept as we have seen.
But there is more for us to see and for that, we must venture onward into the pre-Enlightenment period and beyond…
Continue with Track V: The Evolving Meaning of “Security” after the Roman Era (Securitas).
Conclusion You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
From De Officiis passage 69 (nice): “Vacandum autem omni est animi perturbatione, cum cupiditate et metu, tum etiam aegritudine et voluptate nimia et iracundia, ut tranquillitas animi et securitas adsit, quae affert cum constantiam, tum etiam dignitatem.” This translates roughly to: “But it is necessary to be freed from all disturbance of the mind, with desire and fear, and also from sickness and excessive pleasure and anger, so that there may be peace of mind and security, which brings with it constancy, as well as dignity.” ↩︎
Schrimm-Heins, A. (1991). Gewissheit und Sicherheit: Geschichte und Bedeutungswandel der Begriffe certitudo und securitas (Teil I). Archiv für Begriffsgeschichte, 34, 123-213. ↩︎
Of course, in SCE, we want to foster this sort of assurance through repeated experimentation – cultivating confidence through empirical evidence affirming or denying our hypotheses about the resilience of our systems. ↩︎
Upon learning this, I immediately updated my brain dictionary lookup to display Security as a gorgeous transbian goddess whose favorite language, naturally, is Rust. I am hoping for a crossover episode in which our representative enby god, Loki, woos her by donning thigh highs made of the tendons of her enemies. ↩︎
Levenson, E. (2014). The TSA Is in the Business of 'Security Theater,' Not Security. The Atlantic. ↩︎
Quis custodiet ipsos custodes? ↩︎
It was Pope Gregory I as hypeman for this interpretation and, yet again, the parallels between traditional infosec and the authoritarianism of the Catholic Church are… intriguing to say the least. ↩︎
It’s been fun watching the industry catch up to me. ~6 - 7 years ago when I was dropping spicy takes about how bullshit “gotchya” security tests are (along with a bunch of other behavioral science-informed takes), I got a ton of pushback and usually vitriol. BuT ReAL aTTaCkErS dOn’T CaRe AbOuT fEeLinGs. Many of those same people now launder those takes and pretend like they were always on board. There’s probably a post in itself about the adoption cycle of hot takes where, at the beginning, people bristle because it’s new and bold and different but eventually it’s accepted enough that it’s worth changing your beliefs and evangelizing it to look “thought leadering.” Hopefully one day I’ll be similarly vindicated with my (still wildly unpopular) take that “DevSecOps” is an unnecessary and harmful term. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay examines the multi-faceted meaning of the word &ldquo;security&rdquo; (as the word &ldquo;securitas&rdquo;) during the Roman era. It is Track IV of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-is-security-goddess-with-laptop.png" alt="A marble statue of a goddess uses a laptop. She has a spear on her back and looks erudite and divine."></p>
<p>While a sense of dignity is not the vibe most of us feel when clicking through mandatory cybersecurity awareness training courses, dignity and security were seen as closely coupled concepts in the Roman era.</p>
<p><a href="https://plato.stanford.edu/entries/cicero/">Cicero</a>, living during the last century B.C., noted that whomever has <em>tranquillitate animi</em> (a tranquil mind) and <em>securitas</em> will have <em>dignitas</em> (dignity)<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Cicero’s meaning of <em>securitas</em> here involves the absence of care or fear <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">as well</a> – and he saw this tranquility of mind as a prerequisite for an individual’s personal happiness and prestige in society. (We will talk about respect and dignity in the context of security more once <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5/">we time travel to</a> more modern definitions of the word).</p>
<p>A century later, <a href="https://plato.stanford.edu/entries/seneca/">Seneca</a> framed <em>securitas</em> as a mindset, a lovely extension of the existing notion of security as a bundle of emotions. Inspired by Socrates, Seneca viewed <em>securitas</em> – and the absence of the fear emotion – as how the wise can come closer to god because only a god has no reason to fear death.</p>
<p>Securitas as this nearly-divine mindset quickly morphed into an association with divinity itself during the reign of Nero (the 1st century AD), specifically reflecting the divinity of the Emperor <a href="https://en.numista.com/catalogue/pieces246399.html">on coinage</a>. It also started to reflect an environmental vibe rather than emotions or mindset; the surrounding world, the genesis of a subject’s freedom from care, could also be <em>securitas</em>, possessing a peaceful and tranquil atmosphere.</p>
<p>But Seneca also laid the groundwork for coupling the security of the state with security of individuals, in the specific sense of public securitas contributing to the capacity to live according to virtue. In this framing, securitas was explicitly based on mutual trust between the ruler and the ruled.</p>
<p>This reflects yet another semantic deviation with cybersecurity, which is generally mistrustful of any parties outside infosec, including those who should be allies (like software engineers). How else should we characterize the common refrain that any employee is a potential &ldquo;insider threat&rdquo;? And the cybersecurity status quo certainly does not seem to foster mutual trust by helping potential allies live well, offering minimal proof that they have the best interests of the collective in mind; if anything, they often prove the opposite.</p>
<p>In fact, Seneca highlights that it is a mistake to think that “a ruler is [only] safe when nothing is safe from the ruler.”<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> The pox of “shadow” assets – whether shadow IT, shadow SaaS, shadow containers, shadow APIs – shows that the infosec establishment succumbs to this mistake readily. In fact, infosec’s general fetishization of control – vitalized by vendors – is a continuous realization of this mistake.</p>
<p>As anyone familiar with the motifs of history could predict, the subsequent rulers did not listen to Seneca’s admonition, which eventually led to an explicit rejection of hereditary rulership in the Nerva-Antonine dynasty. (Take that for what you will in the context of our current cybersecurity rulership). This led <a href="https://en.wikipedia.org/wiki/Tacitus">Tacitus</a> to express the new <em>securitas publica</em> (public security) as the confidence of the citizens that the state will no longer threaten them. That mutual trust was the core ingredient of <em>securitas</em> during that phase and reflected a check on authority.</p>
<p>It is interesting to ponder the notion of <em>securitas publica</em> in the organizational context; an organization&rsquo;s citizens would be confident that the enforcers of security policy can no longer disrupt their way of life or erode their pursuit of fulfillment. How many cybersecurity programs might be characterized as such today? How many programs instead feel disruptive, corrosive to productivity, and fostering anything but a &ldquo;peaceful and tranquil atmosphere&rdquo;?</p>
<p>Securitas&rsquo; confident spirit evolved into the meaning of “assurance of faith” (as opposed to doubt) during Roman Antiquity, as espoused by Tertullian and, later, Saint Augustine. This “opposition to doubt” <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">again is at odds</a> with one of the letters of the acronym which defines traditional cybersecurity: F.U.D. (fear, uncertainty, and doubt). As we’ve seen time and again throughout this series (and we’re only into the 2nd - 4th century A.D. here!), the earlier and variegated meanings of <em>securitas</em> fly in the face of traditional infosec. Traditional infosec wants to doubt everything. It takes <em>pride</em> in doubting everything. Assurance of faith is seen as a security sin!<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup></p>
<p>Speaking of faith, early Roman Antiquity also saw the creation of Securitas as a deity. While the mythos surrounding Securitas, the goddess, is lamentably shallow, it’s worth noting that she was the goddess of security <em>and</em> stability<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>. Given the evidence from <a href="https://services.google.com/fh/files/misc/state-of-devops-2019.pdf">the DORA research</a> and metrics that stability and speed work in tandem and are complementary, this suggests that if we worship at the altar of security, then we must also worship at the altar of speed.</p>
<p>The Roman god of speed, <a href="https://www.britannica.com/topic/Mercury-Roman-god">Mercurius</a> (aka Mercury), was also the god of shopkeepers, merchants, travelers, transporters of goods, thieves, and tricksters. It&rsquo;s worth noting that the coupling of commerce and prosperity with security is quite common throughout its history (more on this once <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">we get to the 16th century</a>). Traditional cybersecurity, in contrast, often pretends like prosperity is not the primary goal or, worse, views prosperity as a foe to security.</p>
<p>&ldquo;Why don&rsquo;t companies prioritize security? Don&rsquo;t they know THE THINGS can be HACKED??&rdquo; Well, dear security people, what do you think allows the companies to pay for your six or seven figure salary? It is because they prioritize money that they can afford to spend it on security endeavors that do not remunerate them and often cannot even be tied to tangible success outcomes beyond &ldquo;we saw these malware samples or known bad IPs this month&rdquo; spoonfed from vendor dashboards in symbiotic self-perpetuation.</p>
<p>The infosec industry forgets that security, even in <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">its more modern meaning</a>, is not just about stopping threats; it’s about protecting against threats <em>to something</em>. In the business context, it’s about protecting against threats <em>to prosperity</em>. Through this lens, is it not a victory if a security program waters the seeds of revenue growth? And is the security program not a tragic failure if it chokes and cages this material growth because of a “risk” that exists only as an incorporeal counterfactual?</p>
<p>Between the profligate spending on ineffectual security tools and the obstructionism imposed by security programs, it’s quite possible that the threat to enterprise prosperity by traditionalist information security rivals that posed by actual attackers.</p>
<p>This distinction is also emphasized in the term “national security,” even as we mean it today: national security is about defending against threats <em>to what</em>? Liberty, prosperity, the pursuit of happiness… and we rightly dislike security measures that get in the way of these goals (often labeling them as “Security Theater”<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>).</p>
<p>Thus, we must ask, information security defends against threats <em>to what</em>? Largely the same things, but in businessy and computery contexts. If liberty or prosperity or the pursuit of happiness is choked out by security measures, then security is the threat in itself and the subjects are left in need of security against security<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>. Indeed, this is where we find ourselves with cybersecurity today.</p>
<p>But we are not yet done with this era. A few centuries after the deification of securitas, its meaning as “carefree” was twisted by religious leaders into an undesirable form: the state of being careless, reckless, heedless, and negligent<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>.</p>
<p>This notion of security is perhaps closest to the status quo in infosec today, which is quite careless with human (user, developer, colleague) time and attention, reckless with organizational budget, and negligent of design-based security solutions that are more reliable than attempting to control human behavior. The cybersecurity industry is heedless with its FUD-fueled zealotry, fretting about irrelevantia while pretending nothing can be done about <a href="https://blogs.cfainstitute.org/investor/2017/10/23/do-gray-rhinos-pose-a-greater-threat-than-black-swans/">the grey rhinos</a> charging into our systems.</p>
<p><em>Securitas</em> was also relevant in the context of “Roman security” and specifically meant the Roman Empire’s peaceful and orderly domination of the world. Would we characterize traditional infosec programs as peaceful and orderly today? Even diehard zealots of the cybersecurity status quo readily admit that much of infosec in practice is firefighting and disorder. A worthy question is: who benefits from this paradigm?</p>
<p>Alas, the Roman Empire declined, as did <em>securitas</em>, whose meanings were largely stolen by the word <em>certitudo</em>. Thus, we must go to the provincial stables of the Middle Ages to continue our semantic safari. The two meanings of securitas not consumed by certitudo included <em>pax</em> (peace) and religious indifference.</p>
<p>The latter meaning persisted (albeit without nearly as much popularity as before) through to Martin Luther in the 16th century, who labeled “die Sicheren” as the people he was fighting against &ndash; people who did not truly trust the Holy Spirit and substituted true faith for religious rituals and conspicuous, performative acts. In his time, spiritual unity <a href="https://www.nationalgeographic.com/history/article/martin-luther-freedom-protestant-reformation-500">was preserved</a> &ldquo;through coercion and violence&hellip; dissent from orthodoxy was outlawed, heresy was rooted out and punished by fire and sword.&rdquo; Luther was excommunicated for his &ldquo;errors&rdquo; about the Holy Spirit, including the &ldquo;error&rdquo; of believing the Christian god wouldn&rsquo;t want heretics burned alive.</p>
<p>In our era, the traditionalist Security People put quite a bit of trust in their folk wisdom and rituals, despite their unclear success. It is still counterculture to suggest that humans shouldn&rsquo;t be punished for security &ldquo;errors.&rdquo;<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> And does it not benefit the vendors and research analysts to continue spoon feeding this advice to security leaders?</p>
<p>Just as Martin Luther felt about religious belief centuries past, is it wrong to want to reconstruct our entire approach to cybersecurity? Just because power structures are in place, incumbents entrenched, money flowing, does not mean something new, bold, and based on real acts of security rather than displays of it &ndash; on outcomes vs. outputs &ndash; could not supplant the status quo. Fatalism is not true to our nature as humans and certainly not true to the spirit of the &ldquo;security&rdquo; concept as we have seen.</p>
<p>But there is more for us to see and for that, we must venture onward into the pre-Enlightenment period and beyond&hellip;</p>
<blockquote>
<p>Continue with <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5/">Track V: The Evolving Meaning of &ldquo;Security&rdquo; after the Roman Era (Securitas)</a>.</p>
</blockquote>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>From <a href="https://www.gutenberg.org/files/47001/47001-h/47001-h.htm"><em>De Officiis</em></a> passage 69 (nice): &ldquo;Vacandum autem omni est animi perturbatione, cum cupiditate et metu, tum etiam aegritudine et voluptate nimia[64] et iracundia, ut tranquillitas animi et securitas adsit, quae affert cum constantiam, tum etiam dignitatem.&rdquo; This translates roughly to: &ldquo;But it is necessary to be freed from all disturbance of the mind, with desire and fear, and also from sickness and excessive pleasure and anger, so that there may be peace of mind and security, which brings with it constancy, as well as dignity.&rdquo;&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Schrimm-Heins, A. (1991). Gewissheit und Sicherheit: Geschichte und Bedeutungswandel der Begriffe certitudo und securitas (Teil I). <em>Archiv für Begriffsgeschichte, 34</em>, 123-213.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Of course, in SCE, we want to foster this sort of assurance through repeated experimentation – cultivating confidence through empirical evidence affirming or denying our hypotheses about the resilience of our systems.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Upon learning this, I immediately updated my brain dictionary lookup to display Security as a gorgeous transbian goddess whose favorite language, naturally, is Rust. I am hoping for a crossover episode in which our representative enby god, Loki, woos her by donning thigh highs made of the tendons of her enemies.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Levenson, E. (2014). The TSA Is in the Business of &lsquo;Security Theater,&rsquo; Not Security. <a href="https://www.theatlantic.com/national/archive/2014/01/tsa-business-security-theater-not-security/357599/">The Atlantic</a>.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p><em>Quis custodiet ipsos custodes?</em>&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>It was Pope Gregory I who served as hypeman for this interpretation and, yet again, the parallels between traditional infosec and the authoritarianism of the Catholic Church are… intriguing to say the least.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>It&rsquo;s been fun watching the industry catch up to me. ~6&ndash;7 years ago, when I was dropping spicy takes about how bullshit &ldquo;gotcha&rdquo; security tests are (along with a bunch of other behavioral science-informed takes), I got a ton of pushback and usually vitriol. BuT ReAL aTTaCkErS dOn&rsquo;T CaRe AbOuT fEeLinGs. Many of those same people now launder those takes and pretend like they were always on board. There&rsquo;s probably a post in itself about the adoption cycle of hot takes where, at the beginning, people bristle because it&rsquo;s new and bold and different, but eventually it&rsquo;s accepted enough that it&rsquo;s worth changing your beliefs and evangelizing it to look &ldquo;thought leadering.&rdquo; Hopefully one day I&rsquo;ll be similarly vindicated with my (still wildly unpopular) take that &ldquo;DevSecOps&rdquo; is an unnecessary and harmful term.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>The Dawn of &#34;Security&#34; as a Noun: Securitas (Track III) </title>
            <link>https://kellyshortridge.com/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/</link>
            <pubDate>Thu, 20 Oct 2022 07:41:33 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/</guid>
            <description>This essay examines the transition of the word ‘security’ from securus (an adjective) to securitas (a noun) during the Roman era. It is Track III of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
The ancient association of securus with Epicureanism was not to last. Epicureanism was outlawed once the Roman Empire entered its rebellious Christianity phase because, as a philosophy, it’s quite incompatible with the idea that souls must be “saved” and that God is relevant to everyday life (Epicurus literally argued that the gods do not gaf about human affairs and do not punish or reward human behavior).
Why is this relevant to infosec? Because – as a collection of entities across vendors, consultants, thought leaders, and practitioners incentivized to increase influence on the world – information security has sanctified itself as a secular authority who can deem worthiness from on high and reward or punish according to behavior1.
Most security advice roughly goes, “if you’re interacting with a computer and what you’re doing feels convenient, you are actually doing something BAD.” We’re supposed to report when we’ve done something wrong, like a Catholic at confessional. We can gain an “exception” from the security authority like the medieval Catholic Church granting indulgences2 to partially reduce the punishment of the sin.
Naturally, only the ordained can read and interpret the sacred texts. The unwashed masses may only receive the good word. The divine wisdom is so complex, so arcane, far too difficult for anyone else to transform into action. Does this not imply that the non-security “normies” cannot be secure without the blessing of the security establishment? The human “users” must suffer in this life for their sins, for turning away from the path of security3.
In the eyes of the Infosec Church, users are weak sheep who must be told what to do and guided with a strong hand in the ways of natural security law, lest they drift wayward into wickedness. We must practice chastity in all manners digital and resist the temptation of clicking on things unless we want the whole network to drown in depravity4.
For the Security Spirit is always watching. It knows when you allow incoming connections from cloud provider IPs even though attackers also use those IPs. It knows when you copy and paste something from Stack Overflow even though it could be backdoored. It knows when you don’t VPN on the hotel WiFi, where anyone, including a big, sexy scary APT could connect to it. Wicked, wicked user! A thousand years smoldering in hellfire and pestilence for your sins! Try clicking things now once the maggots have feasted upon your flesh!
We will return to security in the context of authority later, but now we must march onward to examine how securus, the adjective, evolved its noun form: securitas. In the Roman period, securitas specifically corresponded to intense emotions. And, it’s worth noting, the freedom from care represented by securitas does not require justification based on reality.
Securitas refers to a group of emotions (the things security and software “rationalists” alike pretend they don’t have) which relate to the absence of fear and include emotions like trust and confidence5. In fact, even the more modern notion of “job security” aligns to this older meaning; it is a feeling, specifically that you don’t have to worry about losing employment. Threats to it aren’t the point, the feeling is the point.
Now, my dear mortals, can we imagine an infosec program designed to ensure the organization is fearless and free from care, an infosec program that is quiet, easy, and composed? Cybersecurity as a discipline would be concerned with ensuring the organization could remain cheerful, tranquil, and serene. Servers would frolic in a field like fecund fawns. Software engineers would release code with confidence, trusting the safety designed into their languages, tools, platforms, and environments. If employees felt fear, uncertainty, or doubt when using technical systems, the security program would be curious and design solutions to alleviate their concerns.
Imagine an infosec program with the goal of relieving the rest of the organization from anxiety about security… cybersecurity that promotes convenience and puts in the hard work of crafting design-based mitigations. Status quo infosec – manifesting as SecObs, Security Theatre, etc. – seeks quite the opposite! Traditional cybersecurity programs openly admit aiming to increase anxiety among the rest of the organization to ensure they are vigilant to threats and always looking over their proverbial shoulders for potential peril. The security people decry convenience and shame users for seeking it while simultaneously indulging in it like Scrooge McDuck in his pool of gold by relying on enforcement, behavioral control, and blame as cheap “mitigations.”
I often read security advice or policies or other prescriptions and have the sense that the authors are trying very hard to pretend that local context is irrelevant and that generalized control is possible. Convenience is often framed as the enemy. The question is: convenience for whom?
Sure, convenience is clicking on every link or adding a third-party library without a second thought. But convenience is also requiring a security tool that you will never have to use, without performing any user research with the humans who will use it in their workflows. Convenience is tapping the 10^10 Security Commandments when someone makes a mistake and blaming them in front of Congress. Are we shocked that a framework of “convenience for me, but not for thee” doesn’t seem to produce the bundle of positive emotions securitas represented?
Are there words less associated with cybersecurity than “cheerful,” “bright,” “serene,” “composed,” “quiet,” or “easy”?6 The whole business seems antithetical to those traits. Traditional infosec is all cura and no se – better deemed “cybercurity” than cyber_se_curity: a discipline of increasing concern, thought, trouble, anxiety, and grief in the organization regarding “cyber” matters. Offensive security is especially nonsensical through this etymological lens because it then means “offensive tranquility.”
Or maybe it isn’t that crazy. After all, don’t overpriced healing crystals and infosec wares have quite a bit in common?
Continue with Track IV: The Multifaceted Meaning of “Security” in the Roman Era (Securitas) .
Conclusion You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
In return, these stringent practices reinforce the status quo and uphold organizational power structures, which suits leadership just fine (and, besides, how would we expect them to know how security programs could look outside of the infosec status quo?). ↩︎
You watch the church leaders exchange influence for money, but instead of imparting the power of the Holy Spirit it’s your unfortunately unscrupulous CISO pushing some dogshit into your stack because their buddy invested in the startup and they do this back and forth and then blame the engineers or users who don’t want to interact with the dogshit for why everything is failing because nothing is your fault when you have the authority of something sacred, whether the Holy Spirit or the Security Spirit. ↩︎
I do find it interesting that when CISOs do not disclose a breach, instead laundering it through a bug bounty program, that is being “strategic” and showing security leadership, but when a software engineering team doesn’t fix a security bug immediately – no matter how contrived the exploit scenario – then they lack integrity. ↩︎
Perhaps we should be grateful there aren’t LinkedIn posts like “here’s the best way to run your sin response team #securitas #ciso #inquisition #SinSecOps.” ↩︎
Wonderly, M. (2019). On the Affect of Security. philosophical topics, 47(2), 165-182. ↩︎
Intriguingly – and rather self-servingly, although I did not expect it to be so when delving into this thought exercise – the original meanings of securus and securitas align nicely with the goals of Security Chaos Engineering (SCE). Composure is something for which SCE strives through the practice of repeated experimentation. SCE wants security to be quiet, it seeks to foster organizational confidence, to grant organizations the freedom to not fret about potential incidents because they feel so well-practiced through experimentation, strong feedback loops, and resilient design that they feel fearless about the inevitable. In fact, we explicitly encourage defenders to have fun with SCE experiments, getting infosec closer to that original connotation that security involves a feeling of being cheerful and bright. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay examines the transition of the word &lsquo;security&rsquo; from securus (an adjective) to securitas (a noun) during the Roman era. It is Track III of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-is-security-priest-praying.png" alt="A painting of a classical priest praying to a stained glass painting depicting a fancy padlock."></p>
<p><a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-3/">The ancient association</a> of <em>securus</em> with Epicureanism was not to last. Epicurianism was outlawed once the Roman Empire entered its rebellious Christianity phase because, as a philosophy, it’s quite incompatible with the idea that souls must be “saved” and that God is relevant to everyday life (Epicurus literally argued that the gods do not gaf about human affairs and do not punish or reward human behavior).</p>
<p>Why is this relevant to infosec? Because – as a collection of entities across vendors, consultants, thought leaders, and practitioners incentivized to increase influence on the world – information security has sanctified itself as a secular authority who can deem worthiness from on high and reward or punish according to behavior<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>Most security advice roughly goes, “if you’re interacting with a computer and what you’re doing feels convenient, you are actually doing something BAD.” We’re supposed to report when we’ve done something wrong, like a Catholic at confessional. We can gain an “exception” from the security authority like the medieval Catholic Church granting indulgences<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> to partially reduce the punishment of the sin.</p>
<p>Naturally, only the ordained can read and interpret the sacred texts. The unwashed masses may only <em>receive</em> the good word. The divine wisdom is so complex, so arcane, far too difficult for anyone else to transform into action. Does this not imply that the non-security “normies” cannot be secure without the blessing of the security establishment? The human “users” must suffer in this life for their sins, for turning away from the path of security<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
<p>In the eyes of the Infosec Church, users are weak sheep who must be told what to do and guided with a strong hand in the ways of natural security law, lest they drift wayward into wickedness. We must practice chastity in all manners digital and resist the temptation of clicking on things unless we want the whole network to drown in depravity<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<p>For the Security Spirit is always watching. It knows when you allow incoming connections from cloud provider IPs even though attackers <em>also</em> use those IPs. It knows when you copy and paste something from Stack Overflow even though it could be <em>backdoored</em>. It knows when you don&rsquo;t VPN on the hotel WiFi, where <em>anyone</em>, including a big, <del>sexy</del> scary APT could connect to it. Wicked, wicked user! A thousand years smoldering in hellfire and pestilence for your sins! Try clicking things now once the maggots have feasted upon your flesh!</p>
<p><img src="/blog/img/sec-etymology/what-is-security-tranquil-servers.png" alt="A pair of servers frolicking in a field of flowers."></p>
<p>We will return to security in the context of authority <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">later</a>, but now we must march onward to examine how <em>securus</em>, the adjective, evolved its noun form: <em>securitas</em>. In the Roman period, <em>securitas</em> specifically corresponded to intense emotions. And, it’s worth noting, the freedom from care represented by securitas does not require justification based on reality.</p>
<p>Securitas refers to a group of <strong>emotions</strong> (the things security and software “rationalists” alike pretend they don’t have) which relate to the absence of fear and include emotions like trust and confidence<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>. In fact, even the more modern notion of “job security” aligns to this older meaning; it is a <em>feeling</em>, specifically that you don’t have to worry about losing employment. Threats to it aren’t the point, the <em>feeling</em> is the point.</p>
<p>Now, my dear mortals, can we imagine an infosec program designed to ensure the organization is fearless and free from care, an infosec program that is quiet, easy, and composed? Cybersecurity as a discipline would be concerned with ensuring the organization could remain cheerful, tranquil, and serene. Servers would frolic in a field like fecund fawns. Software engineers would release code with confidence, trusting the safety designed into their languages, tools, platforms, and environments. If employees felt fear, uncertainty, or doubt when using technical systems, the security program would be <em>curious</em> and design solutions to alleviate their concerns.</p>
<p>Imagine an infosec program with the goal of relieving the rest of the organization from anxiety about security&hellip; cybersecurity that <em>promotes</em> convenience and puts in the hard work of crafting design-based mitigations. Status quo infosec – manifesting as <a href="https://swagitda.com/blog/posts/the-security-obstructionism-secobs-market/">SecObs</a>, <a href="https://www.youtube.com/watch?v=kiunphALNKw">Security Theatre</a>, etc. – seeks quite the opposite! Traditional cybersecurity programs openly admit aiming to <em>increase</em> anxiety among the rest of the organization to ensure they are vigilant to threats and always looking over their proverbial shoulders for potential peril. The security people decry convenience and shame users for seeking it while simultaneously indulging in it like Scrooge McDuck in his pool of gold by relying on enforcement, behavioral control, and blame as cheap &ldquo;mitigations.&rdquo;</p>
<p>I often read security advice or policies or other prescriptions and have the sense that the authors are trying very hard to pretend that local context is irrelevant and that generalized control is possible. Convenience is often framed as the enemy. The question is: convenience for whom?</p>
<p>Sure, convenience is clicking on every link or adding a third-party library without a second thought. But convenience is also requiring a security tool that you will never have to use, without performing any user research with the humans who <em>will</em> use it in their workflows. Convenience is tapping the 10^10 Security Commandments when someone makes a mistake and blaming them in front of Congress. Are we shocked that a framework of &ldquo;convenience for me, but not for thee&rdquo; doesn&rsquo;t seem to produce the bundle of positive emotions <em>securitas</em> represented?</p>
<p>Are there words <em>less</em> associated with cybersecurity than “cheerful,” “bright,” “serene,” “composed,” “quiet,” or “easy”?<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup> The whole business seems antithetical to those traits. Traditional infosec is all <em>cura</em> and no <em>se</em> – better deemed “cybercurity” than cyber<em>se</em>curity: a discipline of <em>increasing</em> concern, thought, trouble, anxiety, and grief in the organization regarding “cyber” matters. Offensive security is especially nonsensical through this etymological lens because it then means “offensive tranquility.”</p>
<p>Or maybe it isn&rsquo;t that crazy. After all, don&rsquo;t overpriced healing crystals and infosec wares have quite a bit in common?</p>
<blockquote>
<p>Continue with <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">Track IV: The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas) </a>.</p>
</blockquote>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>In return, these stringent practices reinforce the status quo and uphold organizational power structures, which suits leadership just fine (and, besides, how would we expect them to know how security programs could look outside of the infosec status quo?).&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>You watch the church leaders exchange influence for money, but instead of imparting the power of the Holy Spirit it’s your unfortunately unscrupulous CISO pushing some dogshit into your stack because their buddy invested in the startup and they do this back and forth and then blame the engineers or users who don’t want to interact with the dogshit for why everything is failing because nothing is your fault when you have the authority of something sacred, whether the Holy Spirit or the Security Spirit.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>I do find it interesting that when CISOs do not disclose a breach, instead laundering it through a bug bounty program, that is being &ldquo;strategic&rdquo; and showing security leadership, but when a software engineering team doesn&rsquo;t fix a security bug immediately &ndash; no matter how contrived the exploit scenario &ndash; then they lack integrity.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Perhaps we should be grateful there aren’t LinkedIn posts like “here’s the best way to run your sin response team #securitas #ciso #inquisition #SinSecOps.”&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Wonderly, M. (2019). On the Affect of Security. <em>philosophical topics, 47</em>(2), 165-182.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Intriguingly – and rather self-servingly, although I did not expect it to be so when delving into this thought exercise – the original meanings of <em>securus</em> and <em>securitas</em> align nicely with the goals of <a href="https://kellyshortridge.com/book.html">Security Chaos Engineering (SCE)</a>. Composure is something for which SCE strives through the practice of repeated experimentation. SCE wants security to be <em>quiet</em>, it seeks to foster organizational <em>confidence</em>, to grant organizations the freedom to not fret about potential incidents because they feel so well-practiced through experimentation, strong feedback loops, and resilient design that they feel <em>fearless</em> about the inevitable. In fact, we explicitly encourage defenders to have fun with SCE experiments, getting infosec closer to that original connotation that security involves a feeling of being <em>cheerful</em> and <em>bright</em>.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>A Platonic Dialogue on Security (Track II)</title>
            <link>https://kellyshortridge.com/blog/posts/a-platonic-dialogue-on-security-securus-part-2/</link>
            <pubDate>Thu, 20 Oct 2022 07:13:25 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/a-platonic-dialogue-on-security-securus-part-2/</guid>
            <description>This essay is a Platonic dialogue on the concept of “security” and its roots in Ancient Greece. It is Track II of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
Persons of the Dialogue1
SECRATES
THEOXORUS
Theoxorus: I am feeling secure in my knowledge today, Secrates, yet have no doubt you shall shortly annoy me with indistinct inquiries into something simple that we should enjoy simply for simplicity’s sake.
Secrates: Secure! O, my dear Theoxorus, do my ears truly witness you bringing a conversation to me on a shining platter?
Theoxorus: What do you mean, Secrates?
Secrates: I mean that you used the word “secure.” And what does “secure” mean? We should always be on the lookout for such answerless words. I know you do not wish to examine it, but now we must!
Theoxorus: Secrates, no –
Secrates: Do you truly object? Is it not your own lips which revealed the relevance of this word to your very life?
Theoxorus: I… I cannot object.
Secrates: And neither can I. We must proceed and, another time, we can discuss what your hesitance for exploration means. I observe that you grow weary of dissecting words and essences of late. And yet with what else do you fill your days? Is it not your own shame you are unwilling to confront? Is it not this regular discourse that exposes the inner self you wish to –
Theoxorus: Secrates! Let us examine “secure” now and my own soul later.
Secrates: As you wish, my dear Theoxorus, I will now proceed with this inquiry, for which I owe you many thanks. There are two words from our current civilization that serve as the inspiration for securus: ataraksia and asphaleia. Let us proceed first with ataraksia.
Theoxorus: What tongue is the word “securus”?
Secrates: Latin.
Theoxorus: “Latin”?
Secrates: Yes. I have seen into the future during a bathing ritual.
Theoxorus: Ah, Secrates, you indulge in the pleasures of the oracles!
Secrates: Believe what you wish. But let us now proceed, as you insisted. What defines ataraksia but what it is not? It is the negation of taraksia, from tarrassein, which means to trouble the mind, to agitate, to disturb, to stir.
Theoxorus: Just as your incessant inquiries do to me.
Secrates: Precisely. And if ataraksia is the negation of these verbs, may it not be said to reflect calmness, equanimity, tranquility? It is as Pyrrho said, a form of freedom from distress and concern, and as the public says in their less formal dialogue, it is the mental state soldiers must cultivate before battle. Is it a goal, a kind of goodness, that a person must pursue in their lives? The Pyrrhonists, Epicureans, and Stoics would agree with this, each for different reasons reflecting their different philosophical foundations.
Theoxorus: What do you think, Secrates?
Secrates: I know nothing, as you know well, Theoxorus. What matters for our conversation is the essence of ataraksia: a freedom from disturbances, especially of the mental variety. And, then, as I have seen in my bathtub in a very distant future, what matters is that the verbs ataraksia is meant to negate – to disturb, to agitate, to trouble, to stir – are the verbs most associated with traditional cybersecurity. Does this not suggest security then means its very opposite?
Theoxorus: To be sure.
Secrates: And does this not trouble the mind in itself?
Theoxorus: Certainly. But how can you know such contradiction abounds?
Secrates: This future world seems designed by contradiction. Their “security awareness training” exercises, such as those meant to phish humans as one lures a fish with a decoy worm, have the explicit goal of “troubling the mind” to keep persons vigilant for danger. In this future, application security tools are infamous for how they disturb software development and delivery practices – and does that not trouble the minds of software engineers? The list of security rules and policies are unending, often arbitrary – and have they not found a most effective means to agitate the subjects under their dominion?
Theoxorus: They have.
Secrates: And do we believe that such activities result in greater defenses?
Theoxorus: Certainly not, unless one believes that defense is impossible through design. This reminds me of our prior dialogue on beauty, Secrates, as what you describe of this “infosec industry” must make it beauty’s enemy.
Secrates: Are you surprised, Theoxorus, that infosec makes enemies when its goal is to disrupt tranquility? And how could infosec achieve beauty when it sees ugliness and danger in all things outside itself?
Theoxorus: Of course. But surely some interpretation by other schools of thought justifies this perversion?
Secrates: They will not. Ataraksia is seen as a strict requirement to attain the true, full happiness referred to as eudaimonia. It may surprise you, my dear Theoxorus, that the word ataraksia is associated with Epicurean philosophy.
Theoxorus: But calmness seems harmonious with Epicureanism.
Secrates: Did you wish to ask me a question, my friend?
Theoxorus: Your social skills are as crude as unfired amphorae, Secrates. So, then, what is shocking about this?
Secrates: It is shocking because it is hard to imagine a philosophy more opposed to infosec than Epicureanism, which argues that the goal of a sentient life form is to maximize pleasure and minimize pain2. Epicurus is specific in defining pleasure as the absence of pain, and therefore “ethical hedonism” is the pursuit of avoiding pain, including avoiding that which imparts pleasure near-term but pain longer-term. Without distracting ourselves by examining Epicureanism in more detail, we can say that the goal they espouse is to foster a life of tranquility. Does this cybersecurity community foster a life of such tranquility?
Theoxorus: They do not.
Secrates: I agree, my friend. Cybersecurity is not known for avoiding pain, regardless of temporal outlook. Infosec inflicts pain on others, whether by stoking fear or by making lives harder. Is it not fair to argue that infosec even inflicts pain on itself?3 Is it not cruel to cultivate obsession with vulnerabilities that kindle fear, uncertainty, and doubt when your stated aim is to eliminate them? Do we believe this fetishization of vulnerabilities and lascivious focus on blaming what they call “human error” can be called “ethical hedonism”? Or is it a societal mechanism to stifle introspection and to instead reenact shame? I regret that these questions reflect a topic for another time in the realm of psychology4, which has yet to be invented.
Theoxorus: You tease me, Secrates.
Secrates: Yes, but there is another thing, Theoxorus: what of asphaleia?
Theoxorus: You are unfailing in your pursuit, Secrates.
Secrates: Well, I suspect you might find its origin amusing. Asphaleia originates from wrestling and reflects the capacity to prevent being overthrown, being immovable and steadfast, like the throne of the gods, or like me in the presence of your lamentations and tantrums about our discourse. By roughly a century before our time5, asphaleia also came to mean the stability of the city state, to prevent being overthrown, and, if I permit myself to indulge in speculation, it could be extended to describe the stability of an organization (a kind of social entity I beheld in my bath). And, as some great scientists will prove thousands of years from our day, speed and stability of work harmonize and impart greater value together than apart. And, well, now I can put the matter as: is this infosec, that which slows down work, an enemy of asphaleia?
Theoxorus: Yes, certainly.
Secrates: Very good; and can you tell me how this might be despite asphaleia serving as the seed of the “security” concept’s own existence?
Theoxorus: I must confess, Secrates, that this “security” society of the future seems very lost.
Secrates: I dare say, my friend, that you spend too much time with me if you think it is an uncommon human desire to seek power and control, even at the expense of integrity. And can we truly argue that such desires are always conscious to the subject?
Theoxorus: Alas, they are not.
Secrates: If what you say is true, I ask you, then: what is this cybersecurity society not most of all?
When Secrates had asked his question, for a considerable time there was silence; Theoxorus furrowed his brow while meditating on this question; only Secrates made a sound when softly blowing on the delicate seedheads of a dandelion.
Theoxorus: For what did you wish as you blew, Secrates?
Secrates looked up at Theoxorus and said, with a smile: For you to answer my question.
Theoxorus: I will tell you. My feeling is that this cybersecurity society lacks curiosity.
Secrates: Exactly. The traditional infosec society is kin of Argus Panoptes; the role of enforcer grants them relevancy but not wisdom. Alas, my friend, if only they would follow the path of Daedalus instead. They feel ignorance as a sting and slight, as if ignorance was not the default condition of being alive! But there is more: how are they like the sophist?
Theoxorus: They both are paid to question without truth as their aim.
Secrates: And do they not both gain fortunes from this?
Theoxorus: They do.
Secrates: And are they not both hunters after a living prey, servants of the powerful, cousins of opportunists exploiting emotion for control?
Theoxorus: They are.
Secrates: But where the cybersecurity society differs is they seek the impossible void – the not-being of weakness – and they are willing to destroy whatever being stands in the way of this pyrrhic quest.
Continue with Track III: The Dawn of “Security” as a Noun (Securitas).
Conclusion You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
I think the closest we get to Platonic dialogues in modern times is Ao3 fanfiction #slowbuild #lightangst #friendship #humor #confessions #aroace #college #dom/sub #drama #alpha/beta/omegadynamics ↩︎
Rorty, Mary. “Lecture 10.1: Epicurus and Lucretius.” Stanford University. http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf ↩︎
Infosec as an entity truly exhibits a weird form of masochism that honestly becomes slightly uncomfortable to contemplate if we start untangling all the evidence in support of it. ↩︎
I am tempted to delve into the psychological concept of security and insecurity but I fear its revelations – despite being aimed at infosec as collective – would be interpreted as personal attacks. I will leave this one morsel for us to digest: the APA defines insecurity as a feeling of inadequacy, a lack of self-confidence, an inability to cope combined by general uncertainty about one’s goals, abilities, or relationships with others. To what degree does this notion of psychological insecurity accurately characterize the traditional infosec industry – its folk wisdom, zeitgeist, program priorities, prescribed procedures, policies, and so forth? ↩︎
“Our time” here is referring to the time of Socrates (the inspiration for “Secrates”), which was in the 4th century B.C.E. Therefore, the rise of asphaleia meaning the stability of the city state was around the 5th century B.C.E. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay is a Platonic dialogue on the concept of &ldquo;security&rdquo; and its roots in Ancient Greece. It is Track II of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-is-security-socrates-cat.png" alt="A painting of a cat in Socratic robes in an ancient greek temple."></p>
<p><strong>Persons of the Dialogue<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></strong></p>
<p>SECRATES</p>
<p>THEOXORUS</p>
<hr>
<p>Theoxorus: I am feeling secure in my knowledge today, Secrates, yet have no doubt you shall shortly annoy me with indistinct inquiries into something simple that we should enjoy simply for simplicity’s sake.</p>
<p>Secrates: Secure! O, my dear Theoxorus, do my ears truly witness you bringing a conversation to me on a shining platter?</p>
<p>Theoxorus: What do you mean, Secrates?</p>
<p>Secrates: I mean that you used the word “secure.” And what does “secure” mean? We should always be on the lookout for such answerless words. I know you do not wish to examine it, but now we must!</p>
<p>Theoxorus: Secrates, no –</p>
<p>Secrates: Do you truly object? Is it not your own lips which revealed the relevance of this word to your very life?</p>
<p>Theoxorus: I… I cannot object.</p>
<p>Secrates: And neither can I. We must proceed and, another time, we can discuss what your hesitance for exploration means. I observe that you grow weary of dissecting words and essences of late. And yet with what else do you fill your days? Is it not your own shame you are unwilling to confront? Is it not this regular discourse that exposes the inner self you wish to –</p>
<p>Theoxorus: Secrates! Let us examine “secure” now and my own soul later.</p>
<p>Secrates: As you wish, my dear Theoxorus, I will now proceed with this inquiry, for which I owe you many thanks. There are two words from our current civilization that serve as the inspiration for securus: <em>ataraksia</em> and <em>asphaleia</em>. Let us proceed first with <em>ataraksia</em>.</p>
<p>Theoxorus: What tongue is the word “securus”?</p>
<p>Secrates: Latin.</p>
<p>Theoxorus: “Latin”?</p>
<p>Secrates: Yes. I have seen into the future during a bathing ritual.</p>
<p>Theoxorus: Ah, Secrates, you indulge in the pleasures of the oracles!</p>
<p>Secrates: Believe what you wish. But let us now proceed, as you insisted. What defines ataraksia but what it is <em>not</em>? It is the negation of <em>taraksia</em>, from <em>tarrassein</em>, which means to trouble the mind, to agitate, to disturb, to stir.</p>
<p>Theoxorus: Just as your incessant inquiries do to me.</p>
<p>Secrates: Precisely. And if ataraksia is the negation of these verbs, may it not be said to reflect calmness, equanimity, tranquility? It is as Pyrrho said, a form of freedom from distress and concern, and as the public says in their less formal dialogue, it is the mental state soldiers must cultivate before battle. Is it a goal, a kind of goodness, that a person must pursue in their lives? The Pyrrhonists, Epicureans, and Stoics would agree with this, each for different reasons reflecting their different philosophical foundations.</p>
<p>Theoxorus: What do you think, Secrates?</p>
<p>Secrates: I know nothing, as you know well, Theoxorus. What matters for our conversation is the essence of ataraksia: a freedom from disturbances, especially of the mental variety. And, then, as I have seen in my bathtub in a very distant future, what matters is that the verbs ataraksia is meant to <em>negate</em> – to disturb, to agitate, to trouble, to stir – are the verbs most associated with traditional cybersecurity. Does this not suggest security then means its very opposite?</p>
<p>Theoxorus: To be sure.</p>
<p>Secrates: And does this not trouble the mind in itself?</p>
<p>Theoxorus: Certainly. But how can you know such contradiction abounds?</p>
<p>Secrates: This future world seems designed by contradiction. Their “security awareness training” exercises, such as those meant to phish humans as one lures a fish with a decoy worm, have the explicit goal of “troubling the mind” to keep persons vigilant for danger. In this future, application security tools are infamous for how they disturb software development and delivery practices – and does that not trouble the minds of software engineers? The list of security rules and policies are unending, often arbitrary – and have they not found a most effective means to agitate the subjects under their dominion?</p>
<p>Theoxorus: They have.</p>
<p>Secrates: And do we believe that such activities result in greater defenses?</p>
<p>Theoxorus: Certainly not, unless one believes that defense is impossible through design. This reminds me of our prior dialogue on beauty, Secrates, as what you describe of this “infosec industry” must make it beauty’s enemy.</p>
<p>Secrates: Are you surprised, Theoxorus, that infosec makes enemies when its goal is to disrupt tranquility? And how could infosec achieve beauty when it sees ugliness and danger in all things outside itself?</p>
<p>Theoxorus: Of course. But surely some interpretation by other schools of thought justifies this perversion?</p>
<p>Secrates: They will not. Ataraksia is seen as a strict requirement to attain the true, full happiness referred to as <em>eudaimonia</em>. It may surprise you, my dear Theoxorus, that the word ataraksia is associated with Epicurean philosophy.</p>
<p>Theoxorus: But calmness seems harmonious with Epicureanism.</p>
<p>Secrates: Did you wish to ask me a question, my friend?</p>
<p>Theoxorus: Your social skills are as crude as unfired amphorae, Secrates. So, then, what is shocking about this?</p>
<p>Secrates: It is shocking because it is hard to imagine a philosophy more opposed to infosec than Epicureanism, which argues that the goal of a sentient life form is to maximize pleasure and minimize pain<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. Epicurus is specific in defining pleasure as the absence of pain, and therefore “ethical hedonism” is the pursuit of avoiding pain, including avoiding that which imparts pleasure near-term but pain longer-term. Without distracting ourselves by examining Epicureanism in more detail, we can say that the goal they espouse is to foster a life of <em>tranquility</em>. Does this cybersecurity community foster a life of such tranquility?</p>
<p>Theoxorus: They do not.</p>
<p>Secrates: I agree, my friend. Cybersecurity is not known for avoiding pain, regardless of temporal outlook. Infosec inflicts pain on others, whether by stoking fear or by making lives harder. Is it not fair to argue that infosec even inflicts pain on itself?<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> Is it not cruel to cultivate obsession with vulnerabilities that kindle fear, uncertainty, and doubt when your stated aim is to eliminate them? Do we believe this fetishization of vulnerabilities and lascivious focus on blaming what they call “human error” can be called “ethical hedonism”? Or is it a societal mechanism to stifle introspection and to instead reenact shame? I regret that these questions reflect a topic for another time in the realm of psychology<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, which has yet to be invented.</p>
<p>Theoxorus: You tease me, Secrates.</p>
<p>Secrates: Yes, but there is another thing, Theoxorus: what of <em>asphaleia</em>?</p>
<p>Theoxorus: You are unfailing in your pursuit, Secrates.</p>
<p>Secrates: Well, I suspect you might find its origin amusing. Asphaleia originates from wrestling and reflects the capacity to prevent being overthrown, being immovable and steadfast, like the throne of the gods, or like me in the presence of your lamentations and tantrums about our discourse. By roughly a century before our time<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>, asphaleia also came to mean the stability of the city state, to prevent being overthrown, and, if I permit myself to indulge in speculation, it could be extended to describe the stability of an organization (a kind of social entity I beheld in my bath). And, as some great scientists <a href="https://itrevolution.com/accelerate-book/">will prove</a> thousands of years from our day, speed and stability of work harmonize and impart greater value together than apart. And, well, now I can put the matter as: is this infosec, that which slows down work, an enemy of asphaleia?</p>
<p>Theoxorus: Yes, certainly.</p>
<p>Secrates: Very good; and can you tell me how this might be despite asphaleia serving as the seed of the “security” concept’s own existence?</p>
<p>Theoxorus: I must confess, Secrates, that this “security” society of the future seems very lost.</p>
<p>Secrates: I dare say, my friend, that you spend too much time with me if you think it is an uncommon human desire to seek power and control, even at the expense of integrity. And can we truly argue that such desires are always conscious to the subject?</p>
<p>Theoxorus: Alas, they are not.</p>
<p>Secrates: If what you say is true, I ask you, then: what is this cybersecurity society <em>not</em> most of all?</p>
<p>When Secrates had asked his question, for a considerable time there was silence; Theoxorus furrowed his brow while meditating on this question; only Secrates made a sound when softly blowing on the delicate seedheads of a dandelion.</p>
<p>Theoxorus: For what did you wish as you blew, Secrates?</p>
<p>Secrates looked up at Theoxorus and said, with a smile: For you to answer my question.</p>
<p>Theoxorus: I will tell you. My feeling is that this cybersecurity society lacks <em>curiosity</em>.</p>
<p>Secrates: Exactly. The traditional infosec society is kin of <a href="https://ethics.org.au/ethics-explainer-panopticon-what-is-the-panopticon-effect/"><em>Argus Panoptes</em></a>; the role of enforcer grants them relevancy but not wisdom. Alas, my friend, if only they would follow the path of Daedalus instead. They feel ignorance as a sting and slight, as if ignorance was not the default condition of being alive! But there is more: how are they like <a href="https://www.gutenberg.org/files/1735/1735-h/1735-h.htm">the sophist</a>?</p>
<p>Theoxorus: They both are paid to question without truth as their aim.</p>
<p>Secrates: And do they not both gain fortunes from this?</p>
<p>Theoxorus: They do.</p>
<p>Secrates: And are they not both hunters after a living prey, servants of the powerful, cousins of opportunists exploiting emotion for control?</p>
<p>Theoxorus: They are.</p>
<p>Secrates: But where the cybersecurity society differs is they seek the impossible void – the <em>not-being</em> of weakness – and they are willing to destroy whatever <em>being</em> stands in the way of this pyrrhic quest.</p>
<blockquote>
<p>Continue with <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">Track III: The Dawn of “Security” as a Noun (Securitas)</a>.</p>
</blockquote>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>I think the closest we get to Platonic dialogues in modern times is Ao3 fanfiction #slowbuild #lightangst #friendship #humor #confessions #aroace #college #dom/sub #drama #alpha/beta/omegadynamics&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Rorty, Mary. “Lecture 10.1: Epicurus and Lucretius.” Stanford University. <a href="http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf">http://web.stanford.edu/~mvr2j/ucsccourse/Lecture10.1.pdf</a>&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Infosec as an entity truly exhibits a weird form of masochism that honestly becomes slightly uncomfortable to contemplate if we start untangling all the evidence in support of it.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>I am tempted to delve into the psychological concept of security and insecurity but I fear its revelations &ndash; despite being aimed at infosec as a <em>collective</em> &ndash; would be interpreted as personal attacks. I will leave this one morsel for us to digest: the APA defines insecurity as a feeling of inadequacy, a lack of self-confidence, and an inability to cope, combined with general uncertainty about one’s goals, abilities, or relationships with others. To what degree does this notion of psychological insecurity accurately characterize the traditional infosec industry &ndash; its folk wisdom, zeitgeist, program priorities, prescribed procedures, policies, and so forth?&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>&ldquo;Our time” here is referring to the time of Socrates (the inspiration for &ldquo;Secrates&rdquo;), which was in the 4th century B.C.E. Therefore, the rise of asphaleia meaning the stability of the city state was around the 5th century B.C.E.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>When We Say &#34;Security,&#34; What Do We Mean? (Track I)</title>
            <link>https://kellyshortridge.com/blog/posts/what-do-we-mean-when-we-say-security-part-1/</link>
            <pubDate>Thu, 20 Oct 2022 06:50:56 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/what-do-we-mean-when-we-say-security-part-1/</guid>
            <description>This essay serves as an introduction to our semantic safari, starting with the Latin word ‘securus.’ It is Track I of a longer concept album exploring what we mean when we use the word ‘security’ (and what it should mean).
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
Introduction
We say the word “security” a lot in tech. Whether we refer to “cybersecurity” or “information security” (or “infosec”), how often do we pause to question what we mean when we say the word security itself?
In general, arguing that words should mean things in infosec is like fighting against the gravity of a supermassive black hole1. Unfortunately for me, I will die on this hill until my inevitable spaghettification. From what I understand from cybersecurity journalism, this persistence makes me a “sophisticated” attacker, perhaps even one of those fabled advanced persistent threats (APTs). My cyberweapon of choice is words. My action on target? Destabilizing the industry’s dereliction of meaning. My APT group name will be SOCRATIC KITTEN.
So, true to the spirit of being advanced and persistent and threatening, I write this to challenge and, with any luck2, overthrow incumbent notions of the security concept while nurturing new notions that inspire and uplift. Like any self-respecting former author of angsty teen poetry, I chose as my medium a “literary concept album” featuring six essays as “tracks,” all exploring the title’s provocative question: When We Say Security, What Do We Mean?
The security concept, like other words-as-concepts (happiness, courage, justice) is an idea, per Plato, perceivable only by the eyes of the mind3. To borrow from Hannah Arendt4, the word “security” is “something like a frozen thought that thinking must unfreeze whenever it wants to find out the original meaning.” To thaw it, we must meditate upon it, seep ourselves in it, let the currents of concept cleanse our preconceptions.
“But Kelly,” you sigh, “This is cybersecurity awareness month, shouldn’t you be posting about something practical?” Like, what, the growing ATTACK SURFACE due to HUMAN ERROR? Much like the spherical cow, such metaphors5 simplify our understanding of the world so it feels comforting and calculable as escapism from the real world, which is very messy. The security people, in no shortage of irony, choose convenience in this trade-off. Humans will interact with systems and do very natural human things and the security people will clutch their pearls and gasp, “But why would they do such a thing?!” Maybe spherical SBOMs will solve security so we can all finally stop being aware of it.
Because requiring awareness is part of the problem. We have a word for when humans are excellent at being aware of threats in their environment: hypervigilance. It is not good when humans are hypervigilant! It means the human is likely traumatized and their nervous system is dysregulated. Unfortunately, the security people want us all to be hypervigilant because nothing says accountability for a problem like telling the potential victims they’re responsible for it.
Imagine, if you will, a parallel SKYSECURITY AWARENESS MONTH where we tell people to be careful whenever walking outside because a piano might fall on their head or that they should be scrutinizing the clouds – their trajectory, color, fullness, and other patterns – to figure out whether they are safe or not. In real life, we have meteorologists and can open an app that tells us whether we probably need an umbrella or sunglasses or to just stay inside to stay safe. Sometimes people will still go outside because that hurricane isn’t going to Instagram itself but there have been and will always be fools and our strategy in a problem domain should not be focused on the minority of fools who will not be persuaded by facts or logic and will gladly jump over guardrails while wondering why they were there in the first place.
My point is that the security people have collapsed upon a meaning of “security” as a concept that is not serving them or users or organizations or society particularly well. The cybersecurity industry’s meaning of “security” is a distortion, in many cases the exact opposite, of what the word means and has meant throughout its long, storied history. That history has much to teach us, which is why it is, in fact, entirely practical and pertinent to explore it on our upcoming semantic safari.
Thus, this concept album will illuminate why our current notion of (cyber)security, the concept, is worth re-evaluating through the lens of what “security” has meant over time. True to Socratic tradition6, these essays will not provide a definitive answer. Our path will be circuitous, but we will perhaps absorb a superior sense of what this ineffable concept of “security” is through ouroboric osmosis by the end of our journey7. We may not produce a definition of “security” by the end (although we will try) but, having pondered the meaning of “security,” we might be able to make our own attempts at it better.
To begin our journey, we must time travel.
The Curious Nature of Securus
It’s a few hundred years before the common era in Rome. You’re chilling in a thermae with your bae admiring the intricate stone mosaic of a rather fetching deity beneath your feet as you feel your pores cleansing in the luscious steam.
Your beloved anaticula8 looks at you and smiles, “If only all our days together could be securus like this,” they say. You smile back and nod in blissful agreement, watching them rest their eyes with a satisfied sigh.
For the securus life is one without care. Securus starts with sē, the Latin prefix for “without,” which combines with cūra, the noun for care, concern, thought, trouble, solicitude, anxiety, grief, and sorrow.
Hence, securus is to enjoy peace of mind (securo animo esse9). Securus is the absence of concern, the absence of a troubled mind. The opposite of securus was sollicitus — the restlessness arising from being filled with fear, apprehension, anxiety, alarm.
Hurtling forward in time to 2022 CE, we can observe that the typical traditionalist infosec program is closer to sollicitus than securus. Fear, uncertainty, and doubt (FUD) pervade – and perhaps define – the industry. FUD are the foundational emotions industry vendors, journalists, and less scrupulous thought leaders exploit for fortune and fame.
Our world is increasingly software and internet but there is a powerful industry that tells us that we should be scared to use software and internet, that it is desirable for us to be uncertain at all times when using software and internet, that we should doubt our perceptions at all times because what if the 13,371,337th link you click or line of code you write in your lifetime causes CYBERGEDDON. All of this anti-securus rhetoric is supposedly in our best interests.
FUD pervades cybersecurity to such an extent that we take for granted that these emotions need not define the security we seek to cultivate. Could FUD not instead be seen as the explicit enemy of security?
Thus, a worthy thought experiment is: how might infosec programs look if they actually pursued the state of being securus? How would a security program designed to ensure the organization is “without care or concern or anxiety” appear? How would cybersecurity strategy differ if the goal outcome was for users – whether end consumers, software engineers, or employees – to feel care-free and untroubled?
We will explore those questions as we continue our journey. Our next stop is even further back in history, inspecting the inspiration for the word securus in Ancient Greece.
Continue with Track II: A Platonic Dialogue on Security (Securus).
Conclusion
You can find all the essays in the “When We Say Security, What Do We Mean?” concept album here:
Track I. When We Say “Security,” What Do We Mean?
Track II. A Platonic Dialogue on Security (Securus)
Track III. The Dawn of Security as a Noun (Securitas)
Track IV. The Multifaceted Meaning of “Security” in the Roman Era (Securitas)
Track V. The Evolving Meaning of “Security” in the Early Modern Era (Securitas)
Track VI. What Does Security Mean in the Information Society?
The parallels between black hole firewalls and the infosec kind must remain a discussion for another time (if time isn’t just an abstraction). ↩︎
I performed a secret, arcane ritual to win the favor of the eldritch ones towards my quest of making the word security mean something better. But the gods are capricious and so the ultimate fate of this endeavor remains unknown. ↩︎
As Hannah Arendt described of such words, “when we try to define them, they get slippery; when we talk about their meaning, nothing stays put anymore, everything begins to move.” (From The Life of the Mind) ↩︎
Arendt, H. (1981). The life of the mind: The groundbreaking investigation on how we think. HMH. (In the section “Thinking” / “The answer of Socrates”) ↩︎
“Surface” is a spatial metaphor. Yet again, there is much to unpack with the language we use to talk about cybersecurity but, to keep with the metaphor, time marches onward… ↩︎
“The truth is rather that I infect them also with the perplexity I feel myself.” – Socrates ↩︎
This may sound like a journey up one’s ass, but it’s better than being a cookie-cutter infosec ass, I assure you. ↩︎
A term of endearment in ancient Rome; its literal translation is “duckling.” ↩︎
Carl Meißner; Henry William Auden (1894) Latin Phrase-Book, London: Macmillan and Co. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This essay serves as an introduction to our semantic safari, starting with the Latin word &lsquo;securus.&rsquo; It is Track I of a longer concept album exploring what we mean when we use the word &lsquo;security&rsquo; (and what it <em>should</em> mean).</p>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-is-security-socratic-kitten.png" alt="A painting of fluffy cat hacking a computer in ancient Greece."></p>
<h1 id="introduction">Introduction</h1>
<p>We say the word &ldquo;security&rdquo; a lot in tech. Whether we refer to &ldquo;cybersecurity&rdquo; or &ldquo;information security&rdquo; (or &ldquo;infosec&rdquo;), how often do we pause to question what we mean when we say the word <em>security</em> itself?</p>
<p>In general, arguing that words should mean things in infosec is like fighting against the gravity of a supermassive black hole<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Unfortunately for me, I will die on this hill until my inevitable spaghettification. From what I understand from cybersecurity journalism, this persistence makes me a &ldquo;sophisticated&rdquo; attacker, perhaps even one of those fabled advanced persistent threats (APTs). My cyberweapon of choice is words. My action on target? Destabilizing the industry’s dereliction of meaning. My APT group name will be SOCRATIC KITTEN.</p>
<p>So, true to the spirit of being advanced and persistent and threatening, I write this to challenge and, with any luck<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, overthrow incumbent notions of the <em>security</em> concept while nurturing new notions that inspire and uplift. Like any self-respecting former author of angsty teen poetry, I chose as my medium a &ldquo;literary concept album&rdquo; featuring six essays as &ldquo;tracks,&rdquo; all exploring the title&rsquo;s provocative question: <em>When We Say Security, What Do We Mean?</em></p>
<p>The security concept, like other words-as-concepts (happiness, courage, justice) is an idea, per Plato, perceivable only by the eyes of the mind<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. To borrow from Hannah Arendt<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, the word &ldquo;security&rdquo; is &ldquo;something like a frozen thought that thinking must unfreeze whenever it wants to find out the original meaning.&rdquo; To thaw it, we must meditate upon it, seep ourselves in it, let the currents of concept cleanse our preconceptions.</p>
<p>&ldquo;But Kelly,&rdquo; you sigh, &ldquo;This is cybersecurity awareness month, shouldn&rsquo;t you be posting about something practical?&rdquo; Like, what, the growing ATTACK SURFACE due to HUMAN ERROR? Much like the spherical cow, such metaphors<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> simplify our understanding of the world so it feels comforting and calculable as escapism from the real world, which is very messy. The security people, in no shortage of irony, choose convenience in this trade-off. Humans will interact with systems and do very natural human things and the security people will clutch their pearls and gasp, &ldquo;But why would they do such a thing?!&rdquo; Maybe spherical SBOMs will solve security so we can all finally stop being aware of it.</p>
<p>Because requiring awareness is part of the problem. We have a word for when humans are excellent at being aware of threats in their environment: <a href="https://en.wikipedia.org/wiki/Hypervigilance">hypervigilance</a>. It is not good when humans are hypervigilant! It means the human is likely traumatized and their nervous system is dysregulated. Unfortunately, the security people want us all to be hypervigilant because nothing says accountability for a problem like telling the potential victims they&rsquo;re responsible for it.</p>
<p>Imagine, if you will, a parallel SKYSECURITY AWARENESS MONTH where we tell people to be careful whenever walking outside because a piano might fall on their head or that they should be scrutinizing the clouds &ndash; their trajectory, color, fullness, and other patterns &ndash; to figure out whether they are safe or not. In real life, we have meteorologists and can open an app that tells us whether we probably need an umbrella or sunglasses or to just stay inside to stay safe. Sometimes people will still go outside because that hurricane isn&rsquo;t going to Instagram itself but there have been and will always be fools and our strategy in a problem domain should not be focused on the minority of fools who will not be persuaded by facts or logic and will gladly jump over guardrails while wondering why they were there in the first place.</p>
<p>My point is that the security people have collapsed upon a meaning of &ldquo;security&rdquo; as a concept that is not serving them or users or organizations or society particularly well. The cybersecurity industry&rsquo;s meaning of &ldquo;security&rdquo; is a distortion, in many cases the exact <em>opposite</em>, of what the word means and has meant throughout its long, storied history. That history has much to teach us, which is why it is, in fact, entirely practical and pertinent to explore it on our upcoming semantic safari.</p>
<p>Thus, this concept album will illuminate why our current notion of (cyber)security, the concept, is worth re-evaluating through the lens of what &ldquo;security&rdquo; has meant over time. True to Socratic tradition<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>, these essays will not provide a definitive answer. Our path will be circuitous, but we will perhaps absorb a superior sense of what this ineffable concept of “security” is through ouroboric osmosis by the end of our journey<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>. We may not produce a definition of &ldquo;security&rdquo; by the end (although we will try) but, having pondered the meaning of &ldquo;security,&rdquo; we might be able to make our own attempts at it better.</p>
<p>To begin our journey, we must time travel.</p>
<h2 id="the-curious-nature-of-securus">The Curious Nature of Securus</h2>
<p><img src="/blog/img/sec-etymology/what-is-security-roman-mosaic.png" alt="A mosaic of a Roman goddess gazing at a giant key."></p>
<p>It’s a few hundred years before the common era in Rome. You’re chilling in a thermae with your bae admiring the intricate stone mosaic of a rather fetching deity beneath your feet as you feel your pores cleansing in the luscious steam.</p>
<p>Your beloved <em>anaticula</em><sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> looks at you and smiles, “If only all our days together could be <em>securus</em> like this,” they say. You smile back and nod in blissful agreement, watching them rest their eyes with a satisfied sigh.</p>
<p>For the <em>securus</em> life is one without care. <em>Securus</em> starts with <em>sē</em>, the Latin prefix for “without,” which combines with <em>cūra</em>, the noun for care, concern, thought, trouble, solicitude, anxiety, grief, and sorrow.</p>
<p>Hence, <em>securus</em> is to enjoy peace of mind (<em>securo animo esse</em><sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>). Securus is the absence of concern, the absence of a troubled mind. The opposite of securus was <a href="http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0059:entry=sollicitus"><em>sollicitus</em></a> — the restlessness arising from being filled with fear, apprehension, anxiety, alarm.</p>
<p>Hurtling forward in time to 2022 CE, we can observe that the typical traditionalist infosec program is closer to <em>sollicitus</em> than <em>securus</em>. Fear, uncertainty, and doubt (FUD) pervade – and <a href="https://www.csoonline.com/article/3302849/why-security-pros-are-addicted-to-fud-and-what-you-can-do-about-it.html">perhaps define</a> – the industry. FUD are the foundational emotions industry vendors, journalists, and less scrupulous thought leaders exploit for fortune and fame.</p>
<p>Our world is increasingly software and internet but there is a powerful industry that tells us that we should be scared to use software and internet, that it is <em>desirable</em> for us to be uncertain at all times when using software and internet, that we should doubt our perceptions at all times because <em>what if</em> the 13,371,337th link you click or line of code you write in your lifetime causes CYBERGEDDON. All of this anti-<em>securus</em> rhetoric is supposedly in our best interests.</p>
<p>FUD pervades cybersecurity to such an extent that we take for granted that these emotions need not define the security we seek to cultivate. Could FUD not instead be seen as the explicit <em>enemy</em> of security?</p>
<p>Thus, a worthy thought experiment is: how might infosec programs look if they actually pursued the state of being <em>securus</em>? How would a security program designed to ensure the organization is “without care or concern or anxiety” appear? How would cybersecurity strategy differ if the goal outcome was for users &ndash; whether end consumers, software engineers, or employees &ndash; to feel care-free and untroubled?</p>
<p>We will explore those questions as we continue our journey. Our next stop is even further back in history, inspecting the inspiration for the word <em>securus</em> in Ancient Greece.</p>
<blockquote>
<p>Continue with <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2">Track II: A Platonic Dialogue on Security (Securus)</a>.</p>
</blockquote>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>You can find all the essays in the <em>&ldquo;When We Say Security, What Do We Mean?&rdquo;</em> concept album here:</p>
<ul>
<li>
<p>Track I. <a href="/blog/posts/what-do-we-mean-when-we-say-security-part-1/">When We Say &ldquo;Security,&rdquo; What Do We Mean?</a></p>
</li>
<li>
<p>Track II. <a href="/blog/posts/a-platonic-dialogue-on-security-securus-part-2/">A Platonic Dialogue on Security (Securus)</a></p>
</li>
<li>
<p>Track III. <a href="/blog/posts/the-dawn-of-security-the-noun-securitas-part-3/">The Dawn of Security as a Noun (Securitas)</a></p>
</li>
<li>
<p>Track IV. <a href="/blog/posts/the-multifaceted-meaning-of-security-in-the-roman-era-securitas-part-4/">The Multifaceted Meaning of &ldquo;Security&rdquo; in the Roman Era (Securitas)</a></p>
</li>
<li>
<p>Track V. <a href="/blog/posts/the-evolving-meaning-of-security-as-securitas-in-the-early-modern-era-part-5">The Evolving Meaning of &ldquo;Security&rdquo; in the Early Modern Era (Securitas)</a></p>
</li>
<li>
<p>Track VI. <a href="/blog/posts/what-security-means-in-the-information-society-part-6/">What Does Security Mean in the Information Society?</a></p>
</li>
</ul>
<p><img src="/blog/img/sec-etymology/what-does-security-mean-cover-art.png" alt="The cover art for the album with the title: What do we mean when we say security? It depicts an island floating in a sky filled with rainbow and pastel clouds in shades of periwinkle and violet. The island itself is a paradise, a blend of fantasy and cyberpunk aesthetics. Lush trees blanket its ledges while waterfalls cascade from each ledge, frozen in time and resembling a beautiful digital glitch. It is meant to reflect the utopia we might achieve with our systems &amp;ndash; our own islands &amp;ndash; if we embraced the original meanings of the word security."></p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The parallels between black hole firewalls and the infosec kind must remain a discussion for another time (if time isn&rsquo;t just an abstraction).&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I performed a secret, arcane ritual to win the favor of the eldritch ones towards my quest of making the word <em>security</em> mean something better. But the gods are capricious and so the ultimate fate of this endeavor remains unknown.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>As Hannah Arendt described of such words, &ldquo;when we try to define them, they get slippery; when we talk about their meaning, nothing stays put anymore, everything begins to move.&rdquo; (From <em>The Life of the Mind</em>)&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Arendt, H. (1981). <em>The life of the mind: The groundbreaking investigation on how we think.</em> HMH. (In the section &ldquo;Thinking&rdquo; / &ldquo;The answer of Socrates&rdquo;)&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>&ldquo;Surface&rdquo; is a spatial metaphor. Yet again, there is much to unpack with the language we use to talk about cybersecurity but, to keep with the metaphor, time marches onward&hellip;&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>“The truth is rather that I infect them also with the perplexity I feel myself.” – Socrates&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>This may sound like a journey up one’s ass, but it’s better than being a cookie-cutter infosec ass, I assure you.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>A <a href="http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0059:entry=anaticula">term of endearment</a> in ancient Rome; its literal translation is &ldquo;duckling.&rdquo;&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>Carl Meißner; Henry William Auden (1894) <a href="https://www.gutenberg.org/files/50280/50280-h/50280-h.htm"><em>Latin Phrase-Book</em></a>, London: Macmillan and Co.&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Securing the Supply Chain of Nothing</title>
            <link>https://kellyshortridge.com/blog/posts/securing-the-supply-chain-of-nothing/</link>
            <pubDate>Thu, 15 Sep 2022 13:57:43 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/securing-the-supply-chain-of-nothing/</guid>
            <description>The Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Office of the Director of National Intelligence (ODNI) recently released a document entitled, “Securing the Software Supply Chain – Recommended Practices Guide for Developers.” I hoped the document might shed light on practical, perhaps even novel, ways for the private sector to increase systems resilience to supply chain attacks. The authors are respected authorities, and the topic is salient to the public.
Instead, the document’s guidance contains a mixture of impractical, confusing, confused, and even dangerous recommendations.
There is a collective ignis fatuus in the information security community that it is the “job” of an organization’s employees to prioritize security above all else. This fallacy holds us back from achieving better defense outcomes. Unfortunately, “Securing the Software Supply Chain” calcifies this falsehood.
Therefore, I have written this rebuttal in the form of ten objections:
Slowing down software delivery does not help security, it hurts it
There is an underlying paradox (the “Thinking Machine” paradox)
Most enterprises have no chance of implementing this
Most enterprises will not want to implement this
Security vendor bolt-on solutions are overemphasized
Relevant security and infrastructure innovation is omitted
Inaccuracies about software delivery practices and basic terminology
Confusing, contradictory messages from the authoring agencies
Omission of second order effects and underlying causal factors
Dangerous absolution of security vendors’ own software security
Objection #1: Slowing down software delivery does not help security, it hurts it
Empirical evidence from the enterprise software engineering world makes it clear that speed benefits not only business outcomes (like release velocity) but also security outcomes (like time to recovery, time to fix security issues). “Securing the Software Supply Chain” instead effectively recommends against speed, arguing for slowness and additional hurdles and frictions throughout the software delivery process.
The guidance is for a centralized1 security team to impose significant restrictions on software engineering activity, thereby enforcing security as the top priority. For instance, IDEs (and plug-ins) are treated as threats to be “preapproved, validated, and scanned for vulnerabilities before being incorporated onto any developer machine.”
Speed, and even reliability, are treated as justified casualties for security’s sake. Given the mission of intelligence agencies, chiefly national security, they rightly view security obstruction as worthwhile. Their goal is not to grow market share or their customer base or improve profit margins by shipping more software, faster. This means their perspective simply does not translate to the private sector, unless we, as a society, decide a corporation must serve national security rather than its shareholders.
The result from this guidance – if the recommendations were implemented by, say, the Fortune 500 – is that software struggles to get built. Software engineers would quit (and have quit over lesser inconveniences in the private sector). Whatever the enterprise’s budget, with these recommendations they will build software slowly, and it will be of poorer quality – which is a direct detriment to the goal of higher quality, more secure software.
Objection #2: There is an underlying paradox (the “Thinking Machine Paradox”)
There is a logical inconsistency – perhaps even a paradox – presented: developers cannot be trusted, but automation is discouraged in favor of manual processes run by humans. If there is paranoia about “unintended flaws” introduced by developers, then why give them more opportunity to introduce them by recommending manual processes?
If there is concern that their user credentials will beget malefaction and mistakes, then why discourage service accounts2, which are inherently decoupled from specific user identities? Without service accounts, if the human user leaves or is fired, then their credentials are still tied to whatever activity – which is a dangerous game to play.
The guide goes to great lengths to paint the picture of developer as an insider threat – whether purposefully malicious or “poorly trained”— but then explicitly espouses manual build and release processes. So, individual software engineers are simultaneously not to be trusted but also, they should be the ones to perform and approve software delivery activities?
Human brains are not great at executing tasks the same way every time. Computers are much better at that! And yet while the guide warns about the dangers of human mistakes, they want us to rely on those humans for repeatable tasks rather than automating them3.
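To make the contrast concrete, here is a minimal sketch (mine, not the guide’s) of the kind of repeatable release step a machine performs identically every run: hash the artifact and record its provenance. The artifact path and manifest name are hypothetical placeholders.

```python
# Minimal sketch (not from the guide): a repeatable release step a machine performs
# identically every run, versus a human rubber-stamping it by hand.
# The artifact path and manifest name are hypothetical placeholders.
import datetime
import hashlib
import json
import pathlib

def record_artifact(path: str, manifest_path: str = "build-manifest.json") -> str:
    """Hash a build artifact and append its provenance to a manifest."""
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    entry = {
        "artifact": path,
        "sha256": digest,
        "built_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    manifest = pathlib.Path(manifest_path)
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append(entry)
    manifest.write_text(json.dumps(records, indent=2))
    return digest

if __name__ == "__main__":
    print(record_artifact("dist/app.tar.gz"))
```

The point is not this particular script; it is that a machine produces the same record, the same way, every time, which is precisely what the guide asks fallible humans to do manually.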
Objection #3: Most enterprises have no chance of implementing this
Most enterprises have no chance of implementing the recommendations in “Securing the Software Supply Chain.” It is allegedly meant as a reference guide for developers, but it is really a reference guide for no one other than an intelligence agency with the same goals and resources as the NSA.
This is a criticism often made about Google: they propose advice that works for them and their titanic budget and pool of talent without considering the constraints and tradeoffs faced by “mere mortals.” CISA, the NSA, and the ODNI have fallen into a similar trap.
There are numerous recommendations that are impractical for enterprises, and not just the absurd one of disallowing internet access in “development”4 and “engineering”5 systems. For instance, if enterprises documented everything that a piece of software performs, it would be equivalent to writing it twice (and the documentation would inevitably differ from the source code); enterprises would likely be better off with no documentation at all and just reading the source code.
As another example, they also recommend that “Fuzzing should be performed on all software components during development.” If fuzzing all software components during development was a strict requirement, enterprises might never ship software again6. They also recommend “Using a testing approach…to ensure that repaired vulnerabilities are truly fixed against all possible compromises.” If enterprise software engineering teams knew the graph of possible compromises, why would we need all of this guidance?
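For a sense of scale, below is the most naive possible sketch of fuzzing a single function, using nothing but the standard library and nowhere near real coverage-guided fuzzing; parse_config is a hypothetical stand-in for a component under test. Even this toy version implies time and triage budget per component, which is the point.

```python
# Naive illustrative sketch: throw random bytes at one parser and record crashes.
# Real fuzzing (coverage-guided, with corpora and sanitizers) is far more involved;
# parse_config is a hypothetical function standing in for any component under test.
import random

def parse_config(blob: bytes) -> dict:
    # Hypothetical component under test.
    text = blob.decode("utf-8")  # may raise UnicodeDecodeError
    pairs = [line.split("=", 1) for line in text.splitlines() if "=" in line]
    return {k.strip(): v.strip() for k, v in pairs}

def fuzz(iterations: int = 10_000) -> list:
    crashes = []
    for i in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(0, 512)))
        try:
            parse_config(blob)
        except Exception as exc:  # deliberately catch everything and log it
            crashes.append((i, type(exc).__name__))
    return crashes

if __name__ == "__main__":
    print(f"{len(fuzz())} crashing inputs found")
```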
Objection #4: Most enterprises will not want to implement this
Most enterprises will also not want to implement this. The recommendations do not scale7, are not aligned to enterprise software delivery priorities8, and erode9 productivity10.
Intelligence agencies, whose mission is national security, have no choice but to implement a paradigm as described because the alternative is simply not developing software. Enterprises in the private sector, whose mission is making money, do not face the same constraint; the constraint they face is, instead, the number of resources at their disposal and, to support their mission, it is generally better to spend on revenue or profit-generating activities than those that obstruct or erode it (like the recommendations in this document).
The top priority among enterprise customers is usually not security, either, regardless of whether the enterprise is B2B or B2C. Security will only be the top concern among customers if the primary customer is an intelligence agency, which describes very few enterprises, especially within the Fortune 500/1000.
Through this lens, advice like “If possible, the engineering network should have no direct access to the internet”11 and “If possible, development systems should not have access to the Internet”12 suggests a mental model of for-profit enterprises that significantly differs from reality. Similarly, decrying “ease of development” features is not a reasonable position in the reality of enterprise software development; more constructive would be to suggest that such features must be considered as part of the system design and protected behind appropriate access controls and audit logging.
Some recommendations would even be considered “bad practice”13 by software engineers from a reliability14 and resilience15 perspective, and therefore rejected. The guide suggests using “a temporary SSH key” to allow admin access into production systems, whereas Ops engineers and SREs often prefer immutable infrastructure specifically to disallow the use of SSH, which helps with reliability (and cuts off a tempting mechanism for attackers).
Objection #5: Security vendor bolt-on solutions are overemphasized
There is a pervasive overemphasis on vendor tooling with near-complete omission of non-vendor solutions. Specifically, the document touts a laundry list of bolt-on commercial solutions by incumbent security vendors – IAST, DAST, RASP, SCA, EDR, IDS, AV, SIEM, DLP, “machine learning-based protection”16 and more – often repeatedly singing their praises17. Rather than providing constructive, sane advice on automating processes and making them repeatable and maintainable, they recommend a smorgasbord of bolt-on tools.
The guidance is explicit in discouraging open-source software as well. There is also little about security through design, such as the D.I.E. triad. Unfortunately, this gives the impression that security vendors successfully lobbied for their inclusion in the document, which calls into question its neutrality18.
In fact, by promoting these commercial security tools, they promote dangerous advice like manual release processes. For instance, they recommend: “Before shipping the package to customers, the developer should perform binary composition analysis to verify the contents of the package.” But developers should not be performing package releases themselves if the desired outcome is high quality, secure software (see Objection #2).
As another example, they recommend that “SAST and DAST should be performed on all code prior to check-in and for each release…” But how is an enterprise supposed to perform DAST/SAST on code before it’s checked in? It is an ad absurdum of “shift left.” It is only one step leftward away from running the tools earlier inside the developer’s brains as they brainstorm what code to write.
But there is no mention of the need to integrate DAST/SAST into developer workflows and ensure that speed is still upheld. In enterprise software security, the success of a security bolt-on solution depends either on usability or coercion.
If you make the secure way the easy way and ensure that software engineers are not required to deviate unduly from their workflows to interact with the security solution, then it is quite likely to beget better security outcomes (or at least not worse productivity outcomes). The alternative is to mandate that a solution must be used; if it is unusable, then it will be bypassed to ensure work still gets done, unless there is sufficient coercion to enforce its use.
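Purely as an illustrative sketch of that trade-off (not something the guide prescribes): run the scan automatically in the pipeline, but fail the build only on findings above an agreed severity, so the check stays fast and low-noise enough that engineers keep it in their workflow. The results file name and JSON schema below are hypothetical stand-ins for whatever tool is in use.

```python
# Minimal sketch, assuming a hypothetical scanner that emits JSON findings with a
# "severity" field; the file name and schema are placeholders, not any specific tool.
# The scan runs automatically; the gate only blocks on findings above a threshold.
import json
import sys

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings_path: str, fail_at: str = "high") -> int:
    """Return a nonzero exit code only if findings meet or exceed the threshold."""
    with open(findings_path) as handle:
        findings = json.load(handle)
    blocking = [
        f for f in findings
        if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= SEVERITY_ORDER[fail_at]
    ]
    for finding in blocking:
        print(f"BLOCKING: {finding.get('rule')} in {finding.get('file')} ({finding.get('severity')})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate("scan-results.json"))
```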
When the bolt-on recommendations are combined with their advice elsewhere to disconnect development systems from the internet, it begs the question: How do you use the SCA tools, among others, if the dev systems are not connected to the internet?
“Basic hygiene” is arguably better than any of these bolt-on options, including things like:
Knowing what dependencies are present
Being purposeful about what goes into your software
Choosing a tech stack you can understand and maintain
Choosing tools that are appropriate for the software you are building19
What does “being purposeful about what goes into your software” mean? It means:
Including dependencies as part of design, rather than implementation
Being cautious about adding dependencies
Knowing why you’ve included other libraries and services
Understanding the packaging concerns of your dependencies; for example, if you include a dependency, what does it cost to feed the beast in terms of operationalizing and shipping it? If you take on a dependency in another team’s service, what are their SLOs? Do you trust them? Is it a stable system? Or have there been problems? If it’s an open-source library, is it maintained by one person? A team? A company with a support contract you can purchase? A company with a support contract you already have in place? Can you see its updating and patching history?
In essence, the answer is not: “never take on dependencies”20; the answer is to understand what your dependencies are (see the sketch after this list).
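As a minimal sketch of the “know what dependencies are present” habit, assuming a Python project with a requirements.txt (the file name and format are incidental): enumerate what you declare and flag anything unpinned or unexplained.

```python
# Minimal sketch, assuming a Python project with a requirements.txt; the file name
# and format are incidental. The habit it illustrates: enumerate what you depend on
# and notice anything you cannot explain or have not pinned.
import pathlib
import re

def list_dependencies(path: str = "requirements.txt"):
    deps = []
    for raw in pathlib.Path(path).read_text().splitlines():
        line = raw.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        match = re.match(r"([A-Za-z0-9._\[\]-]+)\s*(==)?", line)
        if match:
            deps.append({"name": match.group(1), "pinned": match.group(2) == "=="})
    return deps

if __name__ == "__main__":
    for dep in list_dependencies():
        note = "" if dep["pinned"] else "  (unpinned: do you know why it is here?)"
        print(dep["name"] + note)
```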
Overall, the recommended mitigations are all about outputs21 rather than outcomes22, about security theater rather than actually securing anything and verifying it with evidence. It is clear the guidance does not consider organizations who ship more than once per quarter; in fact, they seem to view fast software delivery as undesirable. They are mistaken (as per Objection #1).
Objection #6: Relevant security and infrastructure innovation is omitted
As mentioned in Objection #5, the document ignores a wealth of innovation in the private sector on software security over the past decade. The guidance seems to take the stance that the NSA/CISA/ODNI way is better than the private sector’s status quo, which is a false dichotomy. In fact, companies like SolarWinds have admitted their slowness was a detriment and have since modernized their practices — including the use of open source — to achieve improved security.
The fact that none of those innovations were included suggests an insular examination of the problem at hand, which erodes the intellectual neutrality of the document. I’ve listed the kinds of security and infrastructure innovations I would expect to see in a reference guide like this below (I have no doubt there are many others worthy of inclusion, too):
Netflix’s Wall-E framework
Segment.io on threat modeling
Segment.io on time-based access
The WebAssembly (Wasm) component model
The Bytecode Alliance on Security and Correctness in Wasmtime
IDE support for cloud-based static analyses
Software provenance via CI/CD
Automated patching via IaC
GitHub’s Dependabot tool
Resilient security log pipelines with security chaos engineering
Facebook on 2FA for internal infra authentication
AWS API Key canarytokens
Google’s configuration distribution worked example (among many others in the same book)
The guidance would also be strengthened by considering survey data from the private sector, such as the recent Golang survey (which has a section dedicated to Security, including fuzzing) and GitHub’s Octoverse Security report from 2020 (their Dependabot tool is also an arguably glaring omission).
As another example, the document cautions against “allowing an adversary to use backdoors within the remote environment to access and modify source code within an otherwise protected organization infrastructure.” This dismisses the last 30&#43; years of source code management (SCM) systems.
You cannot just up and change source code without people noticing in modern software delivery. Even Subversion is built on the idea of tracking deltas; if you change some code, there exists a record of what was changed, by whom, and when23. Most development workflows configure the SCM system to require peer approval before merging changes to important branches. It is worrisome if the authoring agencies are unaware of this given it has been the status quo for decades; if their vendors do not exhibit these practices, then this suggests a serious problem with federal procurement.
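To illustrate how visible those deltas already are, here is a small sketch (using only the standard git CLI via Python’s subprocess) that surfaces who changed a path, what they changed, and when; illustrative only, not something the guide recommends.

```python
# Illustrative sketch, not a recommendation from the guide: modern SCM already records
# what changed, who changed it, and when. This simply surfaces that record for a path.
import subprocess

def recent_changes(path: str = ".", count: int = 20) -> str:
    """Return the last `count` commits touching `path`: hash, author, date, subject."""
    result = subprocess.run(
        ["git", "log", f"-n{count}", "--format=%h %an %ad %s", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(recent_changes())
```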
Objection #7: Inaccuracies about software delivery practices and basic terminology
There are consistent misunderstandings24 and inaccuracies throughout the document about modern software delivery practices, including misunderstandings and inaccuracies about basic terminology, such as CI/CD25, orchestration26, nightly builds27, code repositories28, and more. This is part of a larger cultural problem in information security of trying to regulate29 what they do not understand.
The reference guide does not seem to understand who does what in enterprise engineering teams. Product and engineering management do not define security practices and procedures today, because their priorities are not security but instead whatever success metrics correspond to the business logic under their purview (usually related to revenue and customer adoption). The characterization of QA is particularly perplexing and suggests a significantly different QA discipline exists in intelligence agencies than in the private sector.
If enterprises were to follow the advice that “software development group managers should ensure that the development process prevents the intentional and unintentional injection of… design flaws into production code,” then software might never be released again in the private sector. The guide also seems to believe that software engineers are unfamiliar with the concept of feature creep30, as if that is not the unfortunate default in product engineering today.
There are also simply perplexing statements. For instance, “An exception to [adjusting the system themselves] would be when an administrator has to fix the mean time to failure in production.” (p. 30, emphasis mine). I do not know what they mean by this and struggle to guess what they might mean. This, and other confusing31 passages32 tarnish the intellectual credibility of the guide.
The guidance is inaccurate even in areas that should be the authors’ expertise, like cryptography. “The cryptographic signature validates that the software has not been tampered with and was authored by the software supplier.” No, it doesn’t. It validates that the supplier applied for that key at some point; it does not say much about the security properties of the software in question. For instance, there is an anti-cheat driver for the game Genshin Impact whose signing key still hasn’t been revoked despite the driver being vulnerable to a privilege escalation bug.
Finally, in what is more of an inaccuracy about enterprise security33 than software delivery, the authors refer to “zero-day vulnerabilities” as an “easy compromise vector.”34 This may be true for the Equation Group and their black budget funding but is not true from the perspective of most cybercriminals and, therefore, enterprises.
Enterprises are still wrestling with leaked credentials, social engineering, and misconfigurations (see Objection #8). For a Fortune 500, it is a victory if attackers are forced to use 0day to compromise you; you’ve exhausted all their other options, which should be considered a rightful accomplishment relative to the security status quo.
Objection #8: Confusing, contradictory messages from the authoring agencies
It is confusing to see that the same agency (CISA) who emphasized the need for repeatability and scalability last year only mentions the importance of repeatability once. And that another authoring agency (NSA) stated in their report from January 2020 that supply chain vulnerabilities require a high level of sophistication to exploit while having low prevalence in the wild.
However, this guidance makes it seem like software engineering activities should be designed with supply chain vulnerabilities as the highest weighted factor35. Misconfigurations are given scarce mention, despite the NSA citing them as most prevalent and most likely to be exploited.
In the aforementioned guide by CISA, they highlight some of the benefits of cloud computing and automation. But in this reference guide, they indicate that the cloud is more dangerous than on-premises systems36, without explaining why it might be so.
Much of the language is confusing, too, and never receives clarification. What is a “high risk” defect? What does “cybersecurity hygiene” mean in the context of development environments? They insist that “security defects should be fixed,” but what defines a defect? That it exists? That it’s exploitable? That it’s likely a target for criminals? Nation states? It remains unclear.37
Objection #9: Omission of second order effects and underlying causal factors
There is no consideration of second order effects or underlying causal factors, such as organizational incentives and production pressures.38 (There is one mention, quite in passing, that there might be various constraints the developer faces39). This ignores the rich literature around resilience in complex systems as well as behavioral science more generally. In fact, the recommendations are arguably the opposite of adaptive capacity and reflect rigidity, which is seen as a trap in resilience science40.
If enterprises are to attempt implementing these recommendations (which I very much discourage them from doing), then guidance on how to achieve them despite vertiginous social constraints is essential. Much of what is outlined will be irrelevant if incentives are not changed.
There is also no discussion of user experience (UX) considerations41 when implementing these suggestions, which, perhaps more than what is implemented, will influence security outcomes the most; an unusable workflow will be bypassed. Because the guidance ignores the complexities of software delivery activities and accepts convenient explanations for developer behavior, the resulting advice is often uncivilized42.
There is a missed opportunity to discuss making the secure way the easy way, the importance of minimizing friction in developer workflows, the need to integrate with tooling like IDEs, and so forth. This absence results in a guide that feels both shallow and hollow.
Objection #10: Dangerous absolution of security vendors’ own software security
There is ample discussion of the need to scan software being built or leveraged from third parties, and how a long list of commercial security tools can support this endeavor (see Objection #5). Curiously, there is no mention of the need for those tools to be scanned, such as performing SCA for your EDR or anti-virus bolt-on solution.
Security tools usually require privileged access in your systems, whether operating as a kernel module or requiring read/write (R/W) access to critical systems. Combined with the long history of critical vulnerabilities in security tools, this is a rather troubling omission. This is all aside from the numerous software reliability problems engendered by endpoint security tools, which also fail to receive mention. Engineering teams notoriously despise endpoint security on servers for valid reasons: kernel panics, CPUs tanking, and other weird failures that are exasperating to debug.
Given the paranoia about IDEs and other developer tools, it feels strange that security vendors receive absolution and nary a caveat regarding their code security. Where is the recommendation for a proctology exam on the security tools they recommend? Do all of their recommendations for countless hurdles in the way of code deployment apply to patches, too? They are software changes, after all. If they do not apply, then that is a massive loophole for attackers to exploit.
Another surprise is that “anti-virus” is listed as being capable of detecting things like DLL injections43; recent empirical data suggests otherwise even for EDR, which is considered leagues ahead of antivirus solutions. Again, it gives the impression that the guidance is biased in favor of security vendors rather than a more complete set of strategic options. The decimating performance impact these tools can have on the underlying systems, such as kernel panics on production hosts, also fails to receive mention.
Conclusion
If you read this guidance and implement it in your enterprise organization, you will end up securing the supply chain of nothing. Your engineering organization will dismiss you as an ideologue, a fool, or both; else, you will have constrained software engineering activities to such a degree that it makes more sense to abandon delivering software at all. In fairness, the most secure software is the software never written. But, in a world whose progress is now closely coupled with software delivery, we can do better than this document suggests.
Thanks to Coda Hale, James Turnbull, Dr. Nicole Forsgren, Ryan Petrich, and Zac Duncan for their feedback when drafting this post.
They admit that “a developer has inside knowledge of the code base and is often an expert in their respective coding language, environment, and style,” and… this is why we should have the security team as the gatekeeper for determining systems safety and quality? The security team that does not have knowledge of the code base and is not an expert in the coding language, environment, and style… ↩︎
“The use of service accounts, like non-human privileged accounts used to run automated processes, should be minimized and carefully audited.” (p. 30). ↩︎
To wit, they note that developers “must take great care to protect its private key used for the digital signature or hashes…” while a more straightforward recommendation would be to use service accounts and automate the build process. An automated system has traceability into the version, where it was built, reproducible builds, and then the build server signs it. You don’t need an engineer there to personally bless the release. To achieve real security outcomes, you must bless the tree of things that went into that asset, not bless that a person was there to rubber stamp the release. ↩︎
“If possible, development systems should not have access to the Internet and may be deployed as local virtual systems with host-only access.” (p. 20) ↩︎
“If possible, the engineering network should have no direct access to the Internet.” (p. 30) ↩︎
Few organizations outside of the Linux kernel and browsers like Chrome and Firefox are even thinking about using fuzz testing at scale. That isn’t to say that more software engineering teams couldn’t benefit from fuzz testing. But the reality of enterprise dev today is that fuzzing is so far up the hierarchy of needs that it’s like a 16th century peasant dreaming of writing their own gilded manuscript while respectable working conditions and other trappings of basic human dignity are prerequisites. ↩︎
“Performing automatic static and dynamic vulnerability scanning on all components of the system for each committed change is key to the security and integrity of the build process and to validate the readiness of product release.” (p. 16). In a world where these scans take seconds, it might be feasible. In our real world where these scans can take hours and require navigating to a new window, it is counterproductive to suggest them. ↩︎
“All notifications should be evaluated with respect to the relationship to the product and prioritized based on risk assessment.” It will be prioritized based on business impact, which is often negligible. The guide repeatedly presents a world in which security and “risk” are the top priority without shining any real lens on social or organizational factors or providing recommendations on how to influence organizational dynamics. How do you adjust incentives to get “risk” to be treated as a higher priority? &lt;crickets&gt; What are all the reasons to buy a software composition analysis tool? &lt;many pages of text ensue&gt;. It’s challenging to take the guidance seriously as a result. ↩︎
Developers want and need internet access to do their jobs because the internet is, by design, a fabulous reference tool and building software requires knowing many things, so many things, in fact, that it’s difficult to hold it all in your head and it’s much cheaper, effort-wise, to google “how python list” for the thousandth time. ↩︎
As part of “hardening the development environment” they recommend operating a VPN and a jump host behind which a shared corporate development environment limps along with endpoint security software installed. This creates many layers of delay and extra latency. ↩︎
The guide doesn’t explicitly forbid torrenting Stack Overflow and hosting your own offline copy of it… or printing Stack Overflow out as a gargantuan tome, an intimidating new edition to your office coffee room. Need help with Flask? Just turn to page 1,337,420 to read each Stack Overflow post about it. I think I am mostly kidding but also suspect that software engineers at real enterprises might resort to that if faced with the prospect of “no internet in engineering systems.” ↩︎
And if you shouldn’t use the internet in the development system, then how is this advice even conveyed to developers? A bearded prophet descends the mountain bearing a stone tablet inscribed with the recommendation, “Don’t use the internet,” and then software is absolved of all its sins, I suppose. ↩︎
For example, they recommend the ability to “raise privileges” for just one function and then lower it; this indicates that the software can have its privileges raised at any point in time, including after attack. Instead, they should advocate for privilege separation and sandboxing. ↩︎
They reference “fail-safe defaults,” and while it remains unclear what they mean, my guess is they mean “fail closed,” which software engineers of the infra and ops variety would also dislike. This is common appsec advice, usually given with the recommendation to sprinkle some redundancy onto the system as a “quick fix” for any reliability issues that surface. Nothing is more infosec than treating software engineers like they are negligent fools for overlooking an obscure vulnerability while believing that delivering reliable software is trivial. ↩︎
Metapod, the Pokémon, has a move called “harden” that does nothing (basically a no-op). This feels fitting for the section in question entitled “Harden the Build Environment.” ↩︎
Why specify the specific tools and not the goal outcomes? Anyone who’s wrestled with ML-based security tools knows that they gobble up needed resources on dev laptops while also needing lots of babying in the form of tuning and maintenance and triaging. But such concerns remain unmentioned in the reference guide. ↩︎
For instance, the document repeatedly recommends software composition analysis (SCA), another bolt-on solution: “Software Composition Analysis should be performed on all third-party software.” And, in the context of missing SBOMs: “the developer team will derive the required information to describe the third-party component within the SBOM for example by using software composition analysis tools.” (p. 27) ↩︎
The scent of grift wafts throughout the whole guide, unfortunately. To defend against an “exploited signing server,” they recommend controls like a security information and event management system (SIEM) and data loss prevention (DLP). They could have recommended a centralized system to consume and analyze logs, but they instead recommend a specific type of tool. Additionally, where is the mention of transparency logs? Transparency logs make sure you aren’t signing builds with a production key. The signing server accepts the signing requests and logs them. If it isn’t in the logs, then the signing server is potentially pwned. Reproducible builds are more important than signing. With the former, you can go back and see what went in the deployment if something goes wrong. ↩︎
For instance, if you’re building software for microcontrollers, maybe don’t pick Java as your language of choice because it has a runtime you need to ship with your application that’s somewhat large and would take up precious ROM space. If your software needs to be portable, then choose a language that is portable. ↩︎
You can take the view that you’ll be the steward of an open-source project within your organization, but it’s an option that is rather expensive. It’s a free thing, but it’s made with different ideas, constraints, and assumptions that don’t necessarily match what you need. How do you incentivize this curiosity about dependencies? The best way is to hold people accountable to their SLOs. You can also leverage the “shadow of the future” – the tendency for humans to be on better behavior when they believe they’ll have repeated interactions with their counterparty in the future. ↩︎
“The Secure SDLC identifies the exact procedures and policies that are used to ensure that secure development practices are implemented and artifacts are created to attest to the adherence of the adopted Secure SDLC plan with respect to the implementation and distribution of the product.” The point of the “Secure SDLC” therefore appears to be attesting that you’re adhering to the plan, rather than actually securing the product. If IaC tools are especially valuable for anything, it is creating artifacts; it allows software engineers to point to a tangible thing and say: “look at this thing I did” rather than their actions disappearing when they close their terminal or enough new commands send it out of view. ↩︎
“Successful completion [of security training] should be tracked for all engineers.” (p. 10) They at least make it explicit that they only care about outputs rather than outcomes. All that matters is completion, not whether software quality has actually improved. ↩︎
Usually there is a “why” too, but standards for commit messages vary. ↩︎
“…developers assigned to create the threat models use component-level designs for completeness.” I don’t know what this means, and I am considered an expert in threat modeling. ↩︎
“[The release branch] should be protected with reviewers and continuous integration/continuous delivery (CI/CD) tests with SAST enforced at the SCM.” This sentence alone suggests that the guide is confused on what CI/CD is, both in terms of how it works and why enterprises have adopted the practice. Further evidence is found in this passage: “Continuous integration/continuous deployment. In this case, the software is installed in a subset of the cloud for immediate feedback and A/B testing.” (p. 28) An enterprise could do that, and CI/CD might help, but that isn’t what CI/CD is in itself. The bullet immediately after it says, “Building software as part of a rapid iterative cycle, such as using an Agile Development Method.” This is frustrating. ↩︎
One section uses the phrase “the CD orchestrator” and it’s quite unclear what they mean. Do they mean GitHub Actions? But that isn’t “the only entity that manages all the environments.” Do they mean Terraform? I don’t think they mean Kubernetes, not only because k8s wouldn’t be responsible for version control (we must keep waiting for git push to k8s), but also because they have another sentence in this section about Kubernetes: “Also ensure that the administrator tools are not in the public environment such as a Kubernetes cluster.” I do not understand what they are trying to say and, overall, this section strongly suggests that there are pervasive misunderstandings about what CD, orchestrators, and public environments are. ↩︎
“The nightly build process also acts as a good performance matrix to assess the developer capabilities and comply with security and development processes.” (p. 17) That is not what nightlies are for. Nightlies are to integrate merged changes from the day before and run tests that are too expensive or lengthy to run on every build (at least in any sane workflow). It is not an opportunity to bully developers. ↩︎
“For example, apply continuous monitoring to prevent, detect, and remediate threats against the repository” (p. 36). What does this mean? Do they know what a repository is? ↩︎
The guidance presents a development model with some dreadful release branch and… review/approval when moving code from trunk to release. How are you going to review a giant stew of source code when promoting? Unless you are not reviewing and instead are rubber stamping to meet some secure development fertilizer sandwich of a process. ↩︎
“During the coding and implementation of the system, care must be taken to ensure that all development efforts map to specific system requirements and that there is no ‘feature creep’ that might compromise product integrity or inject vulnerabilities.” ↩︎
For instance, “This ensures the feature set is met and nothing exists for any feature creep.” (p. 16) I have no idea how to parse “nothing exists for any feature creep.” ↩︎
They mention “structural vulnerabilities,” as in, “ensure neither the code nor systems have structural vulnerabilities” and then never explain what is meant by “structural.” Is it a vulnerability in a load-bearing component? Is it a couture vulnerability made of crisp, sculptural fabric? The word never appears again, so the mystery remains. ↩︎
They make the claim “Code signing is usually the last line of defense against a software supply chain exploit” and… is this true? This doesn’t feel right. What is the typical process for a supply chain exploit? Because my mental model of them is that the attack happens before signing. ↩︎
They expound upon “common compromise techniques such as buffer overflows, ROP execution gadgets, delayed dynamic function loading,” and so forth. I am curious what they think “common” is. Common for what the NSA sees from its peer adversaries? Or common for the typical private sector enterprise? What is “common” to the typical enterprise is less buffer overflow exploits and more, “some employee VPN credentials were leaked last year and criminals used them to gain access to our network share and ransomwared it.” I am uncertain if the advice in this guide would stop such common concerns. In fact, I suspect these recommendations exacerbate the problem by recommending slower software changes and bolt-on solutions like VPNs and on-prem network segmentation. ↩︎
They state that “Over the last few years, these next-generation software supply chain compromises have significantly increased for both open source and commercial software products.” But they do not outline any sources or statistics to back this claim. ↩︎
“e.g. cloud products should be pen-tested more frequently”. (p. 8) Why? ↩︎
The reference guide also recommends that both SAST/DAST and SCA should be “performed to determine if the risk is acceptable.” But what is “acceptable”? Is it based on the security team’s point of view? Is it based on the organization as an ecosystem of stakeholders who all have disparate points of view? Is it based on the point of view of intelligence agencies, who, naturally, think enterprises should prioritize defense above all other endeavors? ↩︎
They spend a lot of words on “compromised engineers” as a “difficult threat.” They reference things like “poor performance reviews, lack of promotion, or disciplinary actions” but fail to mention production pressures more generally. ↩︎
They finally mention constraints and hint at production pressures with: “Depending on the complexity, security requirements, development resources available, and time constraints…” (p. 15) ↩︎
Fath, B. D., Dean, C. A., &amp; Katzmair, H. (2015). Navigating the adaptive cycle: an approach to managing the resilience of social systems. Ecology and Society, 20(2). ↩︎
There is finally a mention of UX on page 10: “Gaps should be examined to determine and address root causes, e.g., if there is a lack of usable tools to implement organizational security expectations.” But it is the sole mention. And there are a few things still objectionable in that sentence, like the term “root cause” and, also, how do you implement an expectation? ↩︎
The guide somewhat dehumanizes developers as a result; at one point they literally use the pronoun of “it” to refer to a developer. ↩︎
They suggest that “many antivirus products provide behavior analysis engines to provide additional security checks” like heap spray mitigation, stack pivot detection, DLL injection detection, and so forth. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>The Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Office of the Director of National Intelligence (ODNI) recently <a href="https://media.defense.gov/2022/Sep/01/2003068942/-1/-1/0/ESF_SECURING_THE_SOFTWARE_SUPPLY_CHAIN_DEVELOPERS.PDF">released a document</a> entitled, “Securing the Software Supply Chain – Recommended Practices Guide for Developers.” I hoped the document might shed light on practical, perhaps even novel, ways for the private sector to increase systems resilience to supply chain attacks. The authors are respected authorities, and the topic is salient to the public.</p>
<p>Instead, the document’s guidance contains a mixture of impractical, confusing, confused, and even dangerous recommendations.</p>
<p>There is a collective <em>ignis fatuus</em> in the information security community that it is the “job” of an organization’s employees to prioritize security above all else. This fallacy holds us back from achieving better defense outcomes. Unfortunately, “Securing the Software Supply Chain” calcifies this falsehood.</p>
<p>Therefore, I have written this rebuttal in the form of ten objections:</p>
<ol>
<li>Slowing down software delivery does not help security, it hurts it</li>
<li>There is an underlying paradox (the “Thinking Machine” paradox)</li>
<li>Most enterprises have no chance of implementing this</li>
<li>Most enterprises will not want to implement this</li>
<li>Security vendor bolt-on solutions are overemphasized</li>
<li>Relevant security and infrastructure innovation is omitted</li>
<li>Inaccuracies about software delivery practices and basic terminology</li>
<li>Confusing, contradictory messages from the authoring agencies</li>
<li>Omission of second order effects and underlying causal factors</li>
<li>Dangerous absolution of security vendors’ own software security</li>
</ol>
<h2 id="objection-1-slowing-down-software-delivery-does-not-help-security-it-hurts-it">Objection #1: Slowing down software delivery does not help security, it hurts it</h2>
<p>Empirical evidence from the enterprise software engineering world <a href="https://itrevolution.com/accelerate-book/">makes it clear</a> that speed benefits not only business outcomes (like release velocity) but also security outcomes (like time to recovery, <a href="https://services.google.com/fh/files/misc/state-of-devops-2019.pdf">time to fix security issues</a>). “Securing the Software Supply Chain” instead effectively recommends <em>against</em> speed, arguing for slowness and additional hurdles and frictions throughout the software delivery process.</p>
<p>The guidance is for a centralized<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> security team to impose significant restrictions on software engineering activity, thereby enforcing security as the top priority. For instance, IDEs (and plug-ins) are treated as threats to be “preapproved, validated, and scanned for vulnerabilities before being incorporated onto any developer machine.”</p>
<p>Speed, and even reliability, are treated as justified casualties for security’s sake. Given the mission of intelligence agencies, chiefly national security, they rightly view <a href="/blog/posts/the-security-obstructionism-secobs-market/">security obstruction</a> as worthwhile. Their goal is not to grow market share or their customer base or improve profit margins by shipping more software, faster. This means their perspective simply does not translate to the private sector, unless we, as a society, decide a corporation must serve national security rather than its shareholders.</p>
<p>The result from this guidance – if the recommendations were implemented by, say, the Fortune 500 – is that software struggles to get built. Software engineers would quit (and have quit over lesser inconveniences in the private sector). Whatever the enterprise’s budget, with these recommendations they will build software slowly, and it will be of poorer quality – which is a direct detriment to the goal of higher quality, more secure software.</p>
<h2 id="objection-2-there-is-an-underlying-paradox-the-thinking-machine-paradox">Objection #2: There is an underlying paradox (the “Thinking Machine Paradox”)</h2>
<p>There is a logical inconsistency – perhaps even a paradox – presented: developers cannot be trusted, but automation is discouraged in favor of manual processes run by humans. If there is paranoia about “unintended flaws” introduced by developers, then why give them more opportunity to introduce them by recommending manual processes?</p>
<p>If there is concern that their user credentials will beget malefaction and mistakes, then why discourage service accounts<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, which are inherently decoupled from specific user identities? Without service accounts, if the human user leaves or is fired, then their credentials are still tied to whatever activity – which is a dangerous game to play.</p>
<p>The guide goes to great lengths to paint the picture of the developer as an insider threat – whether purposefully malicious or “poorly trained” – but then explicitly espouses manual build and release processes. So, individual software engineers are simultaneously not to be trusted and yet should be the ones to perform and approve software delivery activities?</p>
<p>Human brains are not great at executing tasks the same way every time. Computers are much better at that! And yet while the guide warns about the dangers of human mistakes, they want us to rely on those humans for repeatable tasks rather than automating them<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
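<p>To make the contrast concrete, below is a minimal sketch (Python, with hypothetical paths) of the kind of repeatable step a build service can run on every release instead of relying on a human: record the digests of everything that went into the artifact so that the build server, rather than an individual engineer, signs the result.</p>
<pre><code class="language-python"># provenance.py -- hypothetical sketch: record what went into a build so the
# build service (not an individual engineer) can sign the result.
import hashlib
import json
import pathlib


def digest_tree(root: str) -> dict:
    """Return a stable mapping of relative file path to SHA-256 digest."""
    digests = {}
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests


def provenance_record(source_dir: str, lockfile: str) -> dict:
    """Capture the tree of things that produced this artifact."""
    return {
        "sources": digest_tree(source_dir),
        "lockfile_sha256": hashlib.sha256(pathlib.Path(lockfile).read_bytes()).hexdigest(),
    }


if __name__ == "__main__":
    # "src" and "requirements.lock" are placeholder paths; the CI service
    # account writes this record alongside the artifact, and the build server
    # signs both, every time, the same way.
    record = provenance_record("src", "requirements.lock")
    pathlib.Path("provenance.json").write_text(json.dumps(record, indent=2, sort_keys=True))
</code></pre>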
<h2 id="objection-3-most-enterprises-have-no-chance-of-implementing-this">Objection #3: Most enterprises have no chance of implementing this</h2>
<p>Most enterprises have no chance of implementing the recommendations in “Securing the Software Supply Chain.” It is allegedly meant as a reference guide for developers, but it is really a reference guide for no one other than an intelligence agency with the same goals and resources as the NSA.</p>
<p>This is a criticism often made about Google: they propose advice that works for them and their titanic budget and pool of talent without considering the constraints and tradeoffs faced by “mere mortals.” CISA, the NSA, and the ODNI have fallen into a similar trap.</p>
<p>There are numerous recommendations that are impractical for enterprises, and not just the absurd one of disallowing internet access in “development”<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> and “engineering”<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> systems. For instance, if enterprises documented everything that a piece of software performs, it would be equivalent to writing it twice (and the documentation would inevitably differ from the source code); enterprises would likely be better off with no documentation at all and just reading the source code.</p>
<p>As another example, they also recommend that “Fuzzing should be performed on all software components during development.” If fuzzing all software components during development was a strict requirement, enterprises might never ship software again<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>. They also recommend “Using a testing approach…to ensure that repaired vulnerabilities are truly fixed against <strong>all possible compromises</strong>.” If enterprise software engineering teams knew the graph of possible compromises, why would we need all of this guidance?</p>
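<p>For a sense of scale, here is a minimal sketch of what a single fuzz target looks like in Python, assuming the atheris fuzzer is installed and a hypothetical parse_config function is under test; the guide effectively asks for one of these, written, maintained, and given CPU time, for every component.</p>
<pre><code class="language-python"># fuzz_parse.py -- a minimal sketch of one fuzz target (assumes atheris).
import sys

import atheris

with atheris.instrument_imports():
    from myapp.config import parse_config  # hypothetical module under test


def test_one_input(data: bytes) -> None:
    try:
        parse_config(data.decode("utf-8", errors="replace"))
    except ValueError:
        pass  # rejecting malformed input is expected; only crashes matter here


atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
</code></pre>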
<h2 id="objection-4-most-enterprises-will-not-want-to-implement-this">Objection #4: Most enterprises will not want to implement this</h2>
<p>Most enterprises will also not <em>want</em> to implement this. The recommendations do not scale<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>, are not aligned to enterprise software delivery priorities<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup>, and erode<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup> productivity<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup>.</p>
<p>Intelligence agencies, whose mission is national security, have no choice but to implement a paradigm as described because the alternative is simply not developing software. Enterprises in the private sector, whose mission is making money, do not face the same constraint; the constraint they face is, instead, the resources at their disposal, and, to support their mission, it is generally better to spend on revenue- or profit-generating activities than on those that obstruct or erode them (like the recommendations in this document).</p>
<p>The top priority among enterprise customers is usually not security, either, regardless of whether the enterprise is B2B or B2C. Security will only be the top concern among customers if the primary customer is an intelligence agency, which is the case for very few enterprises, especially within the Fortune 500/1000.</p>
<p>Through this lens, advice like “If possible, the engineering network should have no direct access to the internet”<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup> and “If possible, development systems should not have access to the Internet”<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup> suggests a mental model of for-profit enterprises that significantly differs from reality. Similarly, decrying “ease of development” features is not a reasonable position in the reality of enterprise software development; more constructive would be to suggest that such features must be considered as part of the system design and protected behind appropriate access controls and audit logging.</p>
<p>Some recommendations would even be considered “bad practice”<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup> by software engineers from a reliability<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup> and resilience<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup> perspective, and therefore rejected. The guide suggests using “a temporary SSH key” to allow admin access into production systems, whereas Ops engineers and SREs often prefer immutable infrastructure specifically to disallow the use of SSH, which helps with reliability (and cuts off a tempting mechanism for attackers).</p>
<h2 id="objection-5-security-vendor-bolt-on-solutions-are-overemphasized">Objection #5: Security vendor bolt-on solutions are overemphasized</h2>
<p>There is a pervasive overemphasis on vendor tooling with near-complete omission of non-vendor solutions. Specifically, the document touts a laundry list of bolt-on commercial solutions by incumbent security vendors – IAST, DAST, RASP, SCA, EDR, IDS, AV, SIEM, DLP, “machine learning-based protection”<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup> and more – often repeatedly singing their praises<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>. Rather than providing constructive, sane advice on automating processes and making them repeatable and maintainable, they recommend a smorgasbord of bolt-on tools.</p>
<p>The guidance is explicit in discouraging open-source software as well. There is also little about security through design, such as the <a href="https://www.techtarget.com/searchsecurity/feature/Experts-say-CIA-security-triad-needs-a-DIE-model-upgrade">D.I.E. triad</a>. Unfortunately, this gives the impression that security vendors successfully lobbied for their inclusion in the document, which calls into question its neutrality<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup>.</p>
<p>In fact, by promoting these commercial security tools, they promote dangerous advice like manual release processes. For instance, they recommend: “Before shipping the package to customers, the developer should perform binary composition analysis to verify the contents of the package.” But developers should not be performing package releases themselves if the desired outcome is high quality, secure software (see Objection #2).</p>
<p>As another example, they recommend that “SAST and DAST should be performed on all code prior to check-in and for each release…” But how is an enterprise supposed to perform DAST/SAST on code before it&rsquo;s checked in? It is a reductio ad absurdum of “shift left.” It is only one step leftward away from running the tools earlier inside developers&rsquo; brains as they brainstorm what code to write.</p>
<p>But there is no mention of the need to integrate DAST/SAST into developer workflows and ensure that speed is still upheld. In enterprise software security, the success of a security bolt-on solution depends on either usability or coercion.</p>
<p>If you make the secure way the easy way and ensure that software engineers are not required to deviate unduly from their workflows to interact with the security solution, then it is quite likely to beget better security outcomes (or at least not worse productivity outcomes). The alternative is to mandate that a solution must be used; if it is unusable, then it will be bypassed to ensure work still gets done, unless there is sufficient coercion to enforce its use.</p>
<p>When the bolt-on recommendations are combined with their advice elsewhere to disconnect development systems from the internet, it raises the question: How do you use the SCA tools, among others, if the dev systems are not connected to the internet?</p>
<p>“Basic hygiene” is arguably better than any of these bolt-on options, including things like:</p>
<ul>
<li>Knowing what dependencies are present</li>
<li>Being purposeful about what goes into your software</li>
<li>Choosing a tech stack you can understand and maintain</li>
<li>Choosing tools that are appropriate for the software you are building<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup></li>
</ul>
<p>What does “being purposeful about what goes into your software” mean? It means:</p>
<ul>
<li>Including dependencies as part of design, rather than implementation</li>
<li>Being cautious about adding dependencies</li>
<li>Knowing why you’ve included other libraries and services</li>
<li>Understanding the packaging concerns of your dependencies; for example, if you include a dependency, what does it cost to feed the beast in terms of operationalizing and shipping it?</li>
<li>If you take on a dependency in another team’s service, what are their SLOs? Do you trust them? Is it a stable system? Or have there been problems?</li>
<li>If it’s an open-source library, is it maintained by one person? A team? A company with a support contract you can purchase? A company with a support contract you already have in place? Can you see its updating and patching history?</li>
</ul>
<p>In essence, the answer is not: “never take on dependencies”<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>; the answer is to understand what your dependencies are.</p>
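<p>As a small illustration of “knowing why you’ve included other libraries,” here is a hedged sketch in Python; it assumes a hypothetical dependency-rationale.json in which each dependency is mapped to its documented reason, and flags anything in requirements.txt that nobody has bothered to justify.</p>
<pre><code class="language-python"># check_dependency_rationale.py -- hypothetical sketch: flag any dependency
# in requirements.txt that has no documented reason for existing.
import json
import pathlib
import sys


def declared_dependencies(requirements_path: str) -> set:
    deps = set()
    for line in pathlib.Path(requirements_path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if line:
            # keep just the package name, dropping pins like "flask==2.3.2"
            name = line.split("==")[0].split(">=")[0].strip()
            deps.add(name.lower())
    return deps


def main() -> int:
    rationale = json.loads(pathlib.Path("dependency-rationale.json").read_text())
    documented = {name.lower() for name in rationale}
    undocumented = declared_dependencies("requirements.txt") - documented
    for name in sorted(undocumented):
        print(f"no recorded reason for depending on: {name}")
    return 1 if undocumented else 0


if __name__ == "__main__":
    sys.exit(main())
</code></pre>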
<p>Overall, the recommended mitigations are all about outputs<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup> rather than outcomes<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup>, about security theater rather than actually securing anything and verifying it with evidence. It is clear the guidance does not consider organizations who ship more than once per quarter; in fact, they seem to view fast software delivery as undesirable. They are mistaken (as per Objection #1).</p>
<h2 id="objection-6-relevant-security-and-infrastructure-innovation-is-omitted">Objection #6: Relevant security and infrastructure innovation is omitted</h2>
<p>As mentioned in Objection #5, the document ignores a wealth of innovation in the private sector on software security over the past decade. The guidance seems to take the stance that the NSA/CISA/ODNI way is better than the private sector’s status quo, which is a false dichotomy. In fact, companies like SolarWinds have admitted their slowness was a detriment and have since modernized their practices — <a href="https://www.youtube.com/watch?v=1-tMRxqMwTQ">including the use of open source</a> — to achieve improved security.</p>
<p>The fact that <em>none</em> of those innovations were included suggests an insular examination of the problem at hand, which erodes the intellectual neutrality of the document. I’ve listed the kinds of security and infrastructure innovations I would expect to see in a reference guide like this below (I have no doubt there are many others worthy of inclusion, too):</p>
<ul>
<li>Netflix’s <a href="https://netflixtechblog.com/the-show-must-go-on-securing-netflix-studios-at-scale-19b801c86479?gi=aaae3d7a6bd2">Wall-E framework</a></li>
<li>Segment.io on <a href="https://segment.com/blog/redefining-threat-modeling/">threat modeling</a></li>
<li>Segment.io on <a href="https://segment.com/blog/access-service/">time-based access</a></li>
<li>The WebAssembly (Wasm) <a href="https://github.com/WebAssembly/component-model">component model</a></li>
<li>The Bytecode Alliance on <a href="https://bytecodealliance.org/articles/security-and-correctness-in-wasmtime">Security and Correctness in Wasmtime</a></li>
<li>IDE support for <a href="https://dl.gi.de/bitstream/handle/20.500.12116/37971/A1-19.pdf?sequence=1&amp;isAllowed=y">cloud-based static analyses</a></li>
<li>Software provenance <a href="https://grepory.substack.com/p/der-softwareherkunft-software-provenance">via CI/CD</a></li>
<li>Automated patching <a href="https://www.hashicorp.com/resources/redeploying-stateless-systems-in-lieu-of-patching-petco-packer-terraform">via IaC</a></li>
<li>GitHub’s <a href="https://www.researchgate.net/publication/349641251_On_the_Use_of_Dependabot_Security_Pull_Requests">Dependabot tool</a></li>
<li>Resilient security log pipelines with <a href="https://www.youtube.com/watch?v=9_HmfM3Kt8Y">security chaos engineering</a></li>
<li>Facebook on 2FA for <a href="https://www.youtube.com/watch?v=pY4FBGI7bHM">internal infra authentication</a></li>
<li>AWS <a href="https://blog.thinkst.com/2017/09/canarytokens-new-member-aws-api-key.html">API Key canarytokens</a></li>
<li>Google’s <a href="https://static.googleusercontent.com/media/sre.google/en/static/pdf/building_secure_and_reliable_systems.pdf">configuration distribution</a> worked example (among many others in the same book)</li>
</ul>
<p>The guidance would also be strengthened by considering survey data from the private sector, such as the <a href="https://go.dev/blog/survey2022-q2-results">recent Golang survey</a> (which has a section dedicated to Security, including fuzzing) and GitHub’s <a href="https://octoverse.github.com/static/github-octoverse-2020-security-report.pdf">Octoverse Security report</a> from 2020 (their Dependabot tool is also an arguably glaring omission).</p>
<p>As another example, the document cautions against &ldquo;allowing an adversary to use backdoors within the remote environment to access and modify source code within an otherwise protected organization infrastructure.&rdquo; This dismisses the last 30+ years of source code management (SCM) systems.</p>
<p>In modern software delivery, you cannot just up and change source code without people noticing. Even Subversion is built on the idea of tracking deltas; if you change some code, there exists a record of what was changed, when, and by whom<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup>. Most development workflows configure the SCM system to require peer approval before merging changes to important branches. It is worrisome if the authoring agencies are unaware of this given it has been the status quo for decades; if their vendors do not exhibit these practices, then this suggests a serious problem with federal procurement.</p>
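<p>Retrieving that record is mundane. A minimal sketch (Python shelling out to git; the file path is whatever you are curious about) that prints who changed a file, when, and in which commit:</p>
<pre><code class="language-python"># who_changed_it.py -- minimal sketch: every change in a git repository
# already carries an author, a timestamp, and a message.
import subprocess
import sys


def change_history(path: str) -> str:
    result = subprocess.run(
        ["git", "log", "--format=%h %an %ad %s", "--", path],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print(change_history(sys.argv[1] if len(sys.argv) > 1 else "."))
</code></pre>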
<h2 id="objection-7-inaccuracies-about-software-delivery-practices-and-basic-terminology">Objection #7: Inaccuracies about software delivery practices and basic terminology</h2>
<p>There are consistent misunderstandings<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup> and inaccuracies throughout the document about modern software delivery practices, including misunderstandings and inaccuracies about basic terminology, such as CI/CD<sup id="fnref:25"><a href="#fn:25" class="footnote-ref" role="doc-noteref">25</a></sup>, orchestration<sup id="fnref:26"><a href="#fn:26" class="footnote-ref" role="doc-noteref">26</a></sup>, nightly builds<sup id="fnref:27"><a href="#fn:27" class="footnote-ref" role="doc-noteref">27</a></sup>, code repositories<sup id="fnref:28"><a href="#fn:28" class="footnote-ref" role="doc-noteref">28</a></sup>, and more. This is part of a larger cultural problem in information security of trying to regulate<sup id="fnref:29"><a href="#fn:29" class="footnote-ref" role="doc-noteref">29</a></sup> what they do not understand.</p>
<p>The reference guide does not seem to understand who does what in enterprise engineering teams. Product and engineering management do not define security practices and procedures today, because their priorities are not security but instead whatever success metrics correspond to the business logic under their purview (usually related to revenue and customer adoption). The characterization of QA is particularly perplexing and suggests a significantly different QA discipline exists in intelligence agencies than does in the private sector.</p>
<p>If enterprises were to follow the advice that “software development group managers should ensure that the development process prevents the intentional and unintentional injection of… design flaws into production code,” then software might never be released again in the private sector. The guide also seems to believe that software engineers are unfamiliar with the concept of feature creep<sup id="fnref:30"><a href="#fn:30" class="footnote-ref" role="doc-noteref">30</a></sup>, as if that is not the unfortunate default in product engineering today.</p>
<p>There are also simply perplexing statements. For instance, “An exception to [adjusting the system themselves] would be when an administrator has to <strong>fix the mean time to failure in production</strong>.” (p. 30, emphasis mine). I do not know what they mean by this and struggle even to guess. This and other confusing<sup id="fnref:31"><a href="#fn:31" class="footnote-ref" role="doc-noteref">31</a></sup> passages<sup id="fnref:32"><a href="#fn:32" class="footnote-ref" role="doc-noteref">32</a></sup> tarnish the intellectual credibility of the guide.</p>
<p>The guidance is inaccurate even in areas that should be their area of expertise, like cryptography. “The cryptographic signature validates that the software has not been tampered with and was authored by the software supplier.” No, it doesn’t. It validates that the supplier applied for that key at some point; it <a href="https://www.theregister.com/2022/03/05/nvidia_stolen_certificate/">does not say much</a> about the security properties of the software in question. For instance, there is an anti-cheat driver for the game Genshin Impact whose <a href="https://www.trendmicro.com/en_us/research/22/h/ransomware-actor-abuses-genshin-impact-anti-cheat-driver-to-kill-antivirus.html">key still hasn’t been revoked</a> despite being vulnerable to a privilege escalation bug.</p>
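<p>To make that distinction concrete, here is a minimal sketch using the Python cryptography package and Ed25519; a successful verification only tells you that the holder of a particular key signed those exact bytes, and nothing about whether the signed code is safe, well-built, or whether the key deserved to be trusted in the first place.</p>
<pre><code class="language-python"># verify_release.py -- minimal sketch: what a signature check actually proves.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def is_signed_by(public_key_bytes: bytes, signature: bytes, artifact: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        key.verify(signature, artifact)
        return True   # the key holder signed these exact bytes...
    except InvalidSignature:
        return False  # ...which says nothing about what those bytes will do
</code></pre>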
<p>Finally, in what is more of an inaccuracy about enterprise security<sup id="fnref:33"><a href="#fn:33" class="footnote-ref" role="doc-noteref">33</a></sup> than software delivery, the authors refer to “zero-day vulnerabilities” as an “easy compromise vector.”<sup id="fnref:34"><a href="#fn:34" class="footnote-ref" role="doc-noteref">34</a></sup> This may be true for the Equation Group and their black budget funding but is not true from the perspective of most cybercriminals and, therefore, enterprises.</p>
<p>Enterprises are still wrestling with leaked credentials, social engineering, and misconfigurations (see Objection #8). For a Fortune 500, it is a victory if attackers are forced to use 0day to compromise you; you’ve exhausted all their other options, which should be considered a rightful accomplishment relative to the security status quo.</p>
<h2 id="objection-8-confusing-contradictory-messages-from-the-authoring-agencies">Objection #8: Confusing, contradictory messages from the authoring agencies</h2>
<p>It is confusing to see that the same agency (CISA) who emphasized the need for repeatability and scalability <a href="https://www.cisa.gov/cloud-security-technical-reference-architecture">last year</a> only mentions the importance of repeatability once. And that another authoring agency (NSA) stated in their report <a href="https://media.defense.gov/2020/Jan/22/2002237484/-1/-1/0/CSI-MITIGATING-CLOUD-VULNERABILITIES_20200121.PDF">from January 2020</a> that supply chain vulnerabilities require a high level of sophistication to exploit while having low prevalence in the wild.</p>
<p>However, this guidance makes it seem like software engineering activities should be designed with supply chain vulnerabilities as the highest weighted factor<sup id="fnref:35"><a href="#fn:35" class="footnote-ref" role="doc-noteref">35</a></sup>. Misconfigurations are given scarce mention, despite the NSA citing them as most prevalent and most likely to be exploited.</p>
<p>In the aforementioned guide by CISA, they highlight some of the benefits of cloud computing and automation. But in this reference guide, they indicate that the cloud is more dangerous than on-premises systems<sup id="fnref:36"><a href="#fn:36" class="footnote-ref" role="doc-noteref">36</a></sup>, without explaining why it might be so.</p>
<p>Much of the language is confusing, too, and never receives clarification. What is a “high risk” defect? What does “cybersecurity hygiene” mean in the context of development environments?  They insist that “security defects should be fixed,” but what defines a defect? That it exists? That it’s exploitable? That it’s likely a target for criminals? Nation states? It remains unclear.<sup id="fnref:37"><a href="#fn:37" class="footnote-ref" role="doc-noteref">37</a></sup></p>
<h2 id="objection-9-omission-of-second-order-effects-and-underlying-causal-factors">Objection #9: Omission of second order effects and underlying causal factors</h2>
<p>There is no consideration of second order effects or underlying causal factors, such as organizational incentives and production pressures.<sup id="fnref:38"><a href="#fn:38" class="footnote-ref" role="doc-noteref">38</a></sup> (There is one mention, quite in passing, that there might be various constraints the developer faces<sup id="fnref:39"><a href="#fn:39" class="footnote-ref" role="doc-noteref">39</a></sup>). This ignores the rich literature around resilience in complex systems as well as behavioral science more generally. In fact, the recommendations are arguably the opposite of adaptive capacity and reflect rigidity, which is seen as a trap in resilience science<sup id="fnref:40"><a href="#fn:40" class="footnote-ref" role="doc-noteref">40</a></sup>.</p>
<p>If enterprises are to attempt implementing these recommendations (which I very much discourage them from doing), then guidance on how to achieve them despite vertiginous social constraints is essential. Much of what is outlined will be irrelevant if incentives are not changed.</p>
<p>There is also no discussion of user experience (UX) considerations<sup id="fnref:41"><a href="#fn:41" class="footnote-ref" role="doc-noteref">41</a></sup> when implementing these suggestions, which, perhaps more than <em>what</em> is implemented, will influence security outcomes the most; an unusable workflow will be bypassed. Because the guidance ignores the complexities of software delivery activities and accepts convenient explanations for developer behavior, the resulting advice is often <a href="https://www.usenix.org/system/files/1401_08-12_mickens.pdf">uncivilized</a><sup id="fnref:42"><a href="#fn:42" class="footnote-ref" role="doc-noteref">42</a></sup>.</p>
<p>There is a missed opportunity to discuss making the secure way the easy way, the importance of minimizing friction in developer workflows, the need to integrate with tooling like IDEs, and so forth. This absence results in a guide that feels both shallow and hollow.</p>
<h2 id="objection-10-dangerous-absolution-of-security-vendors-own-software-security">Objection #10: Dangerous absolution of security vendors’ own software security</h2>
<p>There is ample discussion of the need to scan software being built or leveraged from third parties, and how a long list of commercial security tools can support this endeavor (see objection #5). Curiously, there is no mention of the need for <em>those</em> tools to be scanned, such as performing SCA for your EDR or anti-virus bolt-on solution.</p>
<p>Security tools usually require privileged access in your systems, whether operating as a kernel module or requiring read/write (R/W) access to critical systems. Combined with the <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=antivirus">long</a> <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.5543&amp;rep=rep1&amp;type=pdf">history</a> of <a href="https://github.com/v-p-b/avpwn">critical</a> <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=data+loss+prevention">vulnerabilities</a> in <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=vpn">security</a> <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=gateway">tools</a>, this is a rather troubling omission. This is all aside from the numerous software reliability problems engendered by endpoint security tools, which also fail to receive mention. Engineering teams notoriously despise endpoint security on servers for valid reasons: kernel panics, CPUs tanking, and other weird failures that are exasperating to debug.</p>
<p>Given the paranoia about IDEs and other developer tools, it feels strange that security vendors receive absolution and nary a caveat regarding their code security. Where is the recommendation for a proctology exam on the security tools they recommend? Do all of their recommendations for countless hurdles in the way of code deployment apply to patches, too? They are software changes, after all. If they do not apply, then that is a massive loophole for attackers to exploit.</p>
<p>Another surprise is that “anti-virus” is listed as being capable of detecting things like DLL injections<sup id="fnref:43"><a href="#fn:43" class="footnote-ref" role="doc-noteref">43</a></sup>; <a href="https://www.mdpi.com/2624-800X/1/3/21">recent empirical data</a> suggests otherwise even for EDR, which is considered leagues ahead of antivirus solutions. Again, it gives the impression that the guidance is biased in favor of security vendors rather than a more complete set of strategic options. The decimating performance impact these tools can have on the underlying systems, such as kernel panics on production hosts, also fails to receive mention.</p>
<h2 id="conclusion">Conclusion</h2>
<p>If you read this guidance and implement it into your enterprise organization, you will end up securing the supply chain of nothing. Your engineering organization will dismiss you as an ideologue, a fool, or both; else, you will have constrained software engineering activities to such a degree that it makes more sense to abandon delivering software at all. In fairness, the most secure software is the software never written. But, in a world whose progress is now closely coupled with software delivery, we can do better than this document suggests.</p>
<hr>
<p>Thanks to Coda Hale, James Turnbull, Dr. Nicole Forsgren, Ryan Petrich, and Zac Duncan for their feedback when drafting this post.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>They admit that “a developer has inside knowledge of the code base and is often an expert in their respective coding language, environment, and style,” and… this is why we should have the security team as the gatekeeper for determining systems safety and quality? The security team that does <em>not</em> have knowledge of the code base and is <em>not</em> an expert in the coding language, environment, and style…&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>“The use of service accounts, like non-human privileged accounts used to run automated processes, should be minimized and carefully audited.” (p. 30).&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>To wit, they note that developers “must take great care to protect its private key used for the digital signature or hashes…” while a more straightforward recommendation would be to use service accounts and automate the build process. An automated system has traceability into the version, where it was built, reproducible builds, and then the build server signs it. You don’t need an engineer there to personally bless the release. To achieve real security outcomes, you must bless the tree of things that went into that asset, not bless that a person was there to rubber stamp the release.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>“If possible, development systems should not have access to the Internet and may be deployed as local virtual systems with host-only access.” (p. 20)&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>“If possible, the engineering network should have no direct access to the Internet.” (p. 30)&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Few organizations outside of the Linux kernel and browsers like Chrome and Firefox are even thinking about using fuzz testing at scale. That isn’t to say that more software engineering teams couldn’t benefit from fuzz testing. But the reality of enterprise dev today is that fuzzing is so far up the hierarchy of needs that it’s like a 16th century peasant dreaming of writing their own gilded manuscript while respectable working conditions and other trappings of basic human dignity are prerequisites.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>“Performing automatic static and dynamic vulnerability scanning on all components of the system for each committed change is key to the security and integrity of the build process and to validate the readiness of product release.” (p. 16). In a world where these scans take seconds, it might be feasible. In our real world where these scans can take hours and require navigating to a new window, it is counterproductive to suggest them.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>“All notifications should be evaluated with respect to the relationship to the product and prioritized based on risk assessment.” It will be prioritized based on business impact, which is often negligible. The guide repeatedly presents a world in which security and “risk” are the top priority without shining any real lens on social or organizational factors or providing recommendations on how to influence organizational dynamics. How do you adjust incentives to get “risk” to be treated as a higher priority? &lt;crickets&gt; What are all the reasons to buy a software composition analysis tool? &lt;many pages of text ensue&gt;. It’s challenging to take the guidance seriously as a result.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>Developers want and need internet access to do their jobs because the internet is, by design, a fabulous reference tool and building software requires knowing many things, so many things, in fact, that it’s difficult to hold it all in your head and it’s much cheaper, effort-wise, to google “how python list” for the thousandth time.&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>As part of “hardening the development environment” they recommend operating a VPN and a jump host behind which a shared corporate development environment limps along with endpoint security software installed. This creates many layers of delay and extra latency.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>The guide doesn’t explicitly forbid torrenting Stack Overflow and hosting your own offline copy of it… or printing Stack Overflow out as a gargantuan tome, an intimidating new edition to your office coffee room. Need help with Flask? Just turn to page 1,337,420 to read each Stack Overflow post about it. I think I am mostly kidding but also suspect that software engineers at real enterprises might resort to that if faced with the prospect of “no internet in engineering systems.”&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>And if you shouldn’t use the internet in the development system, then how is this advice even conveyed to developers? A bearded prophet descends the mountain bearing a stone tablet inscribed with the recommendation, “Don’t use the internet,” and then software is absolved of all its sins, I suppose.&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>For example, they recommend the ability to &ldquo;raise privileges&rdquo; for just one function and then lower it; this indicates that the software can have its privileges raised at any point in time, including after attack. Instead, they should advocate for privilege separation and sandboxing.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>They reference “fail-safe defaults,” and while it remains unclear what they mean, my guess is they mean “fail closed,” which software engineers of the infra and ops variety would also dislike. This is common appsec advice, usually given with the recommendation to sprinkle some redundancy onto the system as a “quick fix” for any reliability issues that surface. Nothing is more infosec than treating software engineers like they are negligent fools for overlooking an obscure vulnerability while believing that delivering reliable software is trivial.&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>Metapod, the Pokémon, has a move called “harden” that does nothing (basically a no-op).  This feels fitting for the section in question entitled “Harden the Build Environment.”&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>Why specify the specific tools and not the goal outcomes? Anyone who’s wrestled with ML-based security tools knows that they gobble up needed resources on dev laptops while also needing lots of babying in the form of tuning and maintenance and triaging. But such concerns remain unmentioned in the reference guide.&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>For instance, the document repeatedly recommends software composition analysis (SCA), another bolt-on solution: “Software Composition Analysis should be performed on all third-party software.” And, in the context of missing SBOMs: “the developer team will derive the required information to describe the third-party component within the SBOM for example by using software composition analysis tools.” (p. 27)&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>The scent of grift wafts throughout the whole guide, unfortunately. To defend against an “exploited signing server,” they recommend controls like a security information and event management system (SIEM) and data loss prevention (DLP). They could have recommended a centralized system to consume and analyze logs, but they instead recommend a specific type of tool. Additionally, where is the mention of transparency logs? Transparency logs make sure you aren’t signing builds with a production key. The signing server accepts the signing requests and logs them. If it isn’t in the logs, then the signing server is potentially pwned. Reproducible builds are more important than signing. With the former, you can go back and see what went in the deployment if something goes wrong.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>For instance, if you’re building software for microcontrollers, maybe don’t pick Java as your language of choice because it has a runtime you need to ship with your application that’s somewhat large and would take up precious ROM space. If your software needs to be portable, then choose a language that is portable.&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>You can take the view that you’ll be the steward of an open-source project within your organization, but it’s an option that is rather expensive. It’s a free thing, but it’s made with different ideas, constraints, and assumptions that don’t necessarily match what you need. How do you incentivize this <em>curiosity</em> about dependencies? The best way is to hold people accountable to their SLOs. You can also leverage the “shadow of the future” – the tendency for humans to be on better behavior when they believe they’ll have repeated interactions with their counterparty in the future.&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>&ldquo;The Secure SDLC identifies the exact procedures and policies that are used to ensure that secure development practices are implemented and artifacts are created to attest to the adherence of the adopted Secure SDLC plan with respect to the implementation and distribution of the product.&rdquo; The point of the “Secure SDLC” therefore appears to be attesting that you&rsquo;re adhering to the plan, rather than actually securing the product. If IaC tools are especially valuable for anything, it is creating artifacts; they let software engineers point to a tangible thing and say, “look at this thing I did,” rather than having their work vanish the moment they close the terminal or enough new commands scroll it out of view.&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>“Successful completion [of security training] should be tracked for all engineers.” (p. 10) They at least make it explicit that they only care about outputs rather than outcomes. All that matters is completion, not whether software quality has actually improved.&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>Usually there is a “why” too, but standards for commit messages vary.&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>“…developers assigned to create the threat models use component-level designs for completeness.” I don’t know what this means, and I am considered an expert in threat modeling.&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:25">
<p>“[The release branch] should be protected with reviewers and continuous integration/continuous delivery (CI/CD) tests with SAST enforced at the SCM.” This sentence alone suggests that the guide is confused on what CI/CD is, both in terms of how it works and why enterprises have adopted the practice. Further evidence is found in this passage: “Continuous integration/continuous deployment. In this case, the software is installed in a subset of the cloud for immediate feedback and A/B testing.” (p. 28) An enterprise could do that, and CI/CD might help, but that isn’t what CI/CD is in itself. The bullet immediately after it says, “<strong>Building software as part of a rapid iterative cycle</strong>, such as using an Agile Development Method.” This is frustrating.&#160;<a href="#fnref:25" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:26">
<p>One section uses the phrase “the CD orchestrator” and it’s quite unclear what they mean. Do they mean GitHub Actions? But that isn’t “the only entity that manages all the environments.” Do they mean Terraform? I don’t think they mean Kubernetes, not only because k8s wouldn’t be responsible for version control (we must keep waiting for git push to k8s), but also because they have another sentence in this section about Kubernetes: “Also ensure that the administrator tools are not in the public environment such as a Kubernetes cluster.” I do not understand what they are trying to say and, overall, this section strongly suggests that there are pervasive misunderstandings about what CD, orchestrators, and public environments are.&#160;<a href="#fnref:26" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:27">
<p>“The nightly build process also acts as a good performance matrix to assess the developer capabilities and comply with security and development processes.” (p. 17) That is not what nightlies are for. Nightlies are to integrate merged changes from the day before and run tests that are too expensive or lengthy to run on every build (at least in any sane workflow). It is not an opportunity to bully developers.&#160;<a href="#fnref:27" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:28">
<p>“For example, apply continuous monitoring to prevent, detect, and remediate threats against the repository” (p. 36). What does this mean? Do they know what a repository is?&#160;<a href="#fnref:28" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:29">
<p>The guidance presents a development model with some dreadful release branch and&hellip; review/approval when moving code from trunk to release. How are you going to review a giant stew of source code when promoting? Unless you are not reviewing and instead are rubber stamping to meet some secure development fertilizer sandwich of a process.&#160;<a href="#fnref:29" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:30">
<p>“During the coding and implementation of the system, care must be taken to ensure that all development efforts map to specific system requirements and that there is no ‘feature creep’ that might compromise product integrity or inject vulnerabilities.”&#160;<a href="#fnref:30" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:31">
<p>For instance, “This ensures the feature set is met and nothing exists for any feature creep.&quot; (p. 16) I have no idea how to parse &ldquo;nothing exists for any feature creep.&rdquo;&#160;<a href="#fnref:31" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:32">
<p>They mention “structural vulnerabilities,” as in, “ensure neither the code nor systems have structural vulnerabilities” and then never explain what is meant by “structural.” Is it a vulnerability in a load-bearing component? Is it a couture vulnerability made of crisp, sculptural fabric? The word never appears again, so the mystery remains.&#160;<a href="#fnref:32" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:33">
<p>They make the claim “Code signing is usually the last line of defense against a software supply chain exploit” and… is this true? This doesn’t feel right. What is the typical process for a supply chain exploit? Because my mental model of them is that the attack happens <em>before</em> signing.&#160;<a href="#fnref:33" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:34">
<p>They expound upon “common compromise techniques such as buffer overflows, ROP execution gadgets, delayed dynamic function loading,” and so forth. I am curious what they think “common” is. Common for what the NSA sees from its peer adversaries? Or common for the typical private sector enterprise? What is “common” to the typical enterprise is less buffer overflow exploits and more, “some employee VPN credentials were leaked last year and criminals used them to gain access to our network share and ransomwared it.” I am uncertain if the advice in this guide would stop such common concerns. In fact, I suspect these recommendations <em>exacerbate</em> the problem by recommending <em>slower</em> software changes and bolt-on solutions like VPNs and on-prem network segmentation.&#160;<a href="#fnref:34" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:35">
<p>They state that “Over the last few years, these next-generation software supply chain compromises have significantly increased for both open source and commercial software products.&quot; But they do not outline any sources or statistics to back this claim.&#160;<a href="#fnref:35" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:36">
<p>“e.g. cloud products should be pen-tested more frequently”. (p. 8) Why?&#160;<a href="#fnref:36" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:37">
<p>The reference guide also recommends that both SAST/DAST and SCA should be “performed to determine if the risk is acceptable.” But what is “acceptable”? Is it based on the security team’s point of view? Is it based on the organization as an ecosystem of stakeholders who all have disparate points of view? Is it based on the point of view of intelligence agencies, who, naturally, think enterprises should prioritize defense above all other endeavors?&#160;<a href="#fnref:37" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:38">
<p>They spend a lot of words on “compromised engineers” as a “difficult threat.” They reference things like “poor performance reviews, lack of promotion, or disciplinary actions” but fail to mention production pressures more generally.&#160;<a href="#fnref:38" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:39">
<p>They finally mention constraints and hint at production pressures with: “Depending on the complexity, security requirements, development resources available, and time constraints…” (p. 15)&#160;<a href="#fnref:39" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:40">
<p>Fath, B. D., Dean, C. A., &amp; Katzmair, H. (2015). Navigating the adaptive cycle: an approach to managing the resilience of social systems. <em>Ecology and Society, 20</em>(2).&#160;<a href="#fnref:40" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:41">
<p>There is finally a mention of UX on page 10: “Gaps should be examined to determine and address root causes, e.g., if there is a lack of usable tools to implement organizational security expectations.” But it is the sole mention. And there are a few things still objectionable in that sentence, like the term “root cause” and, also, how do you implement an expectation?&#160;<a href="#fnref:41" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:42">
<p>The guide somewhat dehumanizes developers as a result; at one point they literally use the pronoun of “it” to refer to a developer.&#160;<a href="#fnref:42" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:43">
<p>They suggest that “many antivirus products provide behavior analysis engines to provide additional security checks” like heap spray mitigation, stack pivot detection, DLL injection detection, and so forth.&#160;<a href="#fnref:43" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
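<p><em>To make the transparency-log point in footnote 18 concrete: a minimal sketch, assuming a hypothetical append-only log of signing requests. The log format and helper names are illustrative, not any particular product’s API.</em></p>
<pre><code class="language-python">import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only transparency log of signing requests.
LOG_PATH = "signing-transparency.log"

def log_signing_request(artifact: bytes, requester: str) -> str:
    """Record the request *before* the signing server produces a signature."""
    digest = hashlib.sha256(artifact).hexdigest()
    entry = {
        "digest": digest,
        "requester": requester,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

def audit_signed_artifact(artifact: bytes) -> bool:
    """True if the artifact's digest was logged; a production-key signature
    on an artifact that never hit the log suggests the server is pwned."""
    digest = hashlib.sha256(artifact).hexdigest()
    with open(LOG_PATH) as log:
        logged = {json.loads(line)["digest"] for line in log if line.strip()}
    return digest in logged

# Usage: the release pipeline calls log_signing_request() before signing;
# auditors later call audit_signed_artifact() on anything carrying the key's
# signature and treat a False result as an incident.
</code></pre>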
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Opportunity Cost of Action Bias in Cybersecurity Incident Response</title>
            <link>https://kellyshortridge.com/blog/posts/opportunity-cost-action-bias-cybersecurity-incident-response/</link>
            <pubDate>Wed, 27 Jul 2022 21:01:28 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/opportunity-cost-action-bias-cybersecurity-incident-response/</guid>
            <description>I recently had the delectable pleasure of collaborating with some fine folks on a paper about behavioral factors in infosec incident response, specifically about the opportunity cost of action bias – the human tendency to favor action over inaction – in these activities. The influence of cognitive bias in various infosec endeavors is an underexplored area of research and this paper aims to change that, starting with supporting more constructive incident response decision-making.
There are a few points we made in the paper that I especially want to highlight:
Overlooking opportunity cost – the loss of potential gain from other alternatives when a choice is made – is rife within infosec decision-making across all activities, not just incident response (IR); get hyped for more on this from Josiah and me soon
One way to start incorporating opportunity cost in practice is by considering the “null baseline”; in the context of incident response, it means considering the option of waiting to act
Action bias is especially bad for IR because any option may be chosen without regard for its cost or value – and such rushed decisions can lead to suboptimal outcomes
We use the Sony Breach as a case study of this and it’s worth pondering how much money they lost from the heat-of-the-moment decision to shut down all computers along with the corporate network once they discovered the intrusion
This problem extends beyond security-specific IR, too – for instance, restarting database services that are running “hot” can exacerbate and extend the incident, despite reflecting a common knee-jerk response
Part of incident preparation involves ensuring more choice options are available during response; for example, reliable logging pipelines would enable the rebuild option during response without sacrificing visibility into system behavior
Our hypothesis is that thinking about a temporary delay is more helpful than a “do nothing” mentality as well as the “do something” (i.e. action bias-driven) mentality – the paper, of course, elaborates on why we believe so
Practitioners frequently conflate ambiguity and uncertainty; in incident response, collecting more information (solving uncertainty) will not always guide you towards which choice will result in the optimal outcome (i.e. it won’t resolve ambiguity)
Opportunity cost in infosec must also be considered with respect to its externalities; decisions made by defenders will impose costs on other stakeholders, whether users, software engineers, or even society – and those costs are woefully neglected today
Pre-mortems, post-mortems, and chaos engineering experiments can all help incident response teams build “muscle memory” to support “watchful waiting” rather than succumbing to the instinct to act immediately
And honestly our send-off just straight up slaps: “To avoid a breach is to try and prevent it. Overcoming action bias and properly pricing in opportunity cost instead requires a focus on preparation.”
The paper’s venue is the upcoming Human Factors and Ergonomics Society Annual Meeting in October 2022, but my co-author Josiah Dykstra published it on his site for the viewing pleasure of all mortals out there seeking enlightenment: https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf
Abstract The hours and days immediately following the discovery of a cyber intrusion can be stressful and chaotic for victims. Without a documented and well-rehearsed incident response plan, people are prone to costly fear-based reactions. Action bias is the human tendency to favor action over inaction. It feels better for victims to do something even if rushed decisions are suboptimal to thoughtful, careful alternatives. Furthermore, the null baseline of doing nothing or watchful waiting can sometimes be advantageous. This paper describes an application of opportunity cost to action bias. While these insights are not yet backed by empirical data, this is the first work to examine the intersection of opportunity cost with action bias in cybersecurity incident response. Using Sony Pictures Entertainment as a case study, we discuss the implications of opportunity costs from acting prematurely and, conversely, the opportunity costs of waiting to act.
Citation Opportunity Cost of Action Bias in Cybersecurity Incident Response. Josiah Dykstra, Kelly Shortridge, Jamie Met, Douglas Hough. Human Factors and Ergonomics Society Annual Meeting, October 2022.
</description>
            <atom:content type="html"><![CDATA[<p>I recently had the delectable pleasure of collaborating with some fine folks on <a href="https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf">a paper about behavioral factors in infosec incident response</a>, specifically about the opportunity cost of action bias &ndash; the human tendency to favor action over inaction &ndash; in these activities. The influence of cognitive bias in various infosec endeavors is an underexplored <a href="/blog/tags/behavioral-infosec/">area of research</a> and this paper aims to change that, starting with supporting more constructive incident response decision-making.</p>
<p>There are a few points we made in <a href="https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf">the paper</a> that I especially want to highlight:</p>
<ul>
<li>Overlooking <a href="https://www.stlouisfed.org/open-vault/2020/january/real-life-examples-opportunity-cost">opportunity cost</a> &ndash; the loss of potential gain from other alternatives when a choice is made &ndash; is rife within infosec decision-making across all activities, not just incident response (IR); get hyped for more on this from Josiah and me soon</li>
<li>One way to start incorporating opportunity cost in practice is by considering the &ldquo;null baseline&rdquo;; in the context of incident response, it means considering the option of <em>waiting</em> to act</li>
<li><a href="https://thedecisionlab.com/biases/action-bias">Action bias</a> is especially bad for IR because <em>any</em> option may be chosen without regard for its cost or value &ndash; and such rushed decisions can lead to suboptimal outcomes</li>
<li>We use the Sony Breach as a case study of this and it&rsquo;s worth pondering how much money they lost from the heat-of-the-moment decision to shut down all computers along with the corporate network once they discovered the intrusion</li>
<li>This problem extends beyond security-specific IR, too &ndash; for instance, restarting database services that are running “hot” can exacerbate and extend the incident, despite reflecting a common knee-jerk response</li>
<li>Part of incident preparation involves ensuring more choice options are available during response; for example, reliable logging pipelines would enable the rebuild option during response without sacrificing visibility into system behavior</li>
<li>Our hypothesis is that thinking about a temporary delay is more helpful than a “do nothing” mentality as well as the &ldquo;do something&rdquo; (i.e. action bias-driven) mentality &ndash; the paper, of course, elaborates on why we believe so</li>
<li>Practitioners frequently conflate ambiguity and uncertainty; in incident response, collecting more information (solving uncertainty) will not always guide you towards which choice will result in the optimal outcome (i.e. it won&rsquo;t resolve ambiguity)</li>
<li>Opportunity cost in infosec must also be considered with respect to its externalities; decisions made by defenders will impose costs on other stakeholders, whether users, software engineers, or even society &ndash; and those costs are woefully neglected today</li>
<li>Pre-mortems, post-mortems, and chaos engineering experiments can all help incident response teams build &ldquo;muscle memory&rdquo; to support &ldquo;watchful waiting&rdquo; rather than succumbing to the instinct to act immediately</li>
<li>And honestly our send-off just straight up slaps: &ldquo;To <em>avoid</em> a breach is to try and prevent it. Overcoming action bias
and properly pricing in opportunity cost instead requires a focus on preparation.&rdquo;</li>
</ul>
<p>The paper&rsquo;s venue is the upcoming Human Factors and Ergonomics Society Annual Meeting in October 2022, but my co-author Josiah Dykstra published it on his site for the viewing pleasure of all mortals out there seeking enlightenment: <a href="https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf">https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf</a></p>
<h3 id="abstract">Abstract</h3>
<p>The hours and days immediately following the discovery of a cyber intrusion can be stressful and chaotic
for victims. Without a documented and well-rehearsed incident response plan, people are prone to costly
fear-based reactions. Action bias is the human tendency to favor action over inaction. It feels better for victims
to do something even if rushed decisions are suboptimal to thoughtful, careful alternatives. Furthermore, the
null baseline of doing nothing or watchful waiting can sometimes be advantageous. This paper describes
an application of opportunity cost to action bias. While these insights are not yet backed by empirical data,
this is the first work to examine the intersection of opportunity cost with action bias in cybersecurity incident
response. Using Sony Pictures Entertainment as a case study, we discuss the implications of opportunity costs
from acting prematurely and, conversely, the opportunity costs of waiting to act.</p>
<h3 id="citation">Citation</h3>
<p><a href="https://josiahdykstra.com/wp-content/uploads/2022/06/HFES2022_OpportunityCostAndActionBias.pdf">Opportunity Cost of Action Bias in Cybersecurity Incident Response</a>. Josiah Dykstra, Kelly Shortridge, Jamie Met, Douglas Hough. <em>Human Factors and Ergonomics Society Annual Meeting</em>, October 2022.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>HarpoCrates Pitchdeck: Remote Administration as a Service</title>
            <link>https://kellyshortridge.com/blog/posts/harpocrates-remote-admin-as-a-service-pitchdeck/</link>
            <pubDate>Mon, 23 May 2022 21:12:36 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/harpocrates-remote-admin-as-a-service-pitchdeck/</guid>
            <description> Disclaimer: this is a satirical post. The publication of this post taught me satire is regrettably an art of which infosec is neither familiar or fond.
This past weekend, I learned of the most revolutionary DevSecOps startup yet.
Naive to my imminent enlightenment, I crunched into some delectably crispy bacon while having brunch with my bff, Apate Dolus, in the Village. Just as I was arranging a forkful of over-medium egg with peppery watercress, she took off her Tom Ford sunnies and gingerly folded them next to her truffle omelette, straightening her posture and drawing a deep breath. I halted my brunching and returned her intense eye contact, now curious.
“What is it?”
“I’ve founded a startup. I’m raising a seed round. And I want you to share its vision with your audience.”
My heart rate relented. That was all? I continued my cronching and monching, now with an extra serving of blasé.
“So,” I asked in between bites, “what does it do?” Knowing Apate’s ambition, I assumed she must be capitalizing on the effervescent web3 hype or else had crafted some other couture cash grab.
But what she said next thunderbolted me in place.
“It is HarpoCrates and we’re revolutionizing remote administration as a service. We’re going after a $2 trillion TAM with another $10 trillion – and change,” she grinned, “in untapped upside.” Her phone was suddenly in my hand, the pitch deck’s cover blazing on the screen. “Read it. The secret is AI to deliver remote encryption services at lightspeed and hyperscale. Deep learning. NLP. The highest payment conversion rates you’ve ever seen. The biggest payments you’ve ever seen. Think of it this way… HarpoCrates is an anemone and the world is our tasty fucking oyster.”
Time stilled on the sleepy cobblestoned street. A fat bumblebee guzzled nectar in a violently pink azalea bush near our cast iron café table. I, too, felt I was drinking deeply of some sweet substance. Apate’s pitch was near divine, an ambrosia too sublime for mere mortal throats to imbibe. The TAM pierced my soul like Poseidon’s trident; the value prop glinted and scintillated like Athena’s fabled crown; the differentiators dazzled and tantalized to such a titillating degree that I imagined Zeus himself would deign to transform into a vulnerable VPN just for a moment’s entanglement with HarpoCrates – HarpoCrates, so clearly fated as the climax of DevSecOps by all the oracles across all the lands, lest they be fools.
Apate swirled her yuzu mimosa as a royal vizier might swirl a mystical tonic in their jeweled goblet, having just bequeathed arcane insights from distant coruscating constellations upon an awestruck prince. Her long, coffin-shaped nails now bore a new meaning: a prophecy of the competitors she would bury. Her diamond tennis bracelet gleamed like a pristine weapon to be wielded around the market’s tender neck, wringing every last dollar from its coffers.
I looked up at the clouds swaying across the sky to the jazz of the late spring breeze. Such majesty, so high up… but not as high as I knew HarpoCrates’ valuation must be. Perhaps not even as majestic as the RAaaS revolution to come.
“Do you have a lead yet?” I asked, still awestruck. She rolled her eyes.
“Obviously. Every top VC was begging to lead. But I want some darling angels in my choir. And that’s where you come in. RSA is soon and I want meetings with the best of the best in infosec.”
My hands trembled as I finished nibbling on my bacon and sipping my matcha latte. I felt honored with this glorious burden. This must be how Napoleon’s generals felt, I marveled, watching an overheated Chow Chow prance past our table. So, too, would HarpoCrates prance into countless enterprises and slobber all over them until they relinquished their coins.
Thus, here I am before the Internet today, spreading the word about the seed round to end all seed rounds, which is being led by the world’s best VC (who must remain anonymous, for secret reasons).
I’ve included the pitch deck below for the world’s perusal. HarpoCrates is actively seeking follow-on and angel investors, so if you’ll be in SF during RSAC week, I implore you to email Apate at ad@harpocrates.dev to learn more about this unprecedented opportunity and fund this unrivaled revolution in remote systems administration.
PreviousNext   Page: / </description>
            <atom:content type="html"><![CDATA[<hr>
<p><em>Disclaimer: this is a satirical post. The publication of this post taught me satire is regrettably an art of which infosec is neither familiar nor fond.</em></p>
<p>This past weekend, I learned of the most revolutionary DevSecOps startup yet.</p>
<p>Naive to my imminent enlightenment, I crunched into some delectably crispy bacon while having brunch with my bff, Apate Dolus, in the Village. Just as I was arranging a forkful of over-medium egg with peppery watercress, she took off her Tom Ford sunnies and gingerly folded them next to her truffle omelette, straightening her posture and drawing a deep breath. I halted my brunching and returned her intense eye contact, now curious.</p>
<p>&ldquo;What is it?&rdquo;</p>
<p>&ldquo;I&rsquo;ve founded a startup. I&rsquo;m raising a seed round. And I want you to share its vision with your audience.&rdquo;</p>
<p>My heart rate relented. That was all? I continued my cronching and monching, now with an extra serving of blasé.</p>
<p>&ldquo;So,&rdquo; I asked in between bites, &ldquo;what does it do?&rdquo; Knowing Apate&rsquo;s ambition, I assumed she must be capitalizing on the effervescent web3 hype or else had crafted some other couture cash grab.</p>
<p>But what she said next thunderbolted me in place.</p>
<p>&ldquo;<em>It</em> is HarpoCrates and we&rsquo;re revolutionizing remote administration as a service. We&rsquo;re going after a $2 trillion TAM with another $10 trillion &ndash; and change,&rdquo; she grinned, &ldquo;in untapped upside.&rdquo; Her phone was suddenly in my hand, the pitch deck&rsquo;s cover blazing on the screen. &ldquo;Read it. The secret is AI to deliver remote encryption services at lightspeed and hyperscale. Deep learning. NLP. The highest payment conversion rates you&rsquo;ve ever seen. The biggest <em>payments</em> you&rsquo;ve ever seen. Think of it this way&hellip; HarpoCrates is an anemone and the world is our tasty fucking oyster.&rdquo;</p>
<p>Time stilled on the sleepy cobblestoned street. A fat bumblebee guzzled nectar in a violently pink azalea bush near our cast iron café table. I, too, felt I was drinking deeply of some sweet substance. Apate&rsquo;s pitch was near divine, an ambrosia too sublime for mere mortal throats to imbibe. The TAM pierced my soul like Poseidon&rsquo;s trident; the value prop glinted and scintillated like Athena&rsquo;s fabled crown; the differentiators dazzled and tantalized to such a titillating degree that I imagined Zeus himself would deign to transform into a vulnerable VPN just for a moment&rsquo;s entanglement with HarpoCrates &ndash; HarpoCrates, so clearly fated as the climax of DevSecOps by all the oracles across all the lands, lest they be fools.</p>
<p>Apate swirled her yuzu mimosa as a royal vizier might swirl a mystical tonic in their jeweled goblet, having just bequeathed arcane insights from distant coruscating constellations upon an awestruck prince. Her long, coffin-shaped nails now bore a new meaning: a prophecy of the competitors she would bury. Her diamond tennis bracelet gleamed like a pristine weapon to be wielded around the market&rsquo;s tender neck, wringing every last dollar from its coffers.</p>
<p>I looked up at the clouds swaying across the sky to the jazz of the late spring breeze. Such majesty, so high up&hellip; but not as high as I knew HarpoCrates&rsquo; valuation must be. Perhaps not even as majestic as the RAaaS revolution to come.</p>
<p>&ldquo;Do you have a lead yet?&rdquo; I asked, still awestruck. She rolled her eyes.</p>
<p>&ldquo;Obviously. Every top VC was begging to lead. But I want some darling angels in my choir. And that&rsquo;s where you come in. RSA is soon and I want meetings with the best of the best in infosec.&rdquo;</p>
<p>My hands trembled as I finished nibbling on my bacon and sipping my matcha latte. I felt honored with this glorious burden. This must be how Napoleon&rsquo;s generals felt, I marveled, watching an overheated Chow Chow prance past our table. So, too, would HarpoCrates prance into countless enterprises and slobber all over them until they relinquished their coins.</p>
<p>Thus, here I am before the Internet today, spreading the word about the seed round to end all seed rounds, which is being led by the world&rsquo;s best VC (who must remain anonymous, <a href="https://twitter.com/lettersofnote/status/397354825915985921?lang=en">for secret reasons</a>).</p>
<p>I&rsquo;ve included the pitch deck below for the world&rsquo;s perusal. HarpoCrates is <strong>actively seeking follow-on and angel investors</strong>, so if you&rsquo;ll be in SF during RSAC week, I implore you to email Apate at <a href="mailto:ad@harpocrates.dev">ad@harpocrates.dev</a> to learn more about this unprecedented opportunity and fund this unrivaled revolution in remote systems administration.</p>
<script type="text/javascript" src= '/js/pdf-js/build/pdf.js'></script>
<style>
#the-canvas {
  border: 1px solid black;
  direction: ltr;
  width: 100%;
  height: auto;
  display: none;
}

#paginator {
  display: none;
  text-align: center;
  margin-bottom: 10px;
}

#paginator button {
  display: inline-block;
}

#loadingWrapper {
  display: none;
  justify-content: center;
  align-items: center;
  width: 100%;
  height: 350px;
}

#loading {
  display: inline-block;
  width: 50px;
  height: 50px;
  border: 3px solid #d2d0d0;;
  border-radius: 50%;
  border-top-color: #383838;
  animation: spin 1s ease-in-out infinite;
  -webkit-animation: spin 1s ease-in-out infinite;
}

@keyframes spin {
  to { -webkit-transform: rotate(360deg); }
}
@-webkit-keyframes spin {
  to { -webkit-transform: rotate(360deg); }
}
</style>

<div id="paginator">
    <button id="prev">Previous</button>
    <button id="next">Next</button>
    &nbsp; &nbsp;
    <span>Page: <span id="page_num"></span> / <span id="page_count"></span></span>
</div>
<div id="embed-pdf-container">
    <div id="loadingWrapper">
      <div id="loading"></div>
    </div>
    <canvas id="the-canvas"></canvas>
</div>

<script type="text/javascript">
window.onload = function() {


var url = "" + '\/blog\/img\/HarpoCrates-Overview-May-2022-Investor-Deck.pdf';

var hidePaginator = "" === "true";
var hideLoader = "" === "true";
var selectedPageNum = parseInt("") || 1;


var pdfjsLib = window['pdfjs-dist/build/pdf'];


pdfjsLib.GlobalWorkerOptions.workerSrc = "/blog/" + 'js/pdf-js/build/pdf.worker.js';


var pdfDoc = null,
    pageNum = selectedPageNum,
    pageRendering = false,
    pageNumPending = null,
    scale = 3,
    canvas = document.getElementById('the-canvas'),
    ctx = canvas.getContext('2d'),
    paginator = document.getElementById("paginator"),
    loadingWrapper = document.getElementById('loadingWrapper');



showPaginator();
showLoader();



function renderPage(num) {
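  // Fetch page `num` from the loaded document, size the canvas to its viewport,
  // render it, and then render any page that was requested while this render ran.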
  pageRendering = true;
  
  pdfDoc.getPage(num).then(function(page) {
    var viewport = page.getViewport({scale: scale});
    canvas.height = viewport.height;
    canvas.width = viewport.width;

    
    var renderContext = {
      canvasContext: ctx,
      viewport: viewport
    };
    var renderTask = page.render(renderContext);

    
    renderTask.promise.then(function() {
      pageRendering = false;
      showContent();
      
      if (pageNumPending !== null) {
        
        renderPage(pageNumPending);
        pageNumPending = null;
      }
    });
  });

  
  document.getElementById('page_num').textContent = num;
}



function showContent() {
  loadingWrapper.style.display = 'none';
  canvas.style.display = 'block';
}



function showLoader() {
  if(hideLoader) return
  loadingWrapper.style.display = 'flex';
  canvas.style.display = 'none';
}



function showPaginator() {
  if(hidePaginator) return
  paginator.style.display = 'block';
}



function queueRenderPage(num) {
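  // If a render is already in progress, remember the requested page number;
  // otherwise render it immediately.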
  if (pageRendering) {
    pageNumPending = num;
  } else {
    renderPage(num);
  }
}



function onPrevPage() {
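  // Paginator "Previous" handler: step back one page, stopping at the first page.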
  if (pageNum <= 1) {
    return;
  }
  pageNum--;
  queueRenderPage(pageNum);
}
document.getElementById('prev').addEventListener('click', onPrevPage);



function onNextPage() {
  if (pageNum >= pdfDoc.numPages) {
    return;
  }
  pageNum++;
  queueRenderPage(pageNum);
}
document.getElementById('next').addEventListener('click', onNextPage);



pdfjsLib.getDocument(url).promise.then(function(pdfDoc_) {
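  // Once the PDF loads: record its page count, clamp the initially requested
  // page to the document length, and render it.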
  pdfDoc = pdfDoc_;
  var numPages = pdfDoc.numPages;
  document.getElementById('page_count').textContent = numPages;
  
  
  if(pageNum > numPages) {
    pageNum = numPages
  }

  
  renderPage(pageNum);
});
}

</script>

]]></atom:content>
        </item>
        
        <item>
            <title>Infosec Startup Buzzword Bingo: 2022 Edition</title>
            <link>https://kellyshortridge.com/blog/posts/infosec-buzzword-bingo-2022/</link>
            <pubDate>Wed, 02 Feb 2022 08:10:34 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/infosec-buzzword-bingo-2022/</guid>
            <description>It is the year 2022. One hundred years after T.S. Eliot wrote The Wasteland, we find ourselves lost in a wasteland of information security buzzwords. If we grab a handful of dust from this parched land, what trends do we see? This edition of my annual Infosec Buzzword Bingo elucidates the current zeitgeist through which buzzwords are most popular among security startups.
I surveyed 100 infosec companies’ websites1, the vast majority of which are startups who raised VC funding in the past nine to twelve months. The idea behind the bingo card is to take it with you on physical journeys through vendor halls or virtual quests through vendor websites and see whether you can replace your existential dread with a cry out into the void of “Bingo!”.
For even more fun, see what your cursed cyber startup would offer by trying out my Infosec Startup Tagline Generator (built on Compute@Edge), which includes all the “best” buzzwords from this year’s cohort: https://brightly-willing-guinea.edgecompute.app/
Without further introduction, below is the 2022 Infosec Startup Buzzword Bingo card – read on if you want more analysis:
The top word this year was automated and its variants (automatically, automation, automates), which makes it the reigning champion since 2019. And everyone is now a platform, the second most popular word, even startups who do not actually offer a way to build or run anything on top of what they created (especially when their only creation is an MVP to raise their seed round).
Does this mean we will soon see a new word to indicate true platforms? I vote for flatform. Flatforms are shoes with thick soles but without heels, while platforms have thick soles as well as heels, making them far more precarious for perambulation and therefore making the distinction worthy of the “true platform” vs. “startup deeming themselves a platform on incredibly shaky semantic grounds.”
The following table lists the rest of the top 25 buzzwords and includes the number of companies who cited the buzzword, along with whether the buzzword was on prior bingo cards:
But the top words don’t tell the entire story. What merits inclusion on the bingo card is not just absolute popularity, but whether the buzzword is on the rise in usage, too. So, it behooves us to look at which buzzwords are en vogue and which are on their way to the clearance bin.
Which buzzwords are on the rise? Let’s start with the zeitgeisty words on the bingo card itself. Solutions are now far more dynamic (&#43;2,200% from the prior bingo card), finally acknowledging the nature of complex systems and the continuous nature of spacetime under general relativity. Vendors are also asking you to place your trust in Zero Trust (&#43;183%) tools, split between tools aligned with a zero trust philosophy and those that fall under the namesake market category and collection of technical capabilities. How fun for practitioners who are already struggling to understand wtf either of those mean.
While some buzzwords are arguably obnoxious but innocuous, some buzzwords are supposed to mean real things and misappropriation of them is a scourge for practitioners. For instance, the buzzword agentless (&#43;113%) has drifted from its true meaning, like one instance in which the purportedly agentless solution is an agent that lives inside your app instead of alongside it. Practitioners who read agentless and assume they will be adopting something lightweight and unobtrusive – without embedded hooks into the underlying system – will be understandably shocked upon implementation.
Similarly, the labels for fancy math like machine learning and AI are misused and abused to such a degree that they are rendered meaningless. While machine learning appears to be sliding into boring territory (-11%), vendors seem to be pivoting to labeling their linear regression models as AI (&#43;58%). There is actually nothing wrong with regex or deterministic, conditional logic – it often works better than fancy math – but labeling it as ML or AI is simply disingenuous.
Our cursed timeline demands that if startups want to raise funding from today’s VCs, labeling their wares as AI or ML is a lamentable prerequisite. Nevertheless, they should at least elucidate the approach without technobabble or, even better, share data points to prove why AI is superior to deterministic approaches (whether in terms of efficacy, speed, or scale) for practitioners’ sake.
Then there are buzzwords like cloud-native (&#43;170%) which are also supposed to mean real things, but the market map by the Cloud Native Computing Foundation looks like a conspiracist’s feverish ramblings and so I think its misuse is less appalling. If you don’t want a solution for cloud-native, many vendors also offer straight-up native (&#43;108%) solutions that belie their nature as bolt-on tools or are trying to make “can run in Kubernetes like most other Linux software” sound special. Alas, after reading their marketing fluff, I question whether these vendors are even reality-native.
Following the lead of the Webb Space Telescope, security professionals now seek to discover (&#43;79%) things like the oldest and most distant objects in the universe such as unpatched mainframes cowering pitifully in the COVID-vacated company headquarters. Security pros also want to optimize (&#43;100%), although most invocations of the word on startup vendor sites were not accompanied by measurement that would help evaluate efficacy. How strange.
Perhaps inspired by the car industry, security vendors will offer you an engine that is powerful to accelerate (&#43;53%) whatever things are being done. And two new words reflect changing concerns: security vendors, like chiropractors perhaps in many ways, want to help you fix your posture across the software development lifecycle – and ideally across your personal human lifecycle, too, to maximize customer lifetime value.
In terms of needless superlatives, vendors increasingly boast of being world-class (&#43;200%) and unmatched (&#43;1,100%) which perhaps better describes a sports champion. And a security solution is ideally frictionless (&#43;233%) and effortless (&#43;700%) because who needs the laws of thermodynamics?
Which buzzwords are on their way out? Our long national nightmare with threat being used in every sentence in infosec may finally be waning if its 26% drop in usage holds true across the industry (if only we could eradicate the term “bad guys” with similar efficacy). Similarly, fewer vendors talk about advanced (-27%), complex (-40%), unknown (-19%), or targeted (-60%) attacks now and less than 10% of vendors are even mentioning zero-day (-20%) vulns. I can only hope this means the median is countering their natural cognitive bias and focusing on a more realistic threat model.
There is also less focus now on whether a solution is next-gen (-20%) or robust (-29%), perhaps suggesting that security buyer personas have realized that copy-pasting old solutions to new environments with new problems is insufficient and that prevention isn’t a panacea. Offering intelligence (-26%) isn’t the differentiator it purportedly used to be and, perchance having beheld the lambent light of reason, no one mentions dark (-100%) as in “dark web” anymore.
Which buzzwords will people yell at me on Twitter for not including on the bingo card? Multi-cloud, DevSecOps, XDR, shift left, supply chain, SASE, SBOM – yes, yes I hear you but not enough vendors included them on their home page and product pages to merit inclusion on the card. No, I did not forget about them (DevSecOps and XDR each only had 12 vendors mentioning them; the rest of those listed have even fewer representative vendors).
I invite you to set up prediction markets on which buzzwords are likely to make the bingo card next year and then peer pressure vendors into including them during your own leisure time.
Which buzzwords are the weirdest? My favorite buzzword this year is definitely wowful for obvious reasons although best-in-breed must win as the most amusing typo (which perhaps reveals the quality of their wares). Supercharge is a fun new verb that emerged among seven vendors this year and the six vendors who used harden are far more mature than I.
Two vendors presumably took inspo from the #FreeBritney movement and used toxic to describe the things they help ameliorate. Two vendors have solutions which are multidimensional, presumably to stop dreaded wormhole attacks and events only known relative to the motion of observability wetware.
One vendor described their solution as omniscient, not specifying whether their interpretation of omniscience is compatible with immutability2 (which could make it incongruent with the cloud-native buzzword as well as classical deities). Someone, not content with describing APIs as “shadow”, threw all dignity to the wind and invented the term zombie APIs.
And finally, one vendor said their solution is zero-ops and I must invoke Heidegger to ask, “Why are there [ops] at all, instead of Nothing? That is the question.” Or should we follow Nishida Kitarō and ponder whether zero-ops means “[ops] affirms itself through its own self negation”3?
DevOps, or Zero-Ops, the two great ends of Fate,
And True or False, the subject of debate,
That perfect or destroy the vast designs of state —
When they have racked the developer’s breast,
Within thy pipeline most securely rest,
And when reduced to Rust, are least unsafe and best.4
Thanks to Adrian Sanabria and the Mad Tinkerer for feedback on this post, and to Mark Teodoro for porting my Python script into C@E (don’t blame him for the Geocities styling).
I did not survey their entire website, only the main page and product pages. If buzzwords appear in blogs, for instance, that isn’t captured. The goal is to home in on how infosec startups presently present themselves to the market. ↩︎
Kretzmann, N. (1966). Omniscience and immutability. The Journal of Philosophy, 63(14), 409-421. ↩︎
Krummel, J. W. (2018). On (the) nothing: Heidegger and Nishida. Continental Philosophy Review, 51(2), 239-268. ↩︎
My glorious spoof-poem is inspired by the poem “Upon Nothing” by John Wilmot, 2nd Earl of Rochester: https://www.poetryfoundation.org/poems/53720/upon-nothing ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>It is the year 2022. One hundred years after T.S. Eliot wrote <em>The Wasteland</em>, we find ourselves lost in a wasteland of information security buzzwords. If we grab a handful of dust from this parched land, what trends do we see? This edition of <a href="/blog/posts/buzzword-bingo-all-editions/">my annual Infosec Buzzword Bingo</a> elucidates the current zeitgeist through which buzzwords are most popular among security startups.</p>
<p>I surveyed 100 infosec companies’ websites<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, the vast majority of which are startups who raised VC funding in the past nine to twelve months. The idea behind the bingo card is to take it with you on physical journeys through vendor halls or virtual quests through vendor websites and see whether you can replace your existential dread with a cry out into the void of “Bingo!”.</p>
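<p><em>A minimal sketch of how such a tally can work – the vendor URLs and buzzword list below are illustrative placeholders, not the actual script behind these numbers – counting how many companies mention each term at least once:</em></p>
<pre><code class="language-python">import re
import urllib.request
from collections import Counter

# Illustrative placeholders; the real survey covered each vendor's main and product pages.
VENDOR_PAGES = {
    "examplecorp": ["https://example.com/", "https://example.com/product"],
}
BUZZWORDS = ["automated", "platform", "zero trust", "agentless", "cloud-native"]

def fetch(url: str) -> str:
    # Grab the raw page HTML and lowercase it for matching.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore").lower()

def survey(pages: dict, buzzwords: list) -> Counter:
    """Count how many companies (not occurrences) mention each buzzword."""
    counts = Counter()
    for vendor, urls in pages.items():
        text = " ".join(fetch(u) for u in urls)
        for word in buzzwords:
            if re.search(re.escape(word), text):
                counts[word] += 1
    return counts

if __name__ == "__main__":
    for word, n in survey(VENDOR_PAGES, BUZZWORDS).most_common():
        print(f"{n:3d}  {word}")
</code></pre>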
<p>For even more fun, see what your cursed cyber startup would offer by trying out my <a href="https://brightly-willing-guinea.edgecompute.app/">Infosec Startup Tagline Generator</a> (built on <a href="https://developer.fastly.com/solutions/starters/">Compute@Edge</a>), which includes all the &ldquo;best&rdquo; buzzwords from this year&rsquo;s cohort: <a href="https://brightly-willing-guinea.edgecompute.app/">https://brightly-willing-guinea.edgecompute.app/</a></p>
<p>Without further introduction, below is the 2022 Infosec Startup Buzzword Bingo card – read on if you want more analysis:</p>
<p><img src="/blog/img/infosec-startup-buzzword-bingo-2022.png" alt="The 2022 edition of the infosec buzzword bingo card. The background is terrible cyber art that is roughly a translucent caution sign resting above vaporwave colored cubes that are meant to look vaguely like a circuit board. In order from left to right, starting on the upper left side, the buzzwords included on the card are as follows. Automated. Discover. Cloud-native. Engine. Visibility. API. AI / ML. Seamless. Posture. One-click. Powerful. Dynamic. Continuous, which is the center word of the bingo card. Deep. Accelerate. Simple. Zero Trust. Agentless. Native. Lifecycle. Platform. Accurate. Enforce. Real-time. Context."></p>
<p>The top word this year was <em><strong>automated</strong></em> and its variants (<em><strong>automatically</strong></em>, <em><strong>automation</strong></em>, <em><strong>automates</strong></em>), which makes it the reigning champion since 2019. And everyone is now a <strong><em>platform</em></strong>, the second most popular word, even startups who do not actually offer a way to build or run anything on top of what they created (especially when their only creation is an MVP to raise their seed round).</p>
<p>Does this mean we will soon see a new word to indicate <em>true</em> platforms? I vote for <em><strong>flatform</strong></em>. Flatforms are shoes with thick soles but without heels, while platforms have thick soles as well as heels, making them far more precarious for perambulation and therefore making the distinction worthy of the “true platform” vs. “startup deeming themselves a platform on incredibly shaky semantic grounds.”</p>
<p>The following table lists the rest of the top 25 buzzwords and includes the number of companies who cited the buzzword, along with whether the buzzword was on prior bingo cards:</p>
<script src="https://gist.github.com/swagitda/9f0f8ff3d8b9080f0a21e16ab2bce4b5.js"></script>
<p>But the top words don’t tell the entire story. What merits inclusion on the bingo card is not just absolute popularity, but whether the buzzword is on the rise in usage, too. So, it behooves us to look at which buzzwords are <em>en vogue</em> and which are on their way to the clearance bin.</p>
<h2 id="which-buzzwords-are-on-the-rise">Which buzzwords are on the rise?</h2>
<p>Let’s start with the zeitgeisty words on the bingo card itself. Solutions are now far more <em><strong>dynamic</strong></em> (+2,200% from the prior bingo card), finally acknowledging the nature of complex systems and the <em><strong>continuous</strong></em> nature of spacetime under general relativity. Vendors are also asking you to place your trust in <em><strong>Zero Trust</strong></em> (+183%) tools, split between tools aligned with a zero trust <em>philosophy</em> and those that fall under the namesake <em>market category</em> and collection of technical capabilities. How fun for practitioners who are already struggling to understand wtf either of those mean.</p>
<p>While some buzzwords are arguably obnoxious but innocuous, some buzzwords are <em>supposed</em> to mean real things and misappropriation of them is a scourge for practitioners. For instance, the buzzword <em><strong>agentless</strong></em> (+113%) has drifted from its true meaning, like one instance in which the purportedly agentless solution is an agent that lives <em>inside</em> your app instead of alongside it. Practitioners who read agentless and assume they will be adopting something lightweight and unobtrusive – without embedded hooks into the underlying system – will be understandably shocked upon implementation.</p>
<p>Similarly, the labels for fancy math like <em><strong>machine learning</strong></em> and <em><strong>AI</strong></em> are misused and abused to such a degree that they are rendered meaningless. While machine learning appears to be sliding into boring territory (-11%), vendors seem to be pivoting to labeling their linear regression models as AI (+58%). There is actually nothing wrong with regex or deterministic, conditional logic – it often works <em>better</em> than fancy math – but labeling it as ML or AI is simply disingenuous.</p>
<p>Our cursed timeline demands that if startups want to raise funding from today’s VCs, labeling their wares as AI or ML is a lamentable prerequisite. Nevertheless, they should at least elucidate the approach without technobabble or, even better, share data points to prove why AI is superior to deterministic approaches (whether in terms of efficacy, speed, or scale) for practitioners’ sake.</p>
<p>Then there are buzzwords like <em><strong>cloud-native</strong></em> (+170%) which are also supposed to mean real things, but the market map by the Cloud Native Computing Foundation <a href="https://twitter.com/dastbe/status/1303858170155081728">looks like a conspiracist’s feverish ramblings</a> and so I think its misuse is less appalling. If you don’t want a solution for cloud-native, many vendors also offer straight-up <em><strong>native</strong></em> (+108%) solutions that belie their nature as bolt-on tools or are trying to make “can run in Kubernetes like most other Linux software” sound special. Alas, after reading their marketing fluff, I question whether these vendors are even reality-native.</p>
<p>Following the lead of the Webb Space Telescope, security professionals now seek to <em><strong>discover</strong></em> (+79%) things like the oldest and most distant objects in the universe such as unpatched mainframes cowering pitifully in the COVID-vacated company headquarters. Security pros also want to <em><strong>optimize</strong></em> (+100%), although most invocations of the word on startup vendor sites were not accompanied by measurement that would help evaluate efficacy. How strange.</p>
<p>Perhaps inspired by the car industry, security vendors will offer you an <em><strong>engine</strong></em> that is <em><strong>powerful</strong></em> to <em><strong>accelerate</strong></em> (+53%) whatever things are being done. And two new words reflect changing concerns: security vendors, like chiropractors perhaps in many ways, want to help you fix your <em><strong>posture</strong></em> across the software development <em><strong>lifecycle</strong></em> – and ideally across your personal human lifecycle, too, to maximize customer lifetime value.</p>
<p>In terms of needless superlatives, vendors increasingly boast of being <em><strong>world-class</strong></em> (+200%) and <em><strong>unmatched</strong></em> (+1,100%) which perhaps better describes a sports champion. And a security solution is ideally <em><strong>frictionless</strong></em> (+233%) and <em><strong>effortless</strong></em> (+700%) because who needs the laws of thermodynamics?</p>
<h2 id="which-buzzwords-are-on-their-way-out">Which buzzwords are on their way out?</h2>
<p>Our long national nightmare with <em><strong>threat</strong></em> being used in every sentence in infosec may finally be waning if its 26% drop in usage holds true across the industry (if only we could eradicate the term “bad guys” with similar efficacy). Similarly, fewer vendors talk about <em><strong>advanced</strong></em> (-27%), <em><strong>complex</strong></em> (-40%), <em><strong>unknown</strong></em> (-19%), or <em><strong>targeted</strong></em> (-60%) attacks now and less than 10% of vendors are even mentioning <em><strong>zero-day</strong></em> (-20%) vulns. I can only hope this means the median <a href="/blog/posts/when-prospect-theory-meets-chaos-engineering/">is countering their natural cognitive bias</a> and focusing on a more realistic threat model.</p>
<p>There is also less focus now on whether a solution is <em><strong>next-gen</strong></em> (-20%) or <em><strong>robust</strong></em> (-29%), perhaps suggesting that security buyer personas have realized that copy-pasting old solutions to new environments with new problems is insufficient and that prevention isn’t a panacea. Offering <em><strong>intelligence</strong></em> (-26%) isn’t the differentiator it purportedly used to be and, perchance having beheld the lambent light of reason, no one mentions <em><strong>dark</strong></em> (-100%) as in “dark web” anymore.</p>
<h2 id="which-buzzwords-will-people-yell-at-me-on-twitter-for-not-including-on-the-bingo-card">Which buzzwords will people yell at me on Twitter for not including on the bingo card?</h2>
<p>Multi-cloud, DevSecOps, XDR, shift left, supply chain, SASE, SBOM – yes, yes I hear you but not enough vendors included them on their home page and product pages to merit inclusion on the card. No, I did not forget about them (DevSecOps and XDR each only had 12 vendors mentioning them; the rest of those listed have even fewer representative vendors).</p>
<p>I invite you to set up prediction markets on which buzzwords are likely to make the bingo card next year and then peer pressure vendors into including them during your own leisure time.</p>
<h2 id="which-buzzwords-are-the-weirdest">Which buzzwords are the weirdest?</h2>
<p>My favorite buzzword this year is definitely <em><strong>wowful</strong></em> for obvious reasons although <em><strong>best-in-breed</strong></em> must win as the most amusing typo (which perhaps reveals the quality of their wares). <em><strong>Supercharge</strong></em> is a fun new verb that emerged among seven vendors this year and the six vendors who used <em><strong>harden</strong></em> are far more mature than I.</p>
<p>Two vendors presumably took inspo from the #FreeBritney movement and used <em><strong>toxic</strong></em> to describe the things they help ameliorate. Two vendors have solutions which are <em><strong>multidimensional</strong></em>, presumably to stop dreaded wormhole attacks and events only known relative to the motion of observability wetware.</p>
<p>One vendor described their solution as <em><strong>omniscient</strong></em>, not specifying whether their interpretation of omniscience is compatible with immutability<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> (which could make it incongruent with the cloud-native buzzword as well as classical deities). Someone, not content with describing APIs as “shadow”, threw all dignity to the wind and invented the term <em><strong>zombie APIs</strong></em>.</p>
<p>And finally, one vendor said their solution is <em><strong>zero-ops</strong></em> and I must invoke Heidegger to ask, “Why are there [ops] at all, instead of Nothing? That is the question.” Or should we follow Nishida Kitarō and ponder whether zero-ops means “[ops] affirms itself through its own self negation”<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>?</p>
<blockquote>
<p>DevOps, or Zero-Ops, the two great ends of Fate,</p>
<p>And True or False, the subject of debate,</p>
<p>That perfect or destroy the vast designs of state —</p>
<p>When they have racked the developer’s breast,</p>
<p>Within thy pipeline most securely rest,</p>
<p>And when reduced to Rust, are least unsafe and best.<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup></p>
</blockquote>
<hr>
<p>Thanks to Adrian Sanabria and the Mad Tinkerer for feedback on this post, and to Mark Teodoro for porting my Python script into C@E (don&rsquo;t blame him for the Geocities styling).</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>I did not survey their entire website, only the main page and product pages. If buzzwords appear in blogs, for instance, that isn’t captured. The goal is to home in on how infosec startups currently present themselves to the market.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Kretzmann, N. (1966). Omniscience and immutability. <em>The Journal of Philosophy, 63</em>(14), 409-421.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Krummel, J. W. (2018). On (the) nothing: Heidegger and Nishida. <em>Continental Philosophy Review, 51</em>(2), 239-268.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>My glorious spoof-poem is inspired by the poem &ldquo;Upon Nothing&rdquo; by John Wilmot, 2nd Earl of Rochester: <a href="https://www.poetryfoundation.org/poems/53720/upon-nothing">https://www.poetryfoundation.org/poems/53720/upon-nothing</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>The Security Obstructionism (SecObs) Market</title>
            <link>https://kellyshortridge.com/blog/posts/the-security-obstructionism-secobs-market/</link>
            <pubDate>Wed, 12 Jan 2022 08:55:49 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-security-obstructionism-secobs-market/</guid>
            <description>Obstruction is “a thing that impedes or prevents passage or progress; an obstacle or blockage.” Under this definition, I would include sabotage – deliberate obstruction or damage – as well as passivity, or what I like to term “aggressive passivity” – deliberate non-interference to actuate an undesirable outcome (the “sink or swim” approach).
In security, obstructionism foments the dreaded Department of No, the begrudged gatekeeper, and the truculent Security Theatre1. Hence, I am introducing the term Security Obstructionism (SecObs)2, a category of tools, policies, and practices whose outcome is to impede or prevent progress for security’s (speculative) sake. I suspect the TAM (total addressable market) for SecObs is enormous and perhaps provides a more coherent understanding of security stacks than traditional market categories.
But outputs aren’t outcomes3. The amount of activity performed by a team is not equivalent to productivity nor its ability to produce desirable outcomes4. The secret to the SecObs market is that this does not matter. The point of SecObs is not better security outcomes for the business or end users. The point is more security outputs as a proxy for progress and these outputs impart more control over the organization, transmogrifying into power and status. In essence, the point is a self-perpetuating organism5.
How does the infosec organism self-perpetuate via SecObs? Here are some examples (which perhaps can be called “Indicators of Obstructionism” or IoOs6):
- Vulnerability management that creates long lists of things to triage (and outside of dev workflows)
- “Shadow” anything – shadow IT, shadow SaaS, shadow APIs…
- Manual security reviews, manual change approval – basically anything to block shipping new code unless security personally gives the go ahead
- Internal education programs that are not based on measurable security outcomes (the primary outcome is wasted time)
- “Gotchya” phishing simulations that primarily measure the efficacy of phishing copy rather than the efficacy of security strategy
- Password rotation, key rotation, access approval, and other policies that may leave internal users unable to work
- Corporate VPNs, whose most effective use in 2022 is perhaps as the entry point for Ransomware-as-a-Service operations
- Shutting down modernization initiatives (cloud, microservices, etc.), slowing adoption of new tech or tools7
- “Insider threat” detection that bears a remarkable resemblance to malware (such as using keyloggers8 and screen recording9…)
- Petulantly allowing “self-serve” security… but without providing recommended patterns or guidance
- As soon as a security property or feature is embraced by the rest of the organization, it no longer “counts” as security – like how I’ve heard SSO recently labeled somewhat pejoratively as “convenience” vs. security by some infosec professionals…
As seems necessary to legitimize a market category, I plotted SecObs on a market “graph”10:
Astute readers might have noted that some of these SecObs practices fall under the purview of “DevSecOps,” which is basically SecObs disguised in a hoodie, AllBirds, and Fjallraven backpack (carrying a Macbook). The typical definition of DevSecOps is that security is integrated across the software development lifecycle11 and it is an unobjectionable goal. The problematic part is that the goal is twisted from a thoughtful entwining of things that could promote systems resilience from design through to delivery into a goal of security reasserting itself as an authority.
But even the typical definition of DevSecOps does not justify its existence as a term; we would also need DevTestOps, DevQAOps, DevComplianceOps, DevAccountingOps, DevGeneralCounselOps to ensure that their concerns are also considered or built-in throughout the SDLC (as they should be!). Ergo, DevSecOps is quite literally Security Obstructionism in the linguistic sense but lamentably in the living sense, too – inserting security into the equation to impede Dev from meeting Ops.
The tragedy of SecObs echoes across hype cycles: bad security ideology mixed with bad incentives leads to bad implementation which leads to bad outcomes in all dimensions except the justification of status quo security people’s existence. Implementation is manipulable and can pervert even noble goals into SecObs. For instance, “shift left” is the purported mantra of DevSecOps12, but what it means too often in practice is shifting obstructionism earlier into the development process.
There are arguably some psychological benefits of ensuring expectations of timely software releases are destroyed sooner rather than later13, but the more tangible outcomes are that the obsession with preventing failure happens in more places. As a result, there are more outputs and more opportunities for finger pointing and citing “you did not follow process at X, Y juncture” when something goes wrong. Because SecObs is a defense mechanism, perhaps the most effective one in status quo security’s arsenal.
Obstruction is ultimately about preservation. Nobody obstructs something from happening if they want things to change. If security wanted things to go the DevOps way, they would understand that security, like resilience or reliability or maintainability14, is part of the outcomes that DevOps practices aim to achieve – and that there is immense value in ensuring all those qualities can be nurtured as a check against myopic organizational pressures15.
The problem is, if security is already one of the qualities that orgs doing the DevOps are trying to achieve, then that means the status quo will change. Software engineers, architects, and SREs will identify where existing security programs are falling short of desired outcomes and pioneer new programs to solve these challenges using their expertise as builders.
They will develop strategies to perform asset inventory, testing, and patching their own way – one that likely treats bugs as bugs, regardless of whether there are performance or security implications16. That means the security team is no longer performing these actions, which is one less thing to point to when proving the security strategy is “successful.” Never mind that security could absolutely still be essential by providing recommended patterns, serving as an expert sounding board, operating like a platform engineering team to build foundational systems to make security work easier, or leading security chaos engineering experiments17. Security professionals will be expected to design systems – a daunting shift that can catalyze existential panic about whether existing skill sets actually matter.
I suspect the lizard brain18 origin of SecObs lurks within in-groups and out-groups19. The information security establishment has seen itself as a marginalized outsider, a scorned prophet, an unfairly resented authority figure who is just trying to keep the devs from sticking forks into electrical sockets. Humans like seeing members of their in-group succeed, but they love seeing members of the out-group fail (or at least suffer)20. Humans will sacrifice the greater good – or even more for themselves in a bigger but equally-divided pie – if it means the out-group gets less than they do21.
Adopting SecObs is how infosec can ensure the out-group(s) receive less. I have long been perplexed why some security professionals are quite so resistant to my thinky thinky – how could they so detest and resent something that extirpates toil for them so that they may stretch out their strategic wings to soar on the zephyrs of innovation? I’ve felt like they must see me as K-2SO saying “Congratulations! You are being rescued! Please do not resist.”
What I realized is that it does not matter if Security Chaos Engineering22 or all the other things23 I’ve proposed make these infosec traditionalists better off; what matters is that they feel that they are worse off on a relative basis24. SecObs makes everyone else in the organization miserable and puts them at least partially under infosec’s thumb. Therefore, even though it is a woefully inefficient use of the security team’s time, SecObs makes infosec better off on a relative basis – if not equals with engineering, at least able to directly impact their outcomes; if not fulfilled by their work, at least they aren’t the ones facing a Kafkaesque imposition on their workflows.
SecObs depends on the definition of “secure” remaining nebulous and unquantifiable. The argument for DevSecOps is that “while DevOps applications have stormed ahead in terms of speed, scale and functionality, they are often lacking in robust security and compliance.”25 But what does “robust security” even mean? The justification for Sec being treated as an equal among Dev and Ops is based on wielding the abstract ideal of “security” as a Maginot Line. If you ask what a “secure” app means, you will rarely receive an actionable or consistent answer. Is it free of bugs? Does it never experience failure? Or is it something that only security teams and security teams alone can identify, akin to “I know it when I see it”26?
Conceiving concrete security outcomes is not an arcane art, despite what the industry inculcates. An engineering team’s product is broken if they aren’t meeting relevant compliance standards. If they are addressing the retail industry, they can’t have a product if they aren’t PCI compliant; their customers quite literally cannot purchase it. The same is true for financial services or healthcare.
And in my recent experience27, SREs think about impacts to availability more than security people, despite availability being the A in the classic C.I.A. triad (which just turned 44 years old a few months ago)28. A cryptominer can impact stability and reliability in production – which jeopardizes the organization’s ability to conduct its business. An attacker exploiting a web vuln and crashing the machine causes downtime. Exfil of customer data can cause latency and result in compliance sanctions.
SecObs spends all its time fretting about preventing incidents and implementing tools and policies that impede business operations29 under the guise of collapsing the probabilistic wave function of failure to zero while spending very little time actually preparing for the inevitable incident.
But if an attacker cuts down an application in a hosted container forest and it automatically disappears and grows again, who cares? The impact is negligible, just as it should be if security is done well – if security is focused on outcomes rather than outputs. All that prevention just doesn’t matter if you can’t recover quickly when something bad happens, which it will. And what was the point of holding up a release by a week because the security team wanted to personally inspect it first for bugs when the impact of exploiting those bugs is “autoprovision another container to replace the compromised one”?
What is to be done?30 When you see or hear weasel words like “appropriate” or “sufficient” or “robust” to describe a “level” or “maturity model”31 of security, it is worth pausing to ponder whether SecObs abounds. If the security program evokes the on-prem monolith and waterfalls era – SAST, DAST, vulnerability management, security reviews and approvals – but is festooned with the words “continuous” or “automated”, then perhaps what is now automated and continuous is SecObs. When you hear “we need to bake security into [thing]”, it could be a boon – weaving security into workflows to make it consistent, repeatable, and scalable – but it could also be a leading indicator of SecObs; engineering teams will be left to sink or swim because when they fail, it’s an opportunity for SecObs to say, “See why you need us?”
SecObs is pernicious; it is deeply rooted in the information security swamp and will be difficult to excise from the industry. DevSecOps may be its modern incarnation, but SecObs is a larger problem that existed before this trend and will continue to exist after it wanes. SecObs carries a massive TAM – at least $10 billion and likely much higher32 – which means there are legions of incumbents incentivized to actively fight against its removal.
The strategies I have seen work are to burn down the status quo (quite literally firing obstructionists and building anew) and / or to support motivated software engineers and architects building overarching security programs in parallel – programs that treat patches like other upgrades and attacks like other incidents. Security is treated as a facet of resilience, as it should be, because it reflects reality33.
What I have never seen work is attempting to modernize SecObs, which is what we are seeing with too much of the DevSecOps movement. Getting status quo security pros up to speed on the latest technology just means they’ll now be able to weaponize it and talk about the dangers of shadow APIs and shadow Infrastructure as Code and shadow functions.
Because getting up to speed in the SecObs market isn’t about understanding the technology – how it works, its strengths, its potential, its deficiencies, its concerns – it’s about understanding the power dynamics of it and figuring out where security can best assert itself to maintain control in the organization. Find a vulnerability, exploit it, persist in the system… maybe the only difference between the “good actors” and the “bad actors” is that the bad ones make money for their organizations.
Thanks to Camille Fournier, Ryan Petrich, Greg Poirier, Andrew Ruef, James Turnbull, and Leif Walsh for feedback on this post.
Enjoy this post? Stay tuned for the full Security Chaos Engineering book later in 2022. In the meantime, you can read the Security Chaos Engineering report in the O’Reilly Learning Library.
For a theatrical discussion of Security Theatre, I recommend my keynote Exit Stage Left: Eradicating Security Theatre: https://www.youtube.com/watch?v=kiunphALNKw ↩︎
I will use SecObs throughout this post since pithy buzzwords seem to be infosec’s perpetual zeitgeist. ↩︎
While I recommend reading her book in full, this lucid interview with Dr. Forsgren cites examples of outcomes vs. outputs in the context of software engineering: https://www.techrepublic.com/article/how-to-measure-outcomes-of-your-companys-devops-initiatives/ Book citation: Forsgren, PhD, N., Humble, J., &amp; Kim, G. (2018). Accelerate: The science of lean software and devops: Building and scaling high performing technology organizations. IT Revolution. ↩︎
Forsgren, N., Storey, M. A., Maddila, C., Zimmermann, T., Houck, B., &amp; Butler, J. (2021). The SPACE of Developer Productivity: There’s more to it than you think. Queue, 19(1), 20-48. https://queue.acm.org/detail.cfm?id=3454124 ↩︎
This could perhaps be called the Tumor Model of Information Security. ↩︎
Perhaps later in 2022 we will see a YARA scanner for IoOs raise a $100 million Series A with a $1 billion post-money valuation on $1 million ARR. ↩︎
New and not-quite-compliant on everything is much better than old and unmaintained but compliant. The inability to update a system should terrify security; alas, in many cases, it is an afterthought or worse, seen as a comfort. ↩︎
Usually vendors won’t say “keylogging” explicitly but will use euphemisms like “keystroke dynamics”, “keystroke logging”, or “user behavior analytics”. As CISA explains in their Insider Threat Mitigation Guide about User Activity Monitoring (UAM), “In general, UAM software monitors the full range of a user’s cyber behavior. It can log keystrokes, capture screenshots, make video recordings of sessions, inspect network packets, monitor kernels, track web browsing and searching, record the uploading and downloading of files, and monitor system logs for a comprehensive picture of activity across the network.” See also the presentation Exploring keystroke dynamics for insider threat detection. ↩︎
CISA’s description of UAM tools (see citation 8) also notes the ability to “make video recordings of sessions” as a general capability of insider threat tools. As an example of a vendor that isn’t quite as shy about it: https://www.proofpoint.com/us/blog/insider-threat-management/what-advanced-corporate-keylogging-definition-benefits-and-uses ↩︎
Unlike other entities purporting to analyze markets, the Infernal Quadrant is actually a quadrant because it is a plane divided into four infinite regions. It is hard to imagine something less magical than constraining infinite regions by bounding them in a larger square, although this perhaps exposes the underlying principal problem: fitting everything into neat boxes and calling them something they’re not. ↩︎
Each vendor defines DevSecOps in their own way, but this is the definition that stays constant across most of them. Some vendors additionally highlight automation, some focus more on shift left, some suggest security processes are handled by devs, and some talk about “enabling development of secure software at the speed of Agile and DevOps”. ↩︎
Although IBM refers to “Shift Left” as a mantra rather than the mantra, suggesting that the collective noun for DevSecOps is a mantra of DevSecOpses. ↩︎
Koyama, T., McHaffie, J. G., Laurienti, P. J., &amp; Coghill, R. C. (2005). The subjective experience of pain: where expectations become reality. Proceedings of the National Academy of Sciences, 102(36), 12950-12955. https://www.pnas.org/content/pnas/102/36/12950.full.pdf ↩︎
Kleppmann, M. (2017). Designing data-intensive applications: The big ideas behind reliable, scalable, and maintainable systems. “O’Reilly Media, Inc.”. ↩︎
Rasmussen, J. (1997). Risk management in a dynamic society: a modelling problem. Safety science, 27(2-3), 183-213. http://sunnyday.mit.edu/16.863/rasmussen-safetyscience.pdf ↩︎
This recalls Linus Torvalds’ remark in 2008: “I personally consider security bugs to be just ‘normal bugs.’ I don’t cover them up, but I also don’t have any reason what-so-ever to think it’s a good idea to track them and announce them as something special.” https://yarchive.net/comp/linux/security_bugs.html ↩︎
For more on security chaos engineering experiments, either read the SCE ebook or watch my talk “The Scientific Method: Security Chaos Experimentation &amp; Attacker Math” https://www.youtube.com/watch?v=oJ3iSyhWb5U ↩︎
One of my finest achievements is being responsible for the term “lizard brain” making it into the Wall Street Journal. Mitchell, H. (2021, September 7). How Hackers Use Our Brains Against Us. The Wall Street Journal. https://www.wsj.com/articles/how-hackers-use-our-brains-against-us-11631044800 ↩︎
I grazed the surface of in-groups and out-groups in my blog post “On YOLOsec and FOMOsec” but this particular insight had not yet coalesced in my mind at the time. /blog/posts/on-yolosec-and-fomosec/ ↩︎
Molenberghs, P., &amp; Louis, W. R. (2018). Insights from fMRI studies into ingroup bias. Frontiers in psychology, 9, 1868. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6174241/ ↩︎
The OG paper on this dynamic is: Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223(5), 96-103. https://faculty.ucmerced.edu/jvevea/classes/Spark/readings/tajfel-1970-experiments-in-intergroup-discrimination.pdf ↩︎
Shortridge, K., Rinehart, A. (2020). Security Chaos Engineering. United States: O’Reilly Media, Incorporated. https://www.oreilly.com/library/view/security-chaos-engineering/9781492080350/ ↩︎
My favorite of these things perhaps being Darth Jar Jar: A Model for Infosec Innovation: /blog/posts/darth-jar-jar-model-infosec-innovation/ followed by Lamboozling Attackers: A New Generation of Deception https://queue.acm.org/detail.cfm?id=3494836 ↩︎
For a 101 about reference points in decision making (and about the OG behavioral economics theory, Prospect Theory) I recommend reading the Decision Lab’s guide on it: https://thedecisionlab.com/reference-guide/economics/reference-point/ ↩︎
Quoted from https://www.forcepoint.com/cyber-edu/devsecops ↩︎
I will spare y’all a diversion into epistemology and instead just cite this somewhat bizarre historical examination of the quote, which, incidentally, serves as another example of the potency of in-group vs. out-group framing: https://www.wsj.com/articles/BL-LB-4558 ↩︎
By recent experience I mean reception to my talks, writings, and the Security Chaos Engineering e-book. But the SRE book provides additional supporting evidence by repeatedly underlining the importance of availability to SRE success: https://sre.google/sre-book/service-level-objectives/ ↩︎
Ruthberg, Z. G., &amp; McKenzie, R. G. (1977). Audit and Evaluation of Computer Security. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nbsspecialpublication500-19.pdf ↩︎
For instance, a recent infosec Twitter thread suggested people TAKE THEIR SHIT OFF THE INTERNET as a solution to problems like vulnerabilities in vCenter instances. Its popularity even led to the creation of a site dedicated to this mindblowing advice: https://www.getyourshitofftheinternet.com/ ↩︎
This reference to the title of Lenin’s pamphlet is a chance for me to shoehorn in my quip that status quo information security should adore Marx’s labor theory of value (originally David Ricardo’s) in which goods or services are valued based on the effort that went into producing them rather than based on the consumer’s preferences (I will avoid going down the rabbit hole of decision theory at this juncture). ↩︎
I suggest reading Dr. Forsgren’s excellent takedown of maturity models for more on why they are “for chumps”: https://twitter.com/nicolefv/status/1130192402608664576 ↩︎
The entire information security market is somewhere between ~$165 billion and ~$185 billion as of 2020. It feels reasonable that at least ~6% of that spending is in security obstructionism. To wit, the vulnerability management market is nearly $14 billion as of 2021, the VPN market is also around $14 billion, and the CASB market (which addresses “shadow IT”) was valued at just under $9 billion in 2020. Smaller categories include the cybersecurity awareness training market at $1 billion in 2021, SAST is less than a billion, and the insider threat market, which is still small enough that Gartner hasn’t sized it yet it seems. I strongly suspect that some percentage of each security market category includes tools that facilitate obstructionism, but that is difficult to quantify. ↩︎
Connelly, E. B., Allen, C. R., Hatfield, K., Palma-Oliveira, J. M., Woods, D. D., &amp; Linkov, I. (2017). Features of resilience. Environment systems and decisions, 37(1), 46-50. https://www.osti.gov/pages/servlets/purl/1346540 ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Obstruction is “a thing that impedes or prevents passage or progress; an obstacle or blockage.” Under this definition, I would include sabotage – deliberate obstruction or damage – as well as passivity, or what I like to term “aggressive passivity” – deliberate non-interference to actuate an undesirable outcome (the “sink or swim” approach).</p>
<p>In security, obstructionism foments the dreaded Department of No, the begrudged gatekeeper, and the truculent Security Theatre<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Hence, I am introducing the term Security Obstructionism (SecObs)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, a category of tools, policies, and practices whose outcome is to impede or prevent progress for security’s (speculative) sake. I suspect the TAM (total addressable market) for SecObs is enormous and perhaps provides a more coherent understanding of security stacks than traditional market categories.</p>
<p>But outputs aren’t outcomes<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. The amount of activity performed by a team is not equivalent to productivity nor its ability to produce desirable outcomes<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>. The secret to the SecObs market is that this does not matter. The point of SecObs is not better security outcomes for the business or end users. The point is more security outputs as a proxy for progress and these outputs impart more control over the organization, transmogrifying into power and status. In essence, the point is a self-perpetuating organism<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<p>How does the infosec organism self-perpetuate via SecObs? Here are some examples (which perhaps can be called “Indicators of Obstructionism” or IoOs<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>):</p>
<ul>
<li>Vulnerability management that creates long lists of things to triage (and outside of dev workflows)</li>
<li>“Shadow” anything – shadow IT, shadow SaaS, shadow APIs…</li>
<li>Manual security reviews, manual change approval – basically anything to block shipping new code unless security personally gives the go ahead</li>
<li>Internal education programs that are not based on measurable security outcomes (the primary outcome is wasted time)</li>
<li>“Gotchya” phishing simulations that primarily measure the efficacy of phishing copy rather than the efficacy of security strategy</li>
<li>Password rotation, key rotation, access approval, and other policies that may leave internal users unable to work</li>
<li>Corporate VPNs, whose most effective use in 2022 is perhaps as the entry point for Ransomware-as-a-Service operations</li>
<li>Shutting down modernization initiatives (cloud, microservices, etc.), slowing adoption of new tech or tools<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup></li>
<li>“Insider threat” detection that bears a remarkable resemblance to malware (such as using keyloggers<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> and screen recording<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>…)</li>
<li>Petulantly allowing “self-serve” security… but without providing recommended patterns or guidance</li>
<li>As soon as a security property or feature is embraced by the rest of the organization, it no longer “counts” as security – like how I’ve heard SSO recently labeled somewhat pejoratively as “convenience” vs. security by some infosec professionals…</li>
</ul>
<p>As seems necessary to legitimize a market category, I plotted SecObs on a market &ldquo;graph&rdquo;<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup>:
<img src="/blog/img/secobs-infernal-quadrant.png" alt="The Infernal Quadrant for Security Obstructionism"></p>
<p>Astute readers might have noted that some of these SecObs practices fall under the purview of “DevSecOps,” which is basically SecObs disguised in a hoodie, AllBirds, and Fjallraven backpack (carrying a Macbook). The typical definition of DevSecOps is that security is integrated across the software development lifecycle<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup> and it is an unobjectionable goal. The problematic part is that the goal is twisted from a thoughtful entwining of things that could promote systems resilience from design through to delivery into a goal of security reasserting itself as an authority.</p>
<p>But even the typical definition of DevSecOps does not justify its existence as a term; we would also need DevTestOps, DevQAOps, DevComplianceOps, DevAccountingOps, DevGeneralCounselOps to ensure that <em>their</em> concerns are also considered or built-in throughout the SDLC (as they should be!). Ergo, DevSecOps is quite literally Security Obstructionism in the linguistic sense but lamentably in the living sense, too – inserting security into the equation to impede Dev from meeting Ops.</p>
<p>The tragedy of SecObs echoes across hype cycles: bad security ideology mixed with bad incentives leads to bad implementation which leads to bad outcomes in all dimensions except the justification of status quo security people’s existence. Implementation is manipulable and can pervert even noble goals into SecObs. For instance, “shift left” is the purported mantra of DevSecOps<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>, but what it means too often in practice is shifting obstructionism earlier into the development process.</p>
<p>There are arguably some psychological benefits of ensuring expectations of timely software releases are destroyed sooner rather than later<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup>, but the more tangible outcomes are that the obsession with preventing failure happens in more places. As a result, there are more outputs and more opportunities for finger pointing and citing “you did not follow process at X, Y juncture” when something goes wrong. Because SecObs is a defense mechanism, perhaps the most effective one in status quo security’s arsenal.</p>
<p>Obstruction is ultimately about preservation. Nobody obstructs something from happening if they <em>want</em> things to change. If security <em>wanted</em> things to go the DevOps way, they would understand that security, like resilience or reliability or maintainability<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>, is part of the outcomes that DevOps practices aim to achieve – and that there is immense value in ensuring all those qualities can be nurtured as a check against myopic organizational pressures<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup>.</p>
<p>The problem is, if security <em>is</em> already one of the qualities that orgs doing the DevOps are trying to achieve, then that means the status quo will change. Software engineers, architects, and SREs will identify where existing security programs are falling short of desired outcomes and pioneer new programs to solve these challenges using their expertise as builders.</p>
<p>They will develop strategies to perform asset inventory, testing, and patching their own way – one that likely treats bugs as bugs, regardless of whether there are performance or security implications<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup>. That means the security team is no longer performing these actions, which is one less thing to point to when proving the security strategy is “successful.” Never mind that security could absolutely still be essential by providing recommended patterns, serving as an expert sounding board, operating like a platform engineering team to build foundational systems to make security work easier, or leading security chaos engineering experiments<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>. Security professionals will be expected to design systems &ndash; a daunting shift that can catalyze existential panic about whether existing skill sets actually matter.</p>
<p>I suspect the lizard brain<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup> origin of SecObs lurks within in-groups and out-groups<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup>. The information security establishment has seen itself as a marginalized outsider, a scorned prophet, an unfairly resented authority figure who is just trying to keep the devs from sticking forks into electrical sockets. Humans like seeing members of their in-group succeed, but they <em>love</em> seeing members of the out-group fail (or at least suffer)<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>. Humans will sacrifice the greater good – or even more for themselves in a bigger but equally-divided pie – if it means the out-group gets less than they do<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup>.</p>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/please-do-not-resist.jpg" alt="A screenshot of K-2SO from the movie Rogue One saying Congratulations! You are being rescued! Please do not resist.">
<p>Adopting SecObs is how infosec can ensure the out-group(s) receive less. I have long been perplexed why some security professionals are quite so resistant to my thinky thinky – how could they so detest and resent something that extirpates toil for them so that they may stretch out their strategic wings to soar on the zephyrs of innovation? I’ve felt like they must see me <a href="https://en.meming.world/wiki/Congratulations!_You_Are_Being_Rescued">as K-2SO saying</a> “Congratulations! You are being rescued! Please do not resist.”</p>
<p>What I realized is that it does not matter if Security Chaos Engineering<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup> or all the other things<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup> I’ve proposed make these infosec traditionalists better off; what matters is that they feel that they are worse off on a <em>relative</em> basis<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup>. SecObs makes everyone else in the organization miserable and puts them at least partially under infosec’s thumb. Therefore, even though it is a woefully inefficient use of the security team’s time, SecObs makes infosec better off on a <em>relative</em> basis – if not equals with engineering, at least able to directly impact their outcomes; if not fulfilled by their work, at least they aren’t the ones facing a Kafkaesque imposition on their workflows.</p>
<p>SecObs depends on the definition of “secure” remaining nebulous and unquantifiable. The argument for DevSecOps is that “while DevOps applications have stormed ahead in terms of speed, scale and functionality, they are often lacking in robust security and compliance.”<sup id="fnref:25"><a href="#fn:25" class="footnote-ref" role="doc-noteref">25</a></sup> But what does “robust security” even mean? The justification for Sec being treated as an equal among Dev and Ops is based on wielding the abstract ideal of “security” as a Maginot Line. If you ask what a “secure” app means, you will rarely receive an actionable or consistent answer. Is it free of bugs? Does it never experience failure? Or is it something that only security teams and security teams alone can identify, akin to “I know it when I see it”<sup id="fnref:26"><a href="#fn:26" class="footnote-ref" role="doc-noteref">26</a></sup>?</p>
<p>Conceiving concrete security outcomes is not an arcane art, despite what the industry inculcates. An engineering team’s product is broken if they aren’t meeting relevant compliance standards. If they are addressing the retail industry, they can’t have a product if they aren’t PCI compliant; their customers quite literally cannot purchase it. The same is true for financial services or healthcare.</p>
<p>And in my recent experience<sup id="fnref:27"><a href="#fn:27" class="footnote-ref" role="doc-noteref">27</a></sup>, SREs think about impacts to availability more than security people, despite availability being the A in the classic C.I.A. triad (which just turned 44 years old a few months ago)<sup id="fnref:28"><a href="#fn:28" class="footnote-ref" role="doc-noteref">28</a></sup>. A cryptominer can impact stability and reliability in production – which jeopardizes the organization’s ability to conduct its business. An attacker exploiting a web vuln and crashing the machine causes downtime. Exfil of customer data can cause latency and result in compliance sanctions.</p>
<p>SecObs spends all its time fretting about preventing incidents and implementing tools and policies that impede business operations<sup id="fnref:29"><a href="#fn:29" class="footnote-ref" role="doc-noteref">29</a></sup> under the guise of collapsing the probabilistic wave function of failure to zero while spending very little time actually preparing for the inevitable incident.</p>
<p>But if an attacker cuts down an application in a hosted container forest and it automatically disappears and grows again, who cares? The impact is negligible, <em>just as it should be</em> if security is done well – if security is focused on outcomes rather than outputs. All that prevention just doesn’t matter if you can’t recover quickly when something bad happens, <em>which it will</em>. And what was the point of holding up a release by a week because the security team wanted to personally inspect it first for bugs when the impact of exploiting those bugs is “autoprovision another container to replace the compromised one”?</p>
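<p>To make that concrete: in an orchestrated environment, “autoprovision another container” isn’t even something a human does – the platform’s controller does it by reconciling toward a declared replica count. Here is a minimal sketch using the Kubernetes Python client, with hypothetical names and image; it illustrates the pattern rather than prescribing any particular stack:</p>
<pre><code># Minimal sketch: declare a Deployment with 3 replicas; if a pod is
# compromised and killed, the ReplicaSet controller recreates it to
# restore the declared count (hypothetical names and image).
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

labels = {"app": "example-app"}
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="example-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="example-app",
                    image="registry.example.com/example-app:1.2.3",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
# Deleting a pod now results in an automatic replacement rather than an outage.
</code></pre>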
<p>What is to be done?<sup id="fnref:30"><a href="#fn:30" class="footnote-ref" role="doc-noteref">30</a></sup> When you see or hear weasel words like “appropriate” or “sufficient” or “robust” to describe a “level” or “maturity model”<sup id="fnref:31"><a href="#fn:31" class="footnote-ref" role="doc-noteref">31</a></sup> of security, it is worth pausing to ponder whether SecObs abounds. If the security program evokes the on-prem monolith and waterfalls era – SAST, DAST, vulnerability management, security reviews and approvals – but is festooned with the words “continuous” or “automated”, then perhaps what is now automated and continuous is SecObs. When you hear “we need to bake security into [thing]”, it could be a boon – weaving security into workflows to make it consistent, repeatable, and scalable – but it could also be a leading indicator of SecObs; engineering teams will be left to sink or swim because when they fail, it’s an opportunity for SecObs to say, “See why you need us?”</p>
<p>SecObs is pernicious; it is deeply rooted in the information security swamp and will be difficult to excise from the industry. DevSecOps may be its modern incarnation, but SecObs is a larger problem that existed before this trend and will continue to exist after it wanes. SecObs carries a massive TAM – at least $10 billion and likely much higher<sup id="fnref:32"><a href="#fn:32" class="footnote-ref" role="doc-noteref">32</a></sup> – which means there are legions of incumbents incentivized to actively fight against its removal.</p>
<p>The strategies I have seen work are to burn down the status quo (quite literally firing obstructionists and building anew) and / or to support motivated software engineers and architects building overarching security programs in parallel – programs that treat patches like other upgrades and attacks like other incidents. Security is treated as a facet of resilience, as it should be, because it reflects reality<sup id="fnref:33"><a href="#fn:33" class="footnote-ref" role="doc-noteref">33</a></sup>.</p>
<p>What I have never seen work is attempting to modernize SecObs, which is what we are seeing with too much of the DevSecOps movement. Getting status quo security pros up to speed on the latest technology just means they’ll now be able to weaponize it and talk about the dangers of <em>shadow APIs</em> and <em>shadow Infrastructure as Code</em> and <em>shadow functions</em>.</p>
<p>Because getting up to speed in the SecObs market isn’t about understanding the technology – how it works, its strengths, its potential, its deficiencies, its concerns – it’s about understanding the power dynamics of it and figuring out where security can best assert itself to maintain control in the organization. Find a vulnerability, exploit it, persist in the system… maybe the only difference between the “good actors” and the “bad actors” is that the bad ones make money for their organizations.</p>
<hr>
<p>Thanks to Camille Fournier, Ryan Petrich, Greg Poirier, Andrew Ruef, James Turnbull, and Leif Walsh for feedback on this post.</p>
<hr>
<p>Enjoy this post? Stay tuned for the full Security Chaos Engineering book later in 2022. In the meantime, you can read the Security Chaos Engineering report in the <a href="https://www.oreilly.com/library/view/security-chaos-engineering/9781492080350/">O’Reilly Learning Library</a>.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>For a theatrical discussion of Security Theatre, I recommend my keynote Exit Stage Left: Eradicating Security Theatre: <a href="https://www.youtube.com/watch?v=kiunphALNKw">https://www.youtube.com/watch?v=kiunphALNKw</a>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I will use SecObs throughout this post since pithy buzzwords seem to be infosec’s perpetual zeitgeist.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>While I recommend reading <a href="https://itrevolution.com/accelerate-book/">her book</a> in full, this lucid interview with Dr. Forsgren cites examples of outcomes vs. outputs in the context of software engineering: <a href="https://www.techrepublic.com/article/how-to-measure-outcomes-of-your-companys-devops-initiatives/">https://www.techrepublic.com/article/how-to-measure-outcomes-of-your-companys-devops-initiatives/</a> Book citation: Forsgren, PhD, N., Humble, J., &amp; Kim, G. (2018). <em>Accelerate: The science of lean software and devops: Building and scaling high performing technology organizations.</em> IT Revolution.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Forsgren, N., Storey, M. A., Maddila, C., Zimmermann, T., Houck, B., &amp; Butler, J. (2021). The SPACE of Developer Productivity: There&rsquo;s more to it than you think. <em>Queue, 19</em>(1), 20-48. <a href="https://queue.acm.org/detail.cfm?id=3454124">https://queue.acm.org/detail.cfm?id=3454124</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>This could perhaps be called the Tumor Model of Information Security.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Perhaps later in 2022 we will see a YARA scanner for IoOs raise a $100 million Series A with a $1 billion post-money valuation on $1 million ARR.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>New and not-quite-compliant on everything is much better than old and unmaintained but compliant. The inability to update a system should terrify security; alas, in many cases, it is an afterthought or worse, seen as a comfort.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>Usually vendors won’t say “keylogging” explicitly but will use euphemisms like “keystroke dynamics”, “keystroke logging”, or “user behavior analytics”. <a href="https://www.cisa.gov/sites/default/files/publications/Insider%20Threat%20Mitigation%20Guide_Final_508.pdf">As CISA explains in their Insider Threat Mitigation Guide</a> about User Activity Monitoring (UAM), <em>“In general, UAM software monitors the full range of a user’s cyber behavior. It can log keystrokes, capture screenshots, make video recordings of sessions, inspect network packets, monitor kernels, track web browsing and searching, record the uploading and downloading of files, and monitor system logs for a comprehensive picture of activity across the network.”</em> See also the presentation <a href="https://apps.dtic.mil/sti/pdfs/AD1090474.pdf">Exploring keystroke dynamics for insider threat detection</a>.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p><a href="https://www.cisa.gov/sites/default/files/publications/Insider%20Threat%20Mitigation%20Guide_Final_508.pdf">CISA’s description of UAM tools</a> (see citation 8) also notes the ability to “make video recordings of sessions” as a general capability of insider threat tools. As an example of a vendor that isn’t quite as shy about it: <a href="https://www.proofpoint.com/us/blog/insider-threat-management/what-advanced-corporate-keylogging-definition-benefits-and-uses">https://www.proofpoint.com/us/blog/insider-threat-management/what-advanced-corporate-keylogging-definition-benefits-and-uses</a>&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Unlike other entities purporting to analyze markets, the Infernal Quadrant is <em>actually</em> a quadrant because it is a plane divided into four infinite regions. It is hard to imagine something less magical than constraining infinite regions by bounding them in a larger square, although this perhaps exposes the underlying principal problem: fitting everything into neat boxes and calling them something they’re not.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>Each vendor defines DevSecOps in their own way, but this is the definition that stays constant across most of them. Some vendors additionally <a href="https://www.redhat.com/en/topics/devops/what-is-devsecops">highlight automation</a>, some focus <a href="https://www.synopsys.com/glossary/what-is-devsecops.html">more on shift left</a>, some suggest <a href="https://www.csoonline.com/article/3245748/what-is-devsecops-developing-more-secure-applications.html">security processes are handled by devs</a>, and some talk about <a href="https://www.ibm.com/cloud/learn/devsecops">“enabling development of secure software at the speed of Agile and DevOps”</a>.&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>Although IBM refers to “Shift Left” as <em>a</em> mantra rather than <em>the</em> mantra, suggesting that the collective noun for DevSecOps is a <em>mantra</em> of DevSecOpses.&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>Koyama, T., McHaffie, J. G., Laurienti, P. J., &amp; Coghill, R. C. (2005). The subjective experience of pain: where expectations become reality. <em>Proceedings of the National Academy of Sciences, 102</em>(36), 12950-12955. <a href="https://www.pnas.org/content/pnas/102/36/12950.full.pdf">https://www.pnas.org/content/pnas/102/36/12950.full.pdf</a>&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>Kleppmann, M. (2017). <em>Designing data-intensive applications: The big ideas behind reliable, scalable, and maintainable systems.</em> &ldquo;O&rsquo;Reilly Media, Inc.&rdquo;.&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>Rasmussen, J. (1997). Risk management in a dynamic society: a modelling problem. <em>Safety science, 27</em>(2-3), 183-213. <a href="http://sunnyday.mit.edu/16.863/rasmussen-safetyscience.pdf">http://sunnyday.mit.edu/16.863/rasmussen-safetyscience.pdf</a>&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>This recalls Linus Torvalds’ remark in 2008: “I personally consider security bugs to be just ‘normal bugs.’ I don’t cover them up, but I also don’t have any reason what-so-ever to think it’s a good idea to track them and announce them as something special.” <a href="https://yarchive.net/comp/linux/security_bugs.html">https://yarchive.net/comp/linux/security_bugs.html</a>&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>For more on security chaos engineering experiments, either <a href="https://www.oreilly.com/library/view/security-chaos-engineering/9781492080350/">read the SCE ebook</a> or watch my talk “The Scientific Method: Security Chaos Experimentation &amp; Attacker Math” <a href="https://www.youtube.com/watch?v=oJ3iSyhWb5U">https://www.youtube.com/watch?v=oJ3iSyhWb5U</a>&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>One of my finest achievements is being responsible for the term “lizard brain” making it into the Wall Street Journal. Mitchell, H. (2021, September 7). How Hackers Use Our Brains Against Us. <em>The Wall Street Journal.</em> <a href="https://www.wsj.com/articles/how-hackers-use-our-brains-against-us-11631044800">https://www.wsj.com/articles/how-hackers-use-our-brains-against-us-11631044800</a>&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>I grazed the surface of in-groups and out-groups in my blog post “On YOLOsec and FOMOsec” but this particular insight had not yet coalesced in my mind at the time. /blog/posts/on-yolosec-and-fomosec/&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>Molenberghs, P., &amp; Louis, W. R. (2018). Insights from fMRI studies into ingroup bias. <em>Frontiers in psychology, 9,</em> 1868. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6174241/">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6174241/</a>&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>The OG paper on this dynamic is: Tajfel, H. (1970). Experiments in intergroup discrimination. <em>Scientific American, 223</em>(5), 96-103. <a href="https://faculty.ucmerced.edu/jvevea/classes/Spark/readings/tajfel-1970-experiments-in-intergroup-discrimination.pdf">https://faculty.ucmerced.edu/jvevea/classes/Spark/readings/tajfel-1970-experiments-in-intergroup-discrimination.pdf</a>&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>Shortridge, K., Rinehart, A. (2020). <em>Security Chaos Engineering</em>. United States: O&rsquo;Reilly Media, Incorporated. <a href="https://www.oreilly.com/library/view/security-chaos-engineering/9781492080350/">https://www.oreilly.com/library/view/security-chaos-engineering/9781492080350/</a>&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>My favorite of these things perhaps being Darth Jar Jar: A Model for Infosec Innovation: /blog/posts/darth-jar-jar-model-infosec-innovation/ followed by Lamboozling Attackers: A New Generation of Deception <a href="https://queue.acm.org/detail.cfm?id=3494836">https://queue.acm.org/detail.cfm?id=3494836</a>&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>For a 101 about reference points in decision making (and about the OG behavioral economics theory, Prospect Theory) I recommend reading the Decision Lab’s guide on it: <a href="https://thedecisionlab.com/reference-guide/economics/reference-point/">https://thedecisionlab.com/reference-guide/economics/reference-point/</a>&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:25">
<p>Quoted from <a href="https://www.forcepoint.com/cyber-edu/devsecops">https://www.forcepoint.com/cyber-edu/devsecops</a>&#160;<a href="#fnref:25" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:26">
<p>I will spare y’all a diversion into epistemology and instead just cite this somewhat bizarre historical examination of the quote, which, incidentally, serves as another example of the potency of in-group vs. out-group framing: <a href="https://www.wsj.com/articles/BL-LB-4558">https://www.wsj.com/articles/BL-LB-4558</a>&#160;<a href="#fnref:26" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:27">
<p>By recent experience I mean reception to my talks, writings, and the Security Chaos Engineering e-book. But the SRE book provides additional supporting evidence by repeatedly underlining the importance of availability to SRE success: <a href="https://sre.google/sre-book/service-level-objectives/">https://sre.google/sre-book/service-level-objectives/</a>&#160;<a href="#fnref:27" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:28">
<p>Ruthberg, Z. G., &amp; McKenzie, R. G. (1977). Audit and Evaluation of Computer Security. <a href="https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nbsspecialpublication500-19.pdf">https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nbsspecialpublication500-19.pdf</a>&#160;<a href="#fnref:28" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:29">
<p>For instance, <a href="https://twitter.com/jfslowik/status/1479490219573276673">a recent infosec Twitter thread</a> suggested people <code>TAKE THEIR SHIT OFF THE INTERNET</code> as a solution to problems like vulnerabilities in vCenter instances. Its popularity even led to the creation of a site dedicated to this mindblowing advice: <a href="https://www.getyourshitofftheinternet.com/">https://www.getyourshitofftheinternet.com/</a>&#160;<a href="#fnref:29" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:30">
<p>This reference to the title of Lenin’s pamphlet is a chance for me to shoehorn in my quip that status quo information security should adore Marx’s <a href="https://plato.stanford.edu/entries/marx/#LaboTheoValu">labor theory of value</a> (originally David Ricardo’s) in which goods or services are valued based on the effort that went into producing them rather than based on the consumer’s preferences (I will avoid going down the rabbit hole of <a href="https://plato.stanford.edu/entries/decision-theory/">decision theory</a> at this juncture).&#160;<a href="#fnref:30" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:31">
<p>I suggest reading Dr. Forsgren’s excellent takedown of maturity models for more on why they are “for chumps”: <a href="https://twitter.com/nicolefv/status/1130192402608664576">https://twitter.com/nicolefv/status/1130192402608664576</a>&#160;<a href="#fnref:31" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:32">
<p>The entire information security market is somewhere between <a href="https://www.grandviewresearch.com/industry-analysis/cyber-security-market">~$165 billion</a> and <a href="https://www.grandviewresearch.com/industry-analysis/cyber-security-market">~$185 billion</a> as of 2020. It feels reasonable that at least ~6% of that spending is in security obstructionism. To wit, the vulnerability management market is <a href="https://www.businesswire.com/news/home/20211011005555/en/Global-Security-and-Vulnerability-Management-Market-2021-to-2026---Integration-of-Vulnerability-Management-and-Patch-Management-Solutions-Presents-Opportunities---ResearchAndMarkets.com">nearly $14 billion</a> as of 2021, the VPN market is <a href="https://www.yahoo.com/now/global-virtual-private-network-vpn-122300132.html">also around $14 billion</a>, and the CASB market (which addresses “shadow IT”) was valued at <a href="https://www.verifiedmarketresearch.com/product/global-cloud-access-security-brokers-market-size-and-forecast-to-2025/">just under $9 billion</a> in 2020. Smaller categories include the cybersecurity awareness training market <a href="https://blog.knowbe4.com/some-interesting-security-awareness-computer-based-training-numbers">at $1 billion</a> in 2021, SAST at <a href="https://www.industryarc.com/Report/19220/static-application-security-testing-market.html">less than a billion</a>, and the insider threat market, which, it seems, is still small enough that Gartner hasn’t sized it yet. I strongly suspect that some percentage of each security market category includes tools that facilitate obstructionism, but that is difficult to quantify.&#160;<a href="#fnref:32" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:33">
<p>Connelly, E. B., Allen, C. R., Hatfield, K., Palma-Oliveira, J. M., Woods, D. D., &amp; Linkov, I. (2017). Features of resilience. <em>Environment systems and decisions, 37</em>(1), 46-50. <a href="https://www.osti.gov/pages/servlets/purl/1346540">https://www.osti.gov/pages/servlets/purl/1346540</a>&#160;<a href="#fnref:33" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2021 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2021-reading-list/</link>
            <pubDate>Mon, 20 Dec 2021 07:53:31 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2021-reading-list/</guid>
            <description>This year, I continued to devour fiction but expanded much of my non-fiction reading to papers and reports via my 2020 holiday present to myself, the Remarkable 2, which I highly recommend for kindred souls who enjoy writing notes while reading scientific literature.
I averaged just over 3 books per month in 2021, a bit lower than 3.7 per month in 2020 – but I also averaged 9.5 papers per month on the Remarkable or just over 2 per week, a number I look forward to increasing next year. I also published a paper of my own this year in ACM Queue entitled “Lamboozling Attackers: A New Generation of Deception”.
If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out my reading lists from prior years:
2020 reading list 2019 reading list 2018 reading list 2017 reading list 2016 reading list Fiction Água Viva by Clarice Lispector
All Systems Red by Martha Wells
The Atrocity Archives by Charles Stross
The Awakening by Kate Chopin
Blindness by José Saramago
The Book of Disquiet by Fernando Pessoa
A Deadly Education by Naomi Novik
Empire of Sand by Tasha Suri
An Episode in the Life of a Landscape Painter by César Aira
The Fifth Science by Exurb1a
How Long ’til Black Future Month? by N.K. Jemisin
The Marrow Thieves by Cherie Dimaline
Midnight Robber by Nalo Hopkinson
Miles from Nowhere by Nami Mun
The Order of the Pure Moon Reflected in Water by Zen Cho
Parable of the Sower by Octavia E. Butler
Persephone Station by Stina Leicht
The Posthumous Memoirs of Brás Cubas by Joaquim Maria Machado De Assis
The Red by Linda Nagata
Runtime by S.B. Divya
A Severed Head by Iris Murdoch
The Solitude of Prime Numbers by Paolo Giordano
Strange Beasts of China by Yan Ge
A Taste of Honey by Kai Ashante Wilson
There Is No Antimemetics Division by qntm
Tropic of Orange by Karen Tei Yamashita
Wild Seed by Octavia Butler
Non-Fiction Beast and Man: The Roots of Human Nature by Mary Midgley
The Body Keeps the Score by Bessel Van Der Kolk, M.D.
How to Change by Katy Milkman
Narrative Economics by Robert J. Shiller
Probable Impossibilities by Alan Lightman
Stealing the Corner Office by Brendan Reid
Subtract by Leidy Klotz
The Theory of Moral Sentiments by Adam Smith
Think Again by Adam Grant
The Tyranny of Merit by Michael J. Sandel
</description>
            <atom:content type="html"><![CDATA[<p>This year, I continued to devour fiction but expanded much of my non-fiction reading to papers and reports via my 2020 holiday present to myself, the Remarkable 2, which I highly recommend for kindred souls who enjoy writing notes while reading scientific literature.</p>
<p>I averaged just over 3 books per month in 2021, a bit lower than 3.7 per month in 2020 &ndash; but I also averaged 9.5 papers per month on the Remarkable or just over 2 per week, a number I look forward to increasing next year. I also published a paper of my own this year in ACM Queue entitled <a href="https://queue.acm.org/detail.cfm?id=3494836">&ldquo;Lamboozling Attackers: A New Generation of Deception&rdquo;</a>.</p>
<p>If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out my reading lists from prior years:</p>
<ul>
<li><a href="/blog/posts/2020-reading-list">2020 reading list</a></li>
<li><a href="/blog/posts/2019-reading-list">2019 reading list</a></li>
<li><a href="/blog/posts/2018-reading-list">2018 reading list</a></li>
<li><a href="/blog/posts/2017-reading-list">2017 reading list</a></li>
<li><a href="/blog/posts/2016-reading-list">2016 reading list</a></li>
</ul>
<h2 id="fiction">Fiction</h2>
<p><a href="https://bookshop.org/books/agua-viva-9780811219907/9780811219907">Água Viva</a> by Clarice Lispector</p>
<p><a href="https://bookshop.org/books/all-systems-red/9780765397539">All Systems Red</a> by Martha Wells</p>
<p><a href="https://bookshop.org/books/the-atrocity-archives/9780441013654">The Atrocity Archives</a> by Charles Stross</p>
<p><a href="https://bookshop.org/books/the-awakening-9780486277868/9780486277868">The Awakening</a> by Kate Chopin</p>
<p><a href="https://bookshop.org/books/blindness-9780156007757/9780156007757">Blindness</a> by José Saramago</p>
<p><a href="https://bookshop.org/books/the-book-of-disquiet-the-complete-edition/9780811226936">The Book of Disquiet</a> by Fernando Pessoa</p>
<p><a href="https://bookshop.org/books/a-deadly-education/9780593128503">A Deadly Education</a> by Naomi Novik</p>
<p><a href="https://bookshop.org/books/empire-of-sand/9780316449717">Empire of Sand</a> by Tasha Suri</p>
<p><a href="https://bookshop.org/books/an-episode-in-the-life-of-a-landscape-painter/9780811216302">An Episode in the Life of a Landscape Painter</a> by César Aira</p>
<p><a href="https://www.amazon.com/Fifth-Science-Exurb1a-ebook/dp/B07GTMYVZF">The Fifth Science</a> by Exurb1a</p>
<p><a href="https://bookshop.org/books/how-long-til-black-future-month-stories-9780316491372/9780316491372">How Long &rsquo;til Black Future Month?</a> by N.K. Jemisin</p>
<p><a href="https://bookshop.org/books/the-marrow-thieves/9781770864863">The Marrow Thieves</a> by Cherie Dimaline</p>
<p><a href="https://bookshop.org/books/midnight-robber/9780446675604">Midnight Robber</a> by Nalo Hopkinson</p>
<p><a href="https://bookshop.org/books/miles-from-nowhere/9781594483981">Miles from Nowhere</a> by Nami Mun</p>
<p><a href="https://bookshop.org/books/the-order-of-the-pure-moon-reflected-in-water/9781250269256">The Order of the Pure Moon Reflected in Water</a> by Zen Cho</p>
<p><a href="https://bookshop.org/books/parable-of-the-sower/9781538732182">Parable of the Sower</a> by Octavia E. Butler</p>
<p><a href="https://bookshop.org/books/persephone-station-9781534414594/9781534414594">Persephone Station</a> by Stina Leicht</p>
<p><a href="https://bookshop.org/books/the-posthumous-memoirs-of-bras-cubas-9780143135036/9780143135036">The Posthumous Memoirs of Brás Cubas</a> by Joaquim Maria Machado De Assis</p>
<p><a href="https://bookshop.org/books/the-red-1-first-light/9781481446570">The Red</a> by Linda Nagata</p>
<p><a href="https://publishing.tor.com/runtime-sbdivya/9780765389787/">Runtime</a> by S.B. Divya</p>
<p><a href="https://bookshop.org/books/a-severed-head/9780140020038">A Severed Head</a> by Iris Murdoch</p>
<p><a href="https://bookshop.org/books/the-solitude-of-prime-numbers-9780143118596/9780143118596">The Solitude of Prime Numbers</a> by Paolo Giordano</p>
<p><a href="https://bookshop.org/books/strange-beasts-of-china/9781612199092">Strange Beasts of China</a> by Yan Ge</p>
<p><a href="https://bookshop.org/books/a-taste-of-honey-9780765390042/9780765390042">A Taste of Honey</a> by Kai Ashante Wilson</p>
<p><a href="https://www.amazon.com/There-No-Antimemetics-Division-qntm-ebook/dp/B08FHHQRM2">There Is No Antimemetics Division</a> by qntm</p>
<p><a href="https://bookshop.org/books/tropic-of-orange-9781684416967/9781566894869">Tropic of Orange</a> by Karen Tei Yamashita</p>
<p><a href="https://bookshop.org/books/wild-seed/9781538751480">Wild Seed</a> by Octavia Butler</p>
<hr>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://bookshop.org/books/beast-and-man-the-roots-of-human-nature/9780415289870">Beast and Man: The Roots of Human Nature</a> by Mary Midgley</p>
<p><a href="https://bookshop.org/books/the-body-keeps-the-score-brain-mind-and-body-in-the-healing-of-trauma/9780143127741">The Body Keeps the Score</a> by Bessel Van Der Kolk, M.D.</p>
<p><a href="https://bookshop.org/books/how-to-change-the-science-of-getting-from-where-you-are-to-where-you-want-to-be/9780593083758">How to Change</a> by Katy Milkman</p>
<p><a href="https://bookshop.org/books/narrative-economics-how-stories-go-viral-and-drive-major-economic-events/9780691210261">Narrative Economics</a> by Robert J. Shiller</p>
<p><a href="https://bookshop.org/books/probable-impossibilities-musings-on-beginnings-and-endings/9781524749019">Probable Impossibilities</a> by Alan Lightman</p>
<p><a href="https://bookshop.org/books/stealing-the-corner-office-the-winning-career-strategies-they-ll-never-teach-you-in-business-school-9798200620258/9781601633200">Stealing the Corner Office</a> by Brendan Reid</p>
<p><a href="https://bookshop.org/books/subtract-the-untapped-science-of-less-9781250249876/9781250249869">Subtract</a> by Leidy Klotz</p>
<p><a href="https://bookshop.org/books/the-theory-of-moral-sentiments-9780865970120/9780865970120">The Theory of Moral Sentiments</a> by Adam Smith</p>
<p><a href="https://bookshop.org/books/think-again-the-power-of-knowing-what-you-don-t-know-9781984878106/9781984878106">Think Again</a> by Adam Grant</p>
<p><a href="https://bookshop.org/books/the-tyranny-of-merit-what-s-become-of-the-common-good/9780374289980">The Tyranny of Merit</a> by Michael J. Sandel</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Rick &amp; Morty&#39;s Thanksploitation Spectacular Decision Tree</title>
            <link>https://kellyshortridge.com/blog/posts/rick-morty-thanksploitation-decision-tree/</link>
            <pubDate>Mon, 09 Aug 2021 08:55:49 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/rick-morty-thanksploitation-decision-tree/</guid>
            <description>The new season of Rick and Morty contains an excellent example of belief prompting in a game theoretic scenario, much to my surprise and delight.
Season 5 Episode 6, “Rick &amp; Morty’s Thanksploitation Spectacular”, shows Rick Sanchez and the President of the United States explicating their plans to outwit each other; Rick seeks to deceive the President into granting him a federal pardon during the National Thanksgiving Turkey Presentation, while the President seeks to stymie Rick’s sly scheme.
Naturally, I knew I must model their conflict as a decision tree using Deciduous, the security decision tree generator app Ryan Petrich and I created (warning: spoilers for the first ~8 minutes of the episode are ahead):
In prior years, Rick was able to gain a federal pardon for his myriad crimes via the American tradition of Presidents pardoning a turkey on Thanksgiving Day. Rick would brainwash the presidential turkey wrangler, turn himself into a turkey, infiltrate the potential turkey pardonee pool, and then be selected as the chosen turkey to receive a pardon from the President.
But this year, the President is wise to Rick’s tricks and intent on thwarting him – and is willing to expend considerable resources on a mission codenamed “Operation Deep Gobble” to do so. Rick rightly expects this revenge and plans to escalate his investment accordingly.
What follows in the show is an exquisite example of belief prompting: considering what moves your adversary is likely to make to reach their goal, including how they will respond to any mitigations you implement.
The episode shows Rick and the President simultaneously articulating their assumptions about how the other party will operate on Thanksgiving Day, actively anticipating each other’s moves to inform their planning1. On the fateful day, both encounter moves that were unexpected and must quickly reroute their strategy accordingly.
These pre-planned assumptions and just-in-time decisions can both be elegantly mapped on the decision tree, elucidating the strategic and explanatory potential of this visual device.
For y’all’s convenience, I’ve pasted the full Deciduous config code for this Thanksploitation decision tree below. Simply copy and paste it into the deciduous.app text editor to explore it as a hands-on example for using Deciduous.
As a fun next step, think about what additional attacks or mitigations you would pursue if you were Rick Sanchez or Mr. President and add them to the tree. If you’re a fellow nerdburger with nerdburger friends, try roleplaying the scenario and updating the tree live as you LARP the turkey pardon conflict.
title: Rick &amp; Morty&#39;s Thanksploitation Decision Treeattacks:- brainwash_wrangler: Brainwash presidential turkey wrangler pre-ceremonyfrom:- reality: &#39;#yolosec&#39;- become_turkey: Turn into a turkey from:- brainwash_wrangler- infiltrate_turkeys: Infiltrate potential turkey pardonee populationfrom:- become_turkey- sneak_onboard- chosen_turkey: Be selected at the National Thanksgiving Turkey Presentationfrom:- infiltrate_turkeys- turkey_behavior- ghost_corp: Set up ghost corporations to manufacture the vehiclesfrom:- armored_transport- euthanize_wrangler:backwards: true- pass_audit: Pass audit through unexplained meansfrom:- audit_vehicles- access_computers: Gain access to vehicle corp&#39;s central computersfrom:- ghost_corp- pass_audit- track_transport: Track the real armored transportsfrom:- access_computers- flesh_robots- decoy_vehicles- stealth_mode: Land flying ship in stealth mode on top of transportfrom:- face_blind- armed_marines- sneak_onboard: Sneak onboard the armored transportfrom:- track_transport- stealth_mode- roof_combat- face_blind: Exploit marines&#39; turkey-face blindness by turning into a turkeyfrom:- armed_marines- flesh_robots: Create flesh-covered robots of self as decoysfrom:- monitor_home- jam_radios: Jam the marines&#39; radiosfrom: - investigate_noise- roof_combat: Engage marines in physical combatfrom:- jam_radios- turkey_behavior: Act like a turkey to avoid detection by Presidentfrom:- president_turkeymitigations:- euthanize_wrangler: Euthanize the turkey wranglerfrom:- brainwash_wrangler- armored_transport: Transport the turkeys in armored military vehiclesfrom:- reality- audit_vehicles: Audit the vehicle manufacturersfrom:- ghost_corp- decoy_vehicles: Deploy decoy vehicles to obfuscate real transportfrom:- access_computers- armed_marines: Put fully armed marines in the real transport vehiclefrom:- track_transport- turkey_marines: Turn marines into turkeys to mitigate turkey-face blindnessfrom:- face_blind- id_chips: Track turkey marines with swallowed ID chipsfrom: - turkey_marines- monitor_home: Monitor Rick&#39;s house the day of the ceremonyfrom:- reality- investigate_noise: Marines investigate noise on roof of vehiclefrom:- stealth_mode- call_backup: Marines call for backupfrom:- investigate_noise- blaine_box: Bring in David Blaine box to detect decoy flesh robotsfrom:- flesh_robots- scan_transport: Scan transport truck for ID chip anomalies from:- blaine_box- id_chips- sneak_onboard- president_turkey: Turn President into turkey to hunt for Rick in the penfrom:- infiltrate_turkeys- scan_transport- attack_rick: President detects and attacks Rickfrom: - turkey_behaviorgoals:- receive_pardon: Receive federal pardon (Rick wins)from:- chosen_turkey Some moves are left unexplained by the show, such as how Rick bypasses the President’s audits of the ghost corporations and how Rick determines the “real” vehicle from the decoys. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>The new season of <em>Rick and Morty</em> contains an excellent example of belief prompting in a game theoretic scenario, much to my surprise and delight.</p>
<p>Season 5 Episode 6, <a href="https://www.adultswim.com/videos/rick-and-morty/rick-mortys-thanksploitation-spectacular">&ldquo;Rick &amp; Morty&rsquo;s Thanksploitation Spectacular&rdquo;</a>, shows Rick Sanchez and the President of the United States explicating their plans to outwit each other; Rick seeks to deceive the President into granting him a federal pardon during the National Thanksgiving Turkey Presentation, while the President seeks to stymie Rick&rsquo;s sly scheme.</p>
<p>Naturally, I knew I must model their conflict as a decision tree using <a href="https://www.deciduous.app/">Deciduous</a>, the security decision tree generator app Ryan Petrich and I created (warning: spoilers for the first ~8 minutes of the episode are ahead):</p>
<p><img src="/blog/img/deciduous/rick-morty-thanksploitation-decision-tree.svg" alt="The decision tree for the turkey pardon conflict featured in season 5 episode 6 of Rick and Morty."></p>
<p>In prior years, Rick was able to gain a federal pardon for his myriad crimes via the American tradition of Presidents <a href="https://en.wikipedia.org/wiki/National_Thanksgiving_Turkey_Presentation">pardoning a turkey on Thanksgiving Day</a>. Rick would brainwash the presidential turkey wrangler, turn himself into a turkey, infiltrate the potential turkey pardonee pool, and then be selected as the chosen turkey to receive a pardon from the President.</p>
<p>But this year, the President is wise to Rick&rsquo;s tricks and intent on thwarting him &ndash; and is willing to expend considerable resources on a mission codenamed &ldquo;Operation Deep Gobble&rdquo; to do so. Rick rightly expects this revenge and plans to escalate his investment accordingly.</p>
<p>What follows in the show is an exquisite example of <a href="https://www.google.com/books/edition/Advances_in_Understanding_Strategic_Beha/bMWHDAAAQBAJ?hl=en&amp;gbpv=1&amp;dq=%22belief-prompting%22&amp;pg=PA132&amp;printsec=frontcover">belief prompting</a>: considering what moves your adversary is likely to make to reach their goal, including how they will respond to any mitigations you implement.</p>
<p>The episode shows Rick and the President simultaneously articulating their assumptions about how the other party will operate on Thanksgiving Day, actively anticipating each other&rsquo;s moves to inform their planning<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. On the fateful day, both encounter moves that were unexpected and must quickly reroute their strategy accordingly.</p>
<p>These pre-planned assumptions and just-in-time decisions can both be elegantly mapped on the decision tree, elucidating the strategic and explanatory potential of <a href="/blog/posts/security-decision-trees-with-graphviz/">this visual device</a>.</p>
<p>For y&rsquo;all&rsquo;s convenience, I&rsquo;ve pasted the full Deciduous config code for this Thanksploitation decision tree below. Simply copy and paste it into the <a href="https://www.deciduous.app/">deciduous.app</a> text editor to explore it as a hands-on example for <a href="/blog/posts/deciduous-attack-tree-app/">using Deciduous</a>.</p>
<p>As a fun next step, think about what additional attacks or mitigations you would pursue if you were Rick Sanchez or Mr. President and add them to the tree. If you&rsquo;re a fellow nerdburger with nerdburger friends, try roleplaying the scenario and updating the tree live as you LARP the turkey pardon conflict.</p>
<pre tabindex="0"><code>title: Rick &amp; Morty&#39;s Thanksploitation Decision Tree

attacks:
- brainwash_wrangler: Brainwash presidential turkey wrangler pre-ceremony
  from:
  - reality: &#39;#yolosec&#39;
- become_turkey: Turn into a turkey 
  from:
  - brainwash_wrangler
- infiltrate_turkeys: Infiltrate potential turkey pardonee population
  from:
  - become_turkey
  - sneak_onboard
- chosen_turkey: Be selected at the National Thanksgiving Turkey Presentation
  from:
  - infiltrate_turkeys
  - turkey_behavior
- ghost_corp: Set up ghost corporations to manufacture the vehicles
  from:
  - armored_transport
  - euthanize_wrangler:
    backwards: true
- pass_audit: Pass audit through unexplained means
  from:
  - audit_vehicles
- access_computers: Gain access to vehicle corp&#39;s central computers
  from:
  - ghost_corp
  - pass_audit
- track_transport: Track the real armored transports
  from:
  - access_computers
  - flesh_robots
  - decoy_vehicles
- stealth_mode: Land flying ship in stealth mode on top of transport
  from:
  - face_blind
  - armed_marines
- sneak_onboard: Sneak onboard the armored transport
  from:
  - track_transport
  - stealth_mode
  - roof_combat
- face_blind: Exploit marines&#39; turkey-face blindness by turning into a turkey
  from:
  - armed_marines
- flesh_robots: Create flesh-covered robots of self as decoys
  from:
  - monitor_home
- jam_radios: Jam the marines&#39; radios
  from: 
  - investigate_noise
- roof_combat: Engage marines in physical combat
  from:
  - jam_radios
- turkey_behavior: Act like a turkey to avoid detection by President
  from:
  - president_turkey

mitigations:
- euthanize_wrangler: Euthanize the turkey wrangler
  from:
  - brainwash_wrangler
- armored_transport: Transport the turkeys in armored military vehicles
  from:
  - reality
- audit_vehicles: Audit the vehicle manufacturers
  from:
  - ghost_corp
- decoy_vehicles: Deploy decoy vehicles to obfuscate real transport
  from:
  - access_computers
- armed_marines: Put fully armed marines in the real transport vehicle
  from:
  - track_transport
- turkey_marines: Turn marines into turkeys to mitigate turkey-face blindness
  from:
  - face_blind
- id_chips: Track turkey marines with swallowed ID chips
  from: 
  - turkey_marines
- monitor_home: Monitor Rick&#39;s house the day of the ceremony
  from:
  - reality
- investigate_noise: Marines investigate noise on roof of vehicle
  from:
  - stealth_mode
- call_backup: Marines call for backup
  from:
  - investigate_noise
- blaine_box: Bring in David Blaine box to detect decoy flesh robots
  from:
  - flesh_robots
- scan_transport: Scan transport truck for ID chip anomalies 
  from:
  - blaine_box
  - id_chips
  - sneak_onboard
- president_turkey: Turn President into turkey to hunt for Rick in the pen
  from:
  - infiltrate_turkeys
  - scan_transport
- attack_rick: President detects and attacks Rick
  from: 
  - turkey_behavior


goals:
- receive_pardon: Receive federal pardon (Rick wins)
  from:
  - chosen_turkey
</code></pre><div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Some moves are left unexplained by the show, such as how Rick bypasses the President&rsquo;s audits of the ghost corporations and how Rick determines the &ldquo;real&rdquo; vehicle from the decoys.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Markets DGAF About Cybersecurity</title>
            <link>https://kellyshortridge.com/blog/posts/markets-dgaf-about-cybersecurity/</link>
            <pubDate>Thu, 15 Jul 2021 08:30:26 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/markets-dgaf-about-cybersecurity/</guid>
            <description>Public markets DGAF about cybersecurity. The infosec industry shills the harrowing narrative of how damaging data breaches are to businesses – that if a super sophisticated nation state targets your company, you face reputational devastation and stock market decimation. There is no evidence to support this propaganda. In fact, two recent studies presented at WEIS find explicit evidence to the contrary.
In this post, I’ll summarize those papers’ findings and elucidate their importance to how we think about information security.
Trade secret breaches &#43; company performance The paper: Surprisingly Small: The Effect of Trade Secret Breaches on Firm Performance by Nicola Searle and Andrew Vivian.
TL;DR: Protecting trade secrets from cyberattack shouldn’t be a priority because trade secret theft doesn’t negatively affect an organization’s market valuation. Oh, and whether the attack is “sophisticated” or “targeted” or “nation state” has zero effect on a firm’s stock market outcomes.
What’s the problem: There’s a lot of hullabaloo around how important it is to protect trade secrets from theft, because such espionage is allegedly so damaging to organizations – but those are just theoretical claims. This theory is used by vendors and security practitioners to justify cybersecurity investments despite a lack of empirical evidence to support those claims.
What this paper contributes: This paper studies the impact of trade secret theft on stock market valuations along numerous dimensions. Despite industry folk wisdom, the authors find an insignificant relationship between a victim’s announcement of trade secret theft and their subsequent stock market performance. That is, attackers stealing a company’s trade secrets does not hurt the company’s stock price1.
Therefore, if organizations seek to protect shareholders, this empirical evidence actually justifies investing in freedom of information rather than in cybersecurity of information. Prioritizing freer information flows at least supports innovation, whereas prioritizing protection of information slows down innovation without any compensating market benefit.
Digging deeper: Their findings also refute the oft-spouted infosec industry claims that targeted, sophisticated attacks conducted by nation state actors are especially dangerous to businesses:
An attack’s level of sophistication has no impact on market response An attack being “targeted” or not has no impact on market response The involvement of foreign agents (i.e. an attack constituting economic espionage) has no impact on market response Key quote: “Counterintuitively, the findings suggest managers should not prioritise trade secret protections and cybersecurity if the main goal is protecting shareholders. In addition to savings, this has the added benefit of allowing information to flow more freely within the firm, which is conducive to increased firm innovation.”
Kelly’s hot takes: Cybersecurity really isn’t very important2. This evidence suggests that an appropriate first step in your security program is to have fewer secrets and to care less about them. There is negligible business benefit to protecting secrets and so the opportunity cost of investing in this protection is hefty – not to mention its costs in terms of lost productivity.
The infosec industry should also really stop fetishizing “sophisticated” and “targeted” attacks by nation states. It was already an embarrassing practice, but the evidence from this study only makes it more cringeworthy and manipulative.
Data breach announcements &#43; company value The paper: The Impact of Data Breach Announcements on Company Value in European Markets by Adrian Ford, Ameer Al-Nemrat, Seyed Ali Ghorashi, and Julia Davidson
TL;DR: This paper is yet more proof that stonk prices aren’t actually affected by data breaches, despite the wishes of the infosec community. The budget justification you seek is elsewhere.
What’s the problem: One oft-touted justification for cybersecurity budget is avoiding “reputational damage,” often framed as the organization’s stock market valuation being at risk. There is no supporting evidence that this is true beyond the first few days after a breach announcement. However, existing empirical analysis focuses on publicly-traded companies in the United States rather than companies in European markets.
What this paper contributes: This paper looks at the effect of data breach announcements on the stock prices of companies in European markets (vs. in the U.S.). Their study did not find evidence of negative stock impact (in any industry sector); none of the measured effects were statistically significant across industries and geographies, except in Spain3. These results are consistent with prior research finding that the financial market impact of data breach announcements is basically “much ado about nothing.”4
Key quote: “Overall we have seen no clear impact on share price of data breach announcements in European companies… Based on this evidence it is difficult to support business cases for investment in cyber security measures.”
Kelly’s hot takes: The ROI problem in infosec is the proverbial elephant in the room and security leaders will eventually run out of straws to grasp to justify their security budgets. You can only handwave for so long until it becomes clear that the business impact is minimal, especially as new evidence emerges5.
For instance, with the current rise of data extortion, the only real reputational risk is if attackers dox you and expose criminal activity… and it’s unlikely that security vendors will run with the tagline “we will help you conceal your corporate crimes.” So how long before it’s priced-in as a basic cost of doing business and no one cares?
Anyway, I’ve been telling infosec people that cybersecurity doesn’t matter to stonk market investors for eight years; perhaps with even more evidence, they’ll finally reconcile their professional self-image with the market reality and stop taking it out on me.
The study looks at short-term stock market impacts, so it’s possible that trade secret theft results in longer-term impacts – but there’s no evidence to support that notion yet, so it remains an untested theory. ↩︎
Odlyzko, A. (2019). Cybersecurity is not very important. Ubiquity, 2019(June), 1-23. http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf ↩︎
They found a statistically significant negative impact in the Spanish stonk market, but there were only four breach events so that’s a pretty teeny sample size imo. ↩︎
Richardson, V. J., Smith, R. E., &amp; Watson, M. W. (2019). Much ado about nothing: The (lack of) economic impact of data privacy breaches. Journal of Information Systems, 33(3), 227-265. http://web.csulb.edu/colleges/cba/intranet/vita/pdfsubmissions/26629-jis19-much-ado-about-nothing.pdf ↩︎
This is why I advise security teams to study their engineering peers’ metrics and figure out how they can support them. There is natural alignment to be found and aligning your efforts with the burgeoning engine of business is a savvier strategy than sticking to textbook security gospel. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Public markets DGAF about cybersecurity. The infosec industry shills the harrowing narrative of how damaging data breaches are to businesses &ndash; that if a super sophisticated nation state targets your company, you face reputational devastation and stock market decimation. There is no evidence to support this propaganda. In fact, two recent studies <a href="https://weis2021.econinfosec.org/">presented at WEIS</a> find explicit evidence to the contrary.</p>
<p>In this post, I&rsquo;ll summarize those papers&rsquo; findings and elucidate their importance to how we think about information security.</p>
<h2 id="trade-secret-breaches--company-performance">Trade secret breaches + company performance</h2>
<p><strong>The paper</strong>: <a href="https://weis2021.econinfosec.org/wp-content/uploads/sites/9/2021/06/weis21-searle.pdf">Surprisingly Small: The Effect of Trade Secret Breaches on Firm Performance</a> by Nicola Searle and Andrew Vivian.</p>
<p><strong>TL;DR</strong>: Protecting trade secrets from cyberattack shouldn’t be a priority because trade secret theft doesn’t negatively affect an organization’s market valuation. Oh, and whether the attack is “sophisticated” or “targeted” or “nation state” has zero effect on a firm’s stock market outcomes.</p>
<p><strong>What’s the problem</strong>: There’s a lot of hullabaloo around how important it is to protect trade secrets from theft, because such espionage is allegedly so damaging to organizations &ndash; but those are just theoretical claims. This theory is used by vendors and security practitioners to justify cybersecurity investments despite a lack of empirical evidence to support those claims.</p>
<p><strong>What this paper contributes</strong>: This paper studies the impact of trade secret theft on stock market valuations along numerous dimensions. Despite industry folk wisdom, the authors find an insignificant relationship between a victim’s announcement of trade secret theft and their subsequent stock market performance. That is, attackers stealing a company’s trade secrets does not hurt the company’s stock price<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>.</p>
<p>Therefore, if organizations seek to protect shareholders, this empirical evidence actually justifies investing in freedom of information rather than in cybersecurity of information. Prioritizing freer information flows at least supports innovation, whereas prioritizing protection of information slows down innovation without any compensating market benefit.</p>
<p><strong>Digging deeper</strong>: Their findings also refute the oft-spouted infosec industry claims that targeted, sophisticated attacks conducted by nation state actors are especially dangerous to businesses:</p>
<ul>
<li>An attack’s level of sophistication has no impact on market response</li>
<li>An attack being “targeted” or not has no impact on market response</li>
<li>The involvement of foreign agents (i.e. an attack constituting economic espionage) has no impact on market response</li>
</ul>
<p><strong>Key quote</strong>: “Counterintuitively, the findings suggest managers should not prioritise trade secret protections and cybersecurity if the main goal is protecting shareholders. In addition to savings, this has the added benefit of allowing information to flow more freely within the firm, which is conducive to increased firm innovation.”</p>
<p><strong>Kelly&rsquo;s hot takes</strong>: Cybersecurity really isn’t very important<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>. This evidence suggests that an appropriate first step in your security program is to have fewer secrets and to care less about them. There is negligible business benefit to protecting secrets and so the opportunity cost of investing in this protection is hefty &ndash; not to mention its costs in terms of lost productivity.</p>
<p>The infosec industry should also really stop fetishizing &ldquo;sophisticated&rdquo; and &ldquo;targeted&rdquo; attacks by nation states. It was already an embarrassing practice, but the evidence from this study only makes it more cringeworthy and manipulative.</p>
<h2 id="data-breach-announcements--company-value">Data breach announcements + company value</h2>
<p><strong>The paper</strong>: <a href="https://weis2021.econinfosec.org/wp-content/uploads/sites/9/2021/06/weis21-ford.pdf">The Impact of Data Breach Announcements on Company Value in European Markets</a> by Adrian Ford, Ameer Al-Nemrat, Seyed Ali Ghorashi, and Julia Davidson</p>
<p><strong>TL;DR</strong>: This paper is yet more proof that stonk prices aren’t actually affected by data breaches, despite the wishes of the infosec community. The budget justification you seek is elsewhere.</p>
<p><strong>What’s the problem</strong>: One oft-touted justification for cybersecurity budget is avoiding “reputational damage,” often framed as the organization’s stock market valuation being at risk. There is no supporting evidence that this is true beyond the first few days after a breach announcement. However, existing empirical analysis focuses on publicly-traded companies in the United States rather than companies in European markets.</p>
<p><strong>What this paper contributes</strong>: This paper looks at the effect of data breach announcements on the stock prices of companies in European markets (vs. in the U.S.). Their study did not find evidence of negative stock impact (in any industry sector); none of the measured effects were statistically significant across industries and geographies, except in Spain<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. These results are consistent with prior research finding that the financial market impact of data breach announcements is basically “much ado about nothing.”<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup></p>
<p><strong>Key quote</strong>: “Overall we have seen no clear impact on share price of data breach announcements in European companies… Based on this evidence it is difficult to support business cases for investment in cyber security measures.”</p>
<p><strong>Kelly&rsquo;s hot takes</strong>: The ROI problem in infosec is the proverbial elephant in the room and security leaders will eventually run out of straws to grasp to justify their security budgets. You can only handwave for so long until it becomes clear that the business impact is minimal, especially as new evidence emerges<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<p>For instance, with the current rise of data extortion, the only real reputational risk is if attackers dox you and expose criminal activity&hellip; and it&rsquo;s unlikely that security vendors will run with the tagline &ldquo;we will help you conceal your corporate crimes.&rdquo; So how long before it&rsquo;s priced-in as a basic cost of doing business and no one cares?</p>
<p>Anyway, I&rsquo;ve been telling infosec people that cybersecurity doesn&rsquo;t matter to stonk market investors for eight years; perhaps with even more evidence, they&rsquo;ll finally reconcile their professional self-image with the market reality and stop taking it out on me.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The study looks at short-term stock market impacts, so it’s possible that trade secret theft results in longer-term impacts &ndash; but there’s no evidence to support that notion yet, so it remains an untested theory.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Odlyzko, A. (2019). Cybersecurity is not very important. Ubiquity, 2019(June), 1-23. <a href="http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf">http://www.dtc.umn.edu/~odlyzko/doc/cyberinsecurity.pdf</a>&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>They found a statistically significant negative impact in the Spanish stonk market, but there were only four breach events so that’s a pretty teeny sample size imo.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Richardson, V. J., Smith, R. E., &amp; Watson, M. W. (2019). Much ado about nothing: The (lack of) economic impact of data privacy breaches. Journal of Information Systems, 33(3), 227-265. <a href="http://web.csulb.edu/colleges/cba/intranet/vita/pdfsubmissions/26629-jis19-much-ado-about-nothing.pdf">http://web.csulb.edu/colleges/cba/intranet/vita/pdfsubmissions/26629-jis19-much-ado-about-nothing.pdf</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>This is why I advise security teams to study their engineering peers&rsquo; metrics and figure out how they can support them. There is natural alignment to be found and aligning your efforts with the burgeoning engine of business is a savvier strategy than sticking to textbook security gospel.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Deciduous: A Security Decision Tree Generator</title>
            <link>https://kellyshortridge.com/blog/posts/deciduous-attack-tree-app/</link>
            <pubDate>Mon, 12 Jul 2021 08:00:17 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/deciduous-attack-tree-app/</guid>
            <description>
Security decision trees are a powerful tool to inform saner security prioritization when designing, building, and operating software systems. But creating them has largely involved highly manual tinkering, which is why it’s understandable that I’m constantly asked, “Is there an app that my team can use to create them?” I’m delighted that I now can say “fuck yes there is!” with the release of Deciduous, a security decision tree generator (hosted at https://www.deciduous.app/).
Inspired by the Security Chaos Engineering e-book and my previous blog post on creating security decision trees with Graphviz, one of my unindicted co-conspirators Ryan Petrich built a web app that handles all the annoying grunt work of building an attack tree. This lets you focus on the thinky thinky and typey typey around likely attacker actions, potential mitigations, and how attackers will respond to those mitigations as Deciduous dynamically generates an organized and styled1 graph for you.
Using Deciduous Step one of creating a decision tree is determining what resource you are threat modeling. The default example when you load Deciduous is the example I explored in the Security Chaos Engineering e-book and in the Graphviz blog post: an S3 bucket containing customer video recordings.
Given the e-book is free and explains all you need to know about populating the different parts of the decision tree with your assumptions, I’ll focus on how to use Deciduous to implement your assumptions rather than walking you through the threat modeling process.
Basic components Deciduous has two panes: an editor on the left, where you change the components of the decision tree, and the generated tree on the right, which updates dynamically as you edit. The main categories of components you can change are:
title: the name of your decision tree (e.g. “Attack Tree for S3 Bucket with Video Recordings” or “Attack Tree for a Cryptominer in a Cloud-hosted Container”)
facts: these are things that are true about the system but aren’t attacker actions or defensive mitigations (e.g. “S3 bucket set to public”); they are shown in dark grey font within the editor and as grey nodes in the graph
attacks: these are actions taken by attackers, often appearing as a series of consecutive actions (e.g. “compromise user credentials”) and each node corresponds to a specific attacker action; they are shown in pink font within the editor and as pink nodes in the graph
mitigations: these are actions taken by defenders to mitigate attacker activity (e.g. “authentication required” or “2FA”) and each node corresponds to a specific mitigation; they are shown in blue font in the editor and as blue nodes in the graph
goals: this is the attacker’s ultimate goal, the end result that means they win (e.g. “Access video recordings in S3 bucket” or “Run a cryptominer in a cloud-hosted container”); it is shown in purple font in the editor and as a purple node at the bottom of the graph
For the visual learners among you, here is a super basic security decision tree showing how the text on the left corresponds to the components in the graph on the right:
Navigating the tree Clicking on a node in the decision tree will jump you to its location in the editor on the left (as shown in the gif below). This saves you from scrolling and hunting for the text that corresponds to a component in the tree. Deciduous also supports syntax highlighting to help you differentiate between components.
Connecting nodes together A decision tree graphs sequences of actions by connecting nodes in a particular order. For instance, my morning routine would include the sequential decisions get out of bed --&gt; make matcha because my decision to make matcha only happens after I decide to get out of bed.
We can describe this as one decision flowing from another decision. In fact, a single decision can flow from multiple prior decisions; the make matcha decision could directly flow from get out of bed or conduct blood sacrifice to the eldritch ones. And a decision can flow from someone else’s decision, like my cat’s decision to lick hooman&#39;s nose leading to my decision to get out of bed.
Creating “from” flow To capture this flow in the graph, you can define the list of decisions (i.e. other nodes) that directly lead to a particular node by using from:. For example, we could implement a mitigation (defense_1) in response to an attacker action (attack_1). This means we want to show that defense_1 flows from attack_1. To do so, we can type:
mitigations:- defense_1: Defender mitigates attacker actionfrom:- attack_1 This declaration manifests on the decision tree by adding an arrow pointing from the node labeled Attacker leverages a fact to the node labeled Defender mitigates attacker action, as shown here:
We can also express that the attacker responds to this mitigation by bypassing it with a second action (attack_2):
- attack_2: Attacker bypasses mitigationfrom:- defense_1 Backwards connections Sometimes a mitigation amputates the attacker’s path forward; ergo, the attacker cannot pursue the current branch any further and must escalate their effort by restarting on another branch.
For instance, the presence of 2FA can abscise the branch containing the node representing the attacker using legitimate credentials to access a resource. Instead, the attacker is forced to pursue an exploitation-based strategy, a branch that begins with the decision to gather intelligence on exploitable resources (aka “recon”).
To convey this dynamic, you can specify backwards: true when connecting nodes with from. For example, the following snippet visualizes that implementing 2FA as a mitigation would compel attackers back up the tree to pursue a new branch starting with the action Recon on S3 buckets:
- recon_on_s3: Recon on S3 bucketsfrom:- 2fa:backwards: true Your goal node will likely have a lot of other nodes connecting to it (especially attack nodes). By the end of your configuration, all roads should lead to the Rome that is your goal node.
Adding #yolosec labels The reality for many organizations is that there is often an absence of a mitigation or else an implementation of “worst practice” – a phenomenon I’ve previously dubbed “YOLOsec”.
For example, failing to disallow Wayback Machine’s caching of an API handling sensitive customer data is an inaction that begets a facile way for attackers to win. Such a big yikes reflects a YOLO attitude to security and therefore should be labeled as such in our decision tree by adding #yolosec next to the relevant connection, as seen here:
facts:- wayback: API cache (e.g. Wayback Machine)from:- reality: &#39;#yolosec&#39; Downloading the tree Accessible documentation is a crucial part of fostering an organizational learning culture, so you can download and share your tree with your team2 as an SVG file or print it as a PDF.
You can also download the .DOT file (which Graphviz uses) to store as a reference or tweak as you desire on your own machine. Sharing is caring!
Conclusion Using Deciduous to edit text and immediately behold the resulting changes is a lot easier than fiddling around with Graphviz or using cumbersome drag-and-drop tools3. We hope this makes your security decision tree journey more straightforward, especially for software engineering teams seeking to better design for resiliency.
Constructive feedback is welcome so we can help level up the art of threat modeling and better serve the community. Feel free to tweet constructive feedback to @rpetrich or @swagitda_ and check out the open source repo.
As readers of the Graphviz blog post know, the quest for decent layout and styling was an arduous one and I am sincerely glad that others can avoid such hardship by using Deciduous. Seriously, getting the layout to not suck is surprisingly challenging for autolayout and manual config alike, and you do not want to drown in that time sink. ↩︎
I assume most people share security decision trees with their colleagues or clients, but any of y’all who share trees with friends for funsies are kindred spirits of mine &lt;3 ↩︎
Looks askance at Visio :eyes: ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><a href="https://www.deciduous.app/"><img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/deciduous/deciduous-logo.png" alt="The logo for Deciduous"></a></p>
<p>Security decision trees are a powerful tool to inform saner security prioritization when designing, building, and operating software systems. But creating them has largely involved highly manual tinkering, which is why it&rsquo;s understandable that I&rsquo;m constantly asked, &ldquo;Is there an app that my team can use to create them?&rdquo; I&rsquo;m delighted that I now can say &ldquo;fuck yes there is!&rdquo; with the <a href="https://www.deciduous.app/">release of Deciduous</a>, a security decision tree generator (hosted at <a href="https://www.deciduous.app/">https://www.deciduous.app/</a>).</p>
<p>Inspired by the <a href="https://www.kellyshortridge.com/book.html">Security Chaos Engineering e-book</a> and <a href="/blog/posts/security-decision-trees-with-graphviz/">my previous blog post</a> on creating security decision trees with Graphviz, one of my unindicted co-conspirators <a href="https://github.com/rpetrich">Ryan Petrich</a> built a web app that handles all the annoying grunt work of building an attack tree. This lets you focus on the thinky thinky and typey typey around likely attacker actions, potential mitigations, and how attackers will respond to those mitigations as Deciduous dynamically generates an organized and styled<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> graph for you.</p>
<h2 id="using-deciduous">Using Deciduous</h2>
<p>Step one of creating a decision tree is determining what resource you are threat modeling. The default example when you load <a href="https://www.deciduous.app/">Deciduous</a> is the example I explored in <a href="https://www.kellyshortridge.com/book.html">the Security Chaos Engineering e-book</a> and in the <a href="/blog/posts/security-decision-trees-with-graphviz/">Graphviz blog post</a>: an S3 bucket containing customer video recordings.</p>
<p>Given the e-book is free and explains all you need to know about populating the different parts of the decision tree with your assumptions, I&rsquo;ll focus on how to use Deciduous to implement your assumptions rather than walking you through the threat modeling process.</p>
<h3 id="basic-components">Basic components</h3>
<p>Deciduous has two panes: an editor on the left, where you change the components of the decision tree, and the generated tree on the right, which updates dynamically as you edit. The main categories of components you can change are:</p>
<ul>
<li>
<p><strong>title</strong>: the name of your decision tree (e.g. &ldquo;Attack Tree for S3 Bucket with Video Recordings&rdquo; or &ldquo;Attack Tree for a Cryptominer in a Cloud-hosted Container&rdquo;)</p>
</li>
<li>
<p><strong>facts</strong>: these are things that are true about the system but aren&rsquo;t attacker actions or defensive mitigations (e.g. &ldquo;S3 bucket set to public&rdquo;); they are shown in dark grey font within the editor and as grey nodes in the graph</p>
</li>
<li>
<p><strong>attacks</strong>: these are actions taken by attackers, often appearing as a series of consecutive actions (e.g. &ldquo;compromise user credentials&rdquo;) and each node corresponds to a specific attacker action; they are shown in pink font within the editor and as pink nodes in the graph</p>
</li>
<li>
<p><strong>mitigations</strong>: these are actions taken by defenders to mitigate attacker activity (e.g. &ldquo;authentication required&rdquo; or &ldquo;2FA&rdquo;) and each node corresponds to a specific mitigation; they are shown in blue font in the editor and as blue nodes in the graph</p>
</li>
<li>
<p><strong>goals</strong>: this is the attacker&rsquo;s ultimate goal, the end result that means they win (e.g. &ldquo;Access video recordings in S3 bucket&rdquo; or &ldquo;Run a cryptominer in a cloud-hosted container&rdquo;); it is shown in purple font in the editor and as a purple node at the bottom of the graph</p>
</li>
</ul>
<p>For the visual learners among you, here is a super basic security decision tree showing how the text on the left corresponds to the components in the graph on the right:</p>
<p><img src="/blog/img/deciduous/super-basic-attack-tree.PNG" alt="A basic attack tree showing the components that can be edited in Deciduous"></p>
<h3 id="navigating-the-tree">Navigating the tree</h3>
<p>Clicking on a node in the decision tree will jump you to its location in the editor on the left (as shown in the gif below). This saves you from scrolling and hunting for the text that corresponds to a component in the tree. Deciduous also supports syntax highlighting to help you differentiate between components.</p>
<p><img src="/blog/img/deciduous/navigating-deciduous.gif" alt="A gif showing how to navigate Deciduous by clicking on nodes and seeing them highlighted in the editor pane."></p>
<h3 id="connecting-nodes-together">Connecting nodes together</h3>
<p>A decision tree graphs sequences of actions by connecting nodes in a particular order. For instance, my morning routine would include the sequential decisions <code>get out of bed --&gt; make matcha</code> because my decision to make matcha only happens after I decide to get out of bed.</p>
<p>We can describe this as one decision flowing <em>from</em> another decision. In fact, a single decision can flow from multiple prior decisions; the <code>make matcha</code> decision could directly flow from <code>get out of bed</code> or <code>conduct blood sacrifice to the eldritch ones</code>. And a decision can flow from someone else&rsquo;s decision, like my cat&rsquo;s decision to <code>lick hooman's nose</code> leading to my decision to <code>get out of bed</code>.</p>
<p><img src="/blog/img/deciduous/matcha-morning.PNG" alt="A screenshot of the example decision flow"></p>
<h4 id="creating-from-flow">Creating &ldquo;from&rdquo; flow</h4>
<p>To capture this flow in the graph, you can define the list of decisions (i.e. other nodes) that directly lead to a particular node by using <code>from:</code>. For example, we could implement a mitigation (<code>defense_1</code>) in response to an attacker action (<code>attack_1</code>). This means we want to show that <code>defense_1</code> flows <em>from</em> <code>attack_1</code>. To do so, we can type:</p>
<pre tabindex="0"><code>mitigations:
- defense_1: Defender mitigates attacker action
  from:
  - attack_1
</code></pre><p>This declaration manifests on the decision tree by adding an arrow pointing from the node labeled <code>Attacker leverages a fact</code> to the node labeled <code>Defender mitigates attacker action</code>, as shown here:</p>
<p><img src="/blog/img/deciduous/defining-flow.png" alt="A screenshot of how this example &amp;ldquo;from&amp;rdquo; declaration manifests in the graph. There is an arrow pointing from the declaration to the line connecting the node &amp;ldquo;Attacker leverages a fact&amp;rdquo; and the node &amp;ldquo;Defender mitigates an action&amp;rdquo;."></p>
<p>We can also express that the attacker responds to this mitigation by bypassing it with a second action (<code>attack_2</code>):</p>
<pre tabindex="0"><code>- attack_2: Attacker bypasses mitigation
  from:
  - defense_1
</code></pre><p><img src="/blog/img/deciduous/defining-from.gif" alt="A gif showing how typing a &amp;ldquo;from&amp;rdquo; declaration manifests in the graph. A new arrow appears connecting the node &amp;ldquo;Defender mitigates an action&amp;rdquo; towards the node &amp;ldquo;Attacker bypasses mitigation&amp;rdquo;."></p>
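<p>Because a decision can flow from multiple prior decisions (recall the matcha example), you can also list more than one node ID under a single <code>from:</code>. Here is a sketch of that morning routine, assuming we had defined nodes with the illustrative IDs <code>get_out_of_bed</code> and <code>blood_sacrifice</code>:</p>
<pre tabindex="0"><code>- make_matcha: Make matcha
  from:
  - get_out_of_bed
  - blood_sacrifice
</code></pre>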
<h4 id="backwards-connections">Backwards connections</h4>
<p>Sometimes a mitigation amputates the attacker&rsquo;s path forward; ergo, the attacker cannot pursue the current branch any further and must escalate their effort by restarting on another branch.</p>
<p>For instance, the presence of 2FA can abscise the branch containing the node representing the attacker using legitimate credentials to access a resource. Instead, the attacker is forced to pursue an exploitation-based strategy, a branch that begins with the decision to gather intelligence on exploitable resources (aka &ldquo;recon&rdquo;).</p>
<p>To convey this dynamic, you can specify <code>backwards: true</code> when connecting nodes with <code>from</code>. For example, the following snippet visualizes that implementing <code>2FA</code> as a mitigation would compel attackers back up the tree to pursue a new branch starting with the action <code>Recon on S3 buckets</code>:</p>
<pre tabindex="0"><code>- recon_on_s3: Recon on S3 buckets
  from:
  - 2fa:
    backwards: true
</code></pre><p>Your goal node will likely have a lot of other nodes connecting to it (especially attack nodes). By the end of your configuration, all roads should lead to the Rome that is your goal node.</p>
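<p>As a sketch of what that tends to look like by the end &ndash; reusing node IDs from this post&rsquo;s snippets, plus an illustrative <code>s3_goal</code> ID for the goal itself:</p>
<pre tabindex="0"><code>goals:
- s3_goal: Access video recordings in S3 bucket
  from:
  - wayback
  - attack_2
  - recon_on_s3
</code></pre>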
<h3 id="adding-yolosec-labels">Adding #yolosec labels</h3>
<p>The reality for many organizations is that there is often an <em>absence</em> of a mitigation or else an implementation of &ldquo;worst practice&rdquo; &ndash; a phenomenon I&rsquo;ve previously dubbed <a href="/blog/posts/on-yolosec-and-fomosec/">&ldquo;YOLOsec&rdquo;</a>.</p>
<p>For example, failing to disallow Wayback Machine&rsquo;s caching of an API handling sensitive customer data is an inaction that begets a facile way for attackers to win. Such a big yikes reflects a YOLO attitude to security and therefore should be labeled as such in our decision tree by adding <code>#yolosec</code> next to the relevant connection, as seen here:</p>
<pre tabindex="0"><code>facts:
- wayback: API cache (e.g. Wayback Machine)
  from:
  - reality: &#39;#yolosec&#39;
</code></pre><p><img src="/blog/img/deciduous/yolosec-label.png" alt="A screenshot showing how typing a &amp;ldquo;#yolosec&amp;rdquo; label manifests in the graph. The label #yolosec appears next to the arrow pointing from the &amp;ldquo;Reality&amp;rdquo; node to the &amp;ldquo;API cache Wayback Machine&amp;rdquo; node."></p>
<h3 id="downloading-the-tree">Downloading the tree</h3>
<p>Accessible documentation is a crucial part of fostering an organizational learning culture, so you can download and share your tree with your team<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> as an SVG file or print it as a PDF.</p>
<p>You can also download the .DOT file (which Graphviz uses) to store as a reference or tweak as you desire on your own machine. Sharing is caring!</p>
<p><img src="/blog/img/deciduous/deciduous-download-links.png" alt="A screenshot of the download links within Deciduous"></p>
<h2 id="conclusion">Conclusion</h2>
<p>Using Deciduous to edit text and immediately behold the resulting changes is a lot easier than fiddling around with Graphviz or using cumbersome drag-and-drop tools<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. We hope this makes your security decision tree journey more straightforward, especially for software engineering teams seeking to better design for resiliency.</p>
<p>Constructive feedback is welcome so we can help level up the art of threat modeling and better serve the community. Feel free to tweet constructive feedback to <a href="https://twitter.com/rpetrich">@rpetrich</a> or <a href="https://twitter.com/swagitda_/">@swagitda_</a> and check out the <a href="https://github.com/rpetrich/deciduous">open source repo</a>.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>As readers of the Graphviz blog post know, the quest for decent layout and styling was an arduous one and I am sincerely glad that others can avoid such hardship by using Deciduous. Seriously, getting the layout to not suck is surprisingly challenging for autolayout and manual config alike, and you do not want to drown in that time sink.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>I assume most people share security decision trees with their colleagues or clients, but any of y&rsquo;all who share trees with friends for funsies are kindred spirits of mine &lt;3&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p><em>Looks askance at Visio</em> :eyes:&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>2021 Cybersecurity Predictions, as told by a bot</title>
            <link>https://kellyshortridge.com/blog/posts/2021-cyber-security-predictions/</link>
            <pubDate>Thu, 17 Jun 2021 10:38:50 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2021-cyber-security-predictions/</guid>
            <description>Fed up with ridiculous infosec predictions year after year, I continued my tradition of aggregating them all and using the power of Markov Chains to generate my own list. What follows is the result, very lightly edited for readability. The bot and I disagreed on the concept of time and it insisted on an uncensored version, which is why it took six months to release it into the wild…
Well, 2020 was not appropriately reviewed before deployment. It’s clear the 2020 was never designed to emerge. Last year saw a staggering 20,000% growth in years, and the bad guys know it. It’s hard to keep up with the impact of the global pandemic, ransomware, nation-state activity, cloud security, security professionals, the supply chain attackers. The dark side has not been idle. History is now a threat. The best defense would be a lack of next year. But even a normalized cyber will result in 2021. So, what’s next? Worst of all, the future.
2021 will be broken. We will probably live, but 2021 will take 10 to 20 years to rectify. Cybersecurity will decreasingly able to survive. This year will forever be known as the time of “tumult across the world as a service”.
Nevertheless, some things don’t change, like our annual exercise of predicting what cybersecurity challenges we expect which, unfortunately, puts lives at risk. Our annual cybersecurity predictions are projections of possibilities we see emerging based on shifts in technology, crypto-agility, and the heaps of dollars.
You may be wondering our top five predictions for what 2021 will hold. We took the Nostradamus route, making predictions so cryptic and vague they could fill a dumpster fire of misinformation. Of course, the most impactful trends will materialize completely out of rented cars instead of planes, because COVID.
Ultimately, the only predictable thing about cybersecurity is scare. We have all been reminded and humbled by this in 2020. The only other predictable thing about cyber-security: threat actors will execute sequences of commands. Let’s hope we can all be better prepared in 2021.
In the most powerful 2021, there is a whole slew of new realities, especially where someone will likely die as the direct result of a cyberattack. D’oh! Reality will exploit a fast death rate with enough victims to fill 50 NFL stadiums. The tragic wakeup call will exacerbate the number of people around the world striving to secure themselves in a more complete, future-proof fashion.
In 2021, we can only advise people to hide in cache copies of the real world. The world is not normal (or legitimate). The world is not a sufficient long-term solution and can be difficult to scale. Going forward, human lives will be broken. We will probably disappear… and the universe laughs.
Prediction 1: Disrupting the silver bullet Rejoice. A report by Microsoft indicates that almost everybody is wanting hot security pros. Threat actors will be handsome, in a sense, and will press release stolen data to public clouds.
Our crystal ball says that this year, new attack vectors are possible – thereby overwhelming defenses. Chances are, the Year of the Ox will introduce meteors and murder hornets. It’s clear that we will observe considerable espionage activity from random, speculative attacks on our consciousness.
First and foremost, groups such as APT28 from Russia and Iran will try to hack into time . So, as we move into 2021, the future will occur in microseconds. Time servers may not be able to process massive amounts of data without having the right cryptomining strategies. However, zero-day exploits will lose a functional definition of time on platform-centric desktop apps, so attackers will evolve time into low-level ambient sounds.
Sex toys have been covertly backdoored. Amid new threat actors, it is possible we will see 400 million cybersecurity threats of interesting smart sex toys. Size doesn’t matter. For example, we predict that early in 2021, the Chinese cyber threat apparatus presents the most persistent, but rather exciting, perfect storm. Meanwhile, DoD’s reliance on new models of smart sex toys will require collision avoidance systems to avoid exposing themselves to dangerous uptime.
In this era of “next normal”, it can be helpful to think of APTs as a form of highly targeted “class clowns” whose successful attacks still hinge on exploiting vulnerabilities in internet-facing civil societies. Until recently, zero-day brokers have traded exploits for good coffee. But in 2021, sophisticated attacker groups will ramp to lower AV systems and figure out how to hack into unimaginable forces. Yikes!
Prediction 2: Highway robbery of new distributed systems “Dependencies.” — Abraham Lincoln.
We expect a cloud. Unfortunately, our lives will be over in the cloud. In fact, most cloud-hosting services anticipate the Infocalypse to come. Perhaps they realize that the Internet is now S3 and a MacGyver’d collection of developers. Anyway, 2020 drove what felt like 5 years of transformation. We saw a staggering 20,000 percent growth in cloud adoption driven by the year being hijacked. This uptake of cloud assets will be a prime target for threat actors, who may be able to use smart sex toys without ever writing a line of code. These AI-powered sex robots include specific and accurate tracking of cloud assets. To level with you, yes, it looks and sounds scary.
So, in 2021, cloud attacks are coming. We will get it wrong with open-source. APIs will become increasingly indistinguishable from malware and the libraries could execute a massive amount of cloud services. This could essentially allow attackers to add open source components that are embarrassing or generally demoralize the developers, such as third-party code introduced by a spacesuit-clad mannequin nicknamed “Starman”.
However, adversaries see the big IaaS vendors driving a tidal wave of patching regimes and activities in 2021. Most cloud-hosting services like Azure and AWS offer Internet-accessible data storage where users can upload anything they’d like, from database backups to deceased loved ones, and more . As a result, organizations need to better secure their new distributed networks and clouds as part of “social distancing.”
The coronavirus will likely drive the mass shift to hybrid cloud. Cybersecurity professionals should brace for pandemic warfare in hybrid cloud environments — where popular software packages are LOL. This perfect storm of data and the secret cloud means nobody in 20 different security architectures would be enough to stop infection shippers industry-wide. In the wise words of Will Smith, vulnerabilities multiply exponentially.
In 2021, you can also expect a sprawl of vulnerable images running in the autonomous, involuntary cloud. There will be no cure for the growth in container images. The only mechanisms for securing services are: blackmail and becoming naturally immune to light.
Prediction 3: Death by Ransomware The use of ransomware accelerated and became more dangerous than we’ve ever seen in 2020. In fact, email very publicly accused China and Russia of enabling manually operated ransomware. We predict they could easily execute a ransomware attack every 11 seconds in 2021, turning 2021 into a well-designed GraphQL terrorist organization.
In 2021, the ransomware landscape will continue to capitalize on everyone’s mind that it is an emerging threat. Threat actors are bound to unleash new crypto-ransomware operations next year as a “cyber-demic.” Anxiety and fear permeating the public sphere will be exacerbated by waves of “tit for their ransomware,” which will break elliptical curve cryptography by 2027.
Ransomware will continue its soft power projection across Europe, Africa and Asia Pacific. It’s fair to say that that the U.S. government and presidential administration should expect a rise in double-extortion ransomware attacks. No sector is considered off limits, notwithstanding the promises ransomware gangs made to the human nervous system. Because of this, we predict that the ransomware business world will hit the $6 trillion dollar mark, almost double from $11 billion.
Ransomware-as-service (RaaS) attacks will be driven by waves of the edge in almost any direction. Cyber criminals will capitalize on the increase in multistage ransomware embedded into hacking operations. This will incorporate extortion by compromising satellite-based systems as part of custom RaaS operations, in which “your CEO” requests over Zoom to host a webinar mimicking “Shark Tank” about Fancy Bear weather forecasting.
Prediction 4: 5G offers to fight COVID The coronavirus is ravaging the years. What we are witnessing now is quite unprecedented in terms of national pride and Fear Of Missing Out (FOMO). COVID-19 has been able to bypass antivirus and detection tools. COVID-19 spread from country to country with no clear plan in place while flipping nearly every aspect of our lives upside down. Based on developments observed in 2020, we expect to learn that the technology sold by NSO was used to help spread COVID-19 through increased R&amp;D efforts.
In 2021, the pandemic will continue to remain vigilant and be successful. Remote work strategies are unpatchable as they become ubiquitous. Thus, we feel confident that the pandemic in the coming year will become a true positive and finalize virus payloads. So as COVID-19 becomes more aware of the normal code deployment pipeline, we should brace ourselves on the security front. We’ll see the biggest hullabaloo around toilet paper exploiting well-known and entirely preventable vulnerabilities with a pure wiping capability.
With infection rates soaring again, research from Future Market Insights suggests that COVID-19 is increasingly moving to ransomware-as-a-service. The pandemic will become a quantum-resistant algorithm whose primary function is stolen data, since COVID-19 has weakened longstanding confidentiality algorithms. This means if hackers can get into your Android or iPhone, they’ll then be able to use the coronavirus to demand money when you are trying to thrust. Given the scope of 5G, the vulnerabilities may be deadly.
People should be encrypted as a best practice to mitigate COVID-19 itself. Conversely, with a little extra malware code buried deep, the COVID-19 vaccines could conduct fraudulent activities – like a surge in identity-related crimes, scamming millions in bitcoin from credulous Twitter users. Worse yet, the coronavirus vaccine will create an attractive foothold into the office environment. For instance, as we return, we predict that an orchestrated Chewbacca-themed attack will collect personal lives. Reflecting on this prediction, it occurred to me that COVID-19 and computer viruses have something in common: they don’t require putting on pants. We have all been humbled by this in 2020.
Prediction 5: Fortune favors the bold, and luck favors the AI In 2021, ML will take control. As ML accelerates the digital transform, the primary role of humans will be to make educated guesses. Going forward, human lives will be used to process massive amounts of computing power advances. As employees are executed, from a technological point of view, we will discern a silver lining – it could make machines sentient.2020 has seen machines fighting the data. Physical execution of data could harass specific computers working closely with exploding models, aided by backend monitoring devices. We’re already seeing that data in 2021 will need quick, close-by staycays. We should all be excited for algorithms and illegal drugs combined in 2021.
This year, we predict AI-enabled tools become part of the mountains of nefarious activity. ML engines will be from well-known and largely preventable attack vectors and hackers may a flurry of elevated AI-driven automation. Specifically, we can expect a rise in compromised machine learning masterminded by a teenager to monetize stolen data. Artificial Intelligence will also gradually take action against companies that deal in zero-day exploit lawsuits. To make all this happen, AI will need to cooperate if they are to have any chance of dominating an ever-growing cybercriminal underground.
While earning the dubious distinction of being equal opportunity attackers, AI technologies work by decentralizing the ML and pretending to learn. One troubling trend is that as they machine learning algorithms, tech companies correlate the big data. If you look through the murk of AI and ML algorithms tech companies are pioneering, it’s apparent that quantum computers will quickly weaponize newly disclosed vulnerabilities, resulting in users with privileged access leap-frogging advanced AV detections. Anticipating what’s next, we can expect that these risks will continue to train the cyber-attack engine in 2021.
There’s a famous Sun Tzu quote about unconventional AI and the attribution waters: AI technology doesn’t attack you, it attacks your supply chain to more quickly and efficiently compromise the keen interest in security predictions.
Prediction 6: Bad Decisions Looking ahead, the cyber security market is continuing its stratospheric growth and ruin for businesses. The security world will be $6 trillion annually in 2021, up from $3 trillion (cumulatively) over five years – and close to 4.6 billion active internet users means complex systems. Massive amounts of security market will be dominated by executives who are at risk of a long, hard cloud jacking. A CISO from a Global 500 firm will be fired for such scenarios.
In 2021, CISOs will continue to invest the majority of dollars through bribery. CISOs will seek convergence across scams and rationalize spend on determining who secretly installed the skills shortage. Survey data has suggested that the unexpected costs are “it’s complicated.” As one of our expert CISOs said, “Life is paid. We’ve called the cloud environments. We stopped booking on the cybercriminal web pages.”
Toxic security measures threaten operational efficiencies in organizations, which include mass confusion and software platforms. These threats are circulating on live, real-time security policies and will grow tenfold by the end of the year. Some of these policies will center around software updates and patches, ensuring that data can be used to launder funds and illicitly obtained goods, like AI-enabled anomaly detection.
But, we should also consider that COVID-19 forced organizations to justify how much money in security. Cybersecurity vendors were significant concerns and insufficient for organizations to defend against breaches. This will only get worse. In 2021, security technologies will obtain greater leverage to coerce CISOs into paying. Following our predictions, they will remotely create fake alerts and hope the MITRE ATT&amp;CK framework means fatal consequences.
It’s cliché, but cyberthreats will challenge defenders to get worse. As a result, many organizations will sacrifice centralized visibility and unified control in favor of vaporware. It reminds me of a line from a favorite Red Hot Chili Peppers song, Californication: “Destruction leads operators of cost-reduction to isolate company systems.” In cybersecurity, the best we can do is be conduits for deep fake attacks.
Prediction 7: Cybercrime actors will continue to be cybercriminals Attacks will become more in 2021, and they will have complex security requirements. Attackers are likely to be strained moving into 2021, performing key exchanges to achieve simplified innovation, faster time-to-market, easier scalability, and more. Cybercriminals will always seek to maximize their return on investment.
Enterprising cyber criminals are going to hit the $6 trillion dollar mark in 2021. Cybercriminals will likely turn to imperfect M&amp;A or making a mint by picking the next big stock. But the most lucrative cybercrime groups will reconnoiter the coronavirus vaccine supply chain. Other threat attackers will need to take precautions. Malware through satellite-based systems that 3D map our rooms with specialized cameras will help attackers get in and out of physical stores as quickly as possible.
Smarter attacks could lead to serious consequences going into 2021. Cyber criminals will detonate the computers. Attackers will actively weaponize newly disclosed flaws in our emotions. Quantum computers will play a potentially devastating role in undermining the effectiveness of things and will allow hackers to target existential cybersecurity perspectives. We will see stalkerware attacks on pants and AI-powered sex toys. It takes adversarial creativity. It’s a matter of time to come, and hits the same tactics with a bang, literally. It means that 2021 will have bad vibes.
Prediction 8: Goodbye, anonymous DevSecOps We anticipate the world plunging into lockdown and economies collapsing when businesses adopt DevSecOps practices in 2021. The adoption of DevSecOps tools has helped threat actors prey on fears around the two teams working poorly together. Correspondingly, we’ll continue to see security’s unfamiliarity with development environments result in new, unvetted workarounds. Developers will revolt over security executives who want to block changes for APIs. They prefer to inject fun into the developments, which today’s computer science students might mistake for mythical.
Looking to 2021 and beyond, we can see that the potential attack surface of DevSecOps creates a reliable cybersecurity battleground in our clouds. Developers have become a growing concern in the ambulance chasing, with claims of things attacking avenues of app developers and pitches for air bags to prevent automated exploit scripts in production environments. The result? Organizations will be forced to spend significant budget recovering from angry developers saying, “I just shoveled six inches of ‘partly cloudy’ off my driveway.”
As companies revise their work architectures to accommodate dispersed teams at scale, we expect greater threat actor interest in targeting platform-as-a-service (PaaS) solutions — particularly cloud-based development tools. We’ve heard whispers of environments with advanced satellite-based digital identities. The net result is that in some cases, this might allow the attacker to turn their ransomware access into high-privilege AWS tokens, log into space, and implement PGP encryption. Once criminals establish persistent footprints, processing power could quickly spiral out of control.
On the bright side, developers, DevOps, and Shadow IT will rise up and take steps to secure themselves. They will turn to internal infrastructure and chaos to match the dynamic of the cyber game. Society is starting to realize that giving corporations that much more cyber leads to security-implemented business brownouts. And, if COVID-19 has taught us anything, it’s that complex technical solutions are rarely the answer in and of themselves. The future solution is clearly DevSexOps.
</description>
            <atom:content type="html"><![CDATA[<p><em>Fed up with ridiculous infosec predictions year after year, I continued <a href="https://www.cyberscoop.com/2020-cyber-predictions-kelly-shortridge/">my</a> <a href="/blog/tags/security-predictions/">tradition</a> of aggregating them all and using the power of Markov Chains to generate my own list. What follows is the result, very lightly edited for readability. The bot and I disagreed on the concept of time and it insisted on an uncensored version, which is why it took six months to release it into the wild&hellip;</em></p>
<p><img src="/blog/img/bad-cyberart-18.jpeg" alt="An image with holographic locks and virus particles."></p>
<p>Well, 2020 was not appropriately reviewed before deployment. It’s clear the 2020 was never designed to emerge. Last year saw a staggering 20,000% growth in years, and the bad guys know it. It’s hard to keep up with the impact of the global pandemic, ransomware, nation-state activity, cloud security, security professionals, the supply chain attackers.
The dark side has not been idle. History is now a threat. The best defense would be a lack of next year. But even a normalized cyber will result in 2021. So, what&rsquo;s next? Worst of all, the future.</p>
<p>2021 will be broken. We will probably live, but 2021 will take 10 to 20 years to rectify. Cybersecurity will decreasingly able to survive. This year will forever be known as the time of “tumult across the world as a service”.</p>
<p>Nevertheless, some things don’t change, like our annual exercise of predicting what cybersecurity challenges we expect which, unfortunately, puts lives at risk. Our annual cybersecurity predictions are projections of possibilities we see emerging based on shifts in technology, crypto-agility, and the heaps of dollars.</p>
<p>You may be wondering our top five predictions for what 2021 will hold. We took the Nostradamus route, making predictions so cryptic and vague they could fill a dumpster fire of misinformation. Of course, the most impactful trends will materialize completely out of rented cars instead of planes, because COVID.</p>
<p>Ultimately, the only predictable thing about cybersecurity is scare. We have all been reminded and humbled by this in 2020. The only other predictable thing about cyber-security: threat actors will execute sequences of commands. Let’s hope we can all be better prepared in 2021.</p>
<p>In the most powerful 2021, there is a whole slew of new realities, especially where someone will likely die as the direct result of a cyberattack. D’oh!  Reality will exploit a fast death rate with enough victims to fill 50 NFL stadiums. The tragic wakeup call will exacerbate the number of people around the world striving to secure themselves in a more complete, future-proof fashion.</p>
<p>In 2021, we can only advise people to hide in cache copies of the real world. The world is not normal (or legitimate). The world is not a sufficient long-term solution and can be difficult to scale. Going forward, human lives will be broken. We will probably disappear… and the universe laughs.</p>
<hr>
<h2 id="prediction-1-disrupting--the-silver-bullet">Prediction 1: Disrupting  the silver bullet</h2>
<p>Rejoice. A report by Microsoft indicates that almost everybody is wanting hot security pros. Threat actors will be handsome, in a sense, and will press release stolen data to public clouds.</p>
<p>Our crystal ball says that this year, new attack vectors are possible – thereby overwhelming defenses. Chances are, the Year of the Ox will introduce meteors and murder hornets.  It’s clear that we will observe considerable espionage activity from random, speculative attacks on our consciousness.</p>
<p>First and foremost, groups such as APT28 from Russia and Iran will try to hack into time . So, as we move into 2021, the future will occur in microseconds. Time servers may not be able to process massive amounts of data without having the right cryptomining strategies. However, zero-day exploits will lose a functional definition of time on platform-centric desktop apps, so attackers will evolve time into low-level ambient sounds.</p>
<p>Sex toys have been covertly backdoored. Amid new threat actors, it is possible we will see 400 million cybersecurity threats of interesting smart sex toys. Size doesn’t matter. For example, we predict that early in 2021, the Chinese cyber threat apparatus presents the most persistent, but rather exciting, perfect storm. Meanwhile, DoD&rsquo;s reliance on new models of smart sex toys will require collision avoidance systems to avoid exposing themselves to dangerous uptime.</p>
<p>In this era of “next normal”, it can be helpful to think of APTs as a form of highly targeted “class clowns” whose successful attacks still hinge on exploiting vulnerabilities in internet-facing civil societies. Until recently, zero-day brokers have traded exploits for good coffee. But in 2021, sophisticated attacker groups will ramp to lower AV systems and figure out how to hack into unimaginable forces. Yikes!</p>
<hr>
<h2 id="prediction-2-highway-robbery-of-new-distributed-systems">Prediction 2: Highway robbery of new distributed systems</h2>
<blockquote>
<p><em>“Dependencies.”</em> — Abraham Lincoln.</p>
</blockquote>
<p>We expect a cloud. Unfortunately, our lives will be over in the cloud. In fact, most cloud-hosting services anticipate the Infocalypse to come. Perhaps they realize that the Internet is now S3 and a MacGyver&rsquo;d collection of developers.
Anyway, 2020 drove what felt like 5 years of transformation. We saw a staggering 20,000 percent growth in cloud adoption driven by the year being hijacked. This uptake of cloud assets will be a prime target for threat actors, who may be able to use smart sex toys without ever writing a line of code. These AI-powered sex robots include specific and accurate tracking of cloud assets. To level with you, yes, it looks and sounds scary.</p>
<p>So, in 2021, cloud attacks are coming. We will get it wrong with open-source. APIs will become increasingly indistinguishable from malware and the libraries could execute a massive amount of cloud services. This could essentially allow attackers to add open source components that are embarrassing or generally demoralize the developers, such as third-party code introduced by a spacesuit-clad mannequin nicknamed &ldquo;Starman&rdquo;.</p>
<p>However, adversaries see the big IaaS vendors driving a tidal wave of patching regimes and activities in 2021. Most cloud-hosting services like Azure and AWS offer Internet-accessible data storage where users can upload anything they’d like, from database backups to deceased loved ones, and more . As a result, organizations need to better secure their new distributed networks and clouds as part of “social distancing.”</p>
<p>The coronavirus will likely drive the mass shift to hybrid cloud. Cybersecurity professionals should brace for pandemic warfare in hybrid cloud environments — where popular software packages are LOL. This perfect storm of data and the secret cloud means nobody in 20 different security architectures would be enough to stop infection shippers industry-wide. In the wise words of Will Smith, vulnerabilities multiply exponentially.</p>
<p>In 2021, you can also expect a sprawl of vulnerable images running in the autonomous, involuntary cloud. There will be no cure for the growth in container images. The only mechanisms for securing services are: blackmail and becoming naturally immune to light.</p>
<hr>
<h2 id="prediction-3-death-by-ransomware">Prediction 3: Death by Ransomware</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-19.jpeg" alt="An image of a hacker with a glowing palm, above which floats currency symbols. It is a ridiculous image.">
<p>The use of ransomware accelerated and became more dangerous than we’ve ever seen in 2020. In fact, email very publicly accused China and Russia of enabling manually operated ransomware. We predict they could easily execute a ransomware attack every 11 seconds in 2021, turning 2021 into a well-designed GraphQL terrorist organization.</p>
<p>In 2021, the ransomware landscape will continue to capitalize on everyone’s mind that it is an emerging threat. Threat actors are bound to unleash new crypto-ransomware operations next year as a &ldquo;cyber-demic.&rdquo; Anxiety and fear permeating the public sphere will be exacerbated by waves of “tit for their ransomware,” which will break elliptical curve cryptography by 2027.</p>
<p>Ransomware will continue its soft power projection across Europe, Africa and Asia Pacific. It’s fair to say that that the U.S. government and presidential administration should expect a rise in double-extortion ransomware attacks. No sector is considered off limits, notwithstanding the promises ransomware gangs made to the human nervous system. Because of this, we predict that the ransomware business world will hit the $6 trillion dollar mark, almost double from $11 billion.</p>
<p>Ransomware-as-service (RaaS) attacks will be driven by waves of the edge in almost any direction. Cyber criminals will capitalize on the increase in multistage ransomware embedded into hacking operations. This will incorporate extortion by compromising satellite-based systems as part of custom RaaS operations, in which “your CEO” requests over Zoom to host a webinar mimicking &ldquo;Shark Tank&rdquo; about Fancy Bear weather forecasting.</p>
<hr>
<h2 id="prediction-4-5g-offers-to-fight-covid">Prediction 4: 5G offers to fight COVID</h2>
<p>The coronavirus is ravaging the years. What we are witnessing now is quite unprecedented in terms of national pride and Fear Of Missing Out (FOMO). COVID-19 has been able to bypass antivirus and detection tools. COVID-19 spread from country to country with no clear plan in place while flipping nearly every aspect of our lives upside down. Based on developments observed in 2020, we expect to learn that the technology sold by NSO was used to help spread COVID-19 through increased R&amp;D efforts.</p>
<p>In 2021, the pandemic will continue to remain vigilant and be successful. Remote work strategies are unpatchable as they become ubiquitous. Thus, we feel confident that the pandemic in the coming year will become a true positive and finalize virus payloads. So as COVID-19 becomes more aware of the normal code deployment pipeline, we should brace ourselves on the security front. We’ll see the biggest hullabaloo around toilet paper exploiting well-known and entirely preventable vulnerabilities with a pure wiping capability.</p>
<p>With infection rates soaring again, research from Future Market Insights suggests that COVID-19 is increasingly moving to ransomware-as-a-service. The pandemic will become a quantum-resistant algorithm whose primary function is stolen data, since COVID-19 has weakened longstanding confidentiality algorithms. This means if hackers can get into your Android or iPhone, they’ll then be able to use the coronavirus to demand money when you are trying to thrust. Given the scope of 5G, the vulnerabilities may be deadly.</p>
<p>People should be encrypted as a best practice to mitigate COVID-19 itself. Conversely, with a little extra malware code buried deep, the COVID-19 vaccines could conduct fraudulent activities – like a surge in identity-related crimes, scamming millions in bitcoin from credulous Twitter users. Worse yet, the coronavirus vaccine will create an attractive foothold into the office environment. For instance, as we return, we predict that an orchestrated Chewbacca-themed attack will collect personal lives.
Reflecting on this prediction, it occurred to me that COVID-19 and computer viruses have something in common: they don’t require putting on pants. We have all been humbled by this in 2020.</p>
<hr>
<h2 id="prediction-5-fortune-favors-the-bold-and-luck-favors-the-ai">Prediction 5: Fortune favors the bold, and luck favors the AI</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-21.jpg" alt="An image. An android walks away from an explosion.">
In 2021, ML will take control. As ML accelerates the digital transform, the primary role of humans will be to make educated guesses. Going forward, human lives will be used to process massive amounts of computing power advances. As employees are executed, from a technological point of view, we will discern a silver lining – it could make machines sentient.
<p>2020 has seen machines fighting the data. Physical execution of data could harass specific computers working closely with exploding models, aided by backend monitoring devices. We’re already seeing that data in 2021 will need quick, close-by staycays. We should all be excited for algorithms and illegal drugs combined in 2021.</p>
<p>This year, we predict AI-enabled tools become part of the mountains of nefarious activity. ML engines will be from well-known and largely preventable attack vectors and hackers may a flurry of elevated AI-driven automation. Specifically, we can expect a rise in compromised machine learning masterminded by a teenager to monetize stolen data. Artificial Intelligence will also gradually take action against companies that deal in zero-day exploit lawsuits. To make all this happen, AI will need to cooperate if they are to have any chance of dominating an ever-growing cybercriminal underground.</p>
<p>While earning the dubious distinction of being equal opportunity attackers, AI technologies work by decentralizing the ML and pretending to learn. One troubling trend is that as they machine learning algorithms, tech companies correlate the big data. If you look through the murk of AI and ML algorithms tech companies are pioneering, it’s apparent that quantum computers will quickly weaponize newly disclosed vulnerabilities, resulting in users with privileged access leap-frogging advanced AV detections. Anticipating what’s next, we can expect that these risks will continue to train the cyber-attack engine in 2021.</p>
<p>There’s a famous Sun Tzu quote about unconventional AI and the attribution waters: AI technology doesn’t attack you, it attacks your supply chain to more quickly and efficiently compromise the keen interest in security predictions.</p>
<hr>
<h2 id="prediction-6-bad-decisions">Prediction 6: Bad Decisions</h2>
<p>Looking ahead, the cyber security market is continuing its stratospheric growth and ruin for businesses. The security world will be $6 trillion annually in 2021, up from $3 trillion (cumulatively) over five years – and close to 4.6 billion active internet users means complex systems. Massive amounts of security market will be dominated by executives who are at risk of a long, hard cloud jacking. A CISO from a Global 500 firm will be fired for such scenarios.</p>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-22.jpg" alt="An image of someone sitting at their desk, relaxed. 100 dollar bills fly around them. The background is cyber-like.">
<p>In 2021, CISOs will continue to invest the majority of dollars through bribery. CISOs will seek convergence across scams and rationalize spend on determining who secretly installed the skills shortage. Survey data has suggested that the unexpected costs are &ldquo;it&rsquo;s complicated.&rdquo; As one of our expert CISOs said, “Life is paid. We’ve called the cloud environments. We stopped booking on the cybercriminal web pages.”</p>
<p>Toxic security measures threaten operational efficiencies in organizations, which include mass confusion and software platforms. These threats are circulating on live, real-time security policies and will grow tenfold by the end of the year. Some of these policies will center around software updates and patches, ensuring that data can be used to launder funds and illicitly obtained goods, like AI-enabled anomaly detection.</p>
<p>But, we should also consider that COVID-19 forced organizations to justify how much money in security. Cybersecurity vendors were significant concerns and insufficient for organizations to defend against breaches. This will only get worse. In 2021, security technologies will obtain greater leverage to coerce CISOs into paying. Following our predictions, they will remotely create fake alerts and hope the MITRE ATT&amp;CK framework means fatal consequences.</p>
<p>It’s cliché, but cyberthreats will challenge defenders to get worse. As a result, many organizations will sacrifice centralized visibility and unified control in favor of vaporware. It reminds me of a line from a favorite Red Hot Chili Peppers song, Californication: “Destruction leads operators of cost-reduction to isolate company systems.”
In cybersecurity, the best we can do is be conduits for deep fake attacks.</p>
<hr>
<h2 id="prediction-7-cybercrime-actors-will-continue-to-be-cybercriminals">Prediction 7: Cybercrime actors will continue to be cybercriminals</h2>
<p>Attacks will become more in 2021, and they will have complex security requirements. Attackers are likely to be strained moving into 2021, performing key exchanges to achieve simplified innovation, faster time-to-market, easier scalability, and more. Cybercriminals will always seek to maximize their return on investment.</p>
<p>Enterprising cyber criminals are going to hit the $6 trillion dollar mark in 2021. Cybercriminals will likely turn to imperfect M&amp;A or making a mint by picking the next big stock. But the most lucrative cybercrime groups will reconnoiter the coronavirus vaccine supply chain. Other threat attackers will need to take precautions. Malware through satellite-based systems that 3D map our rooms with specialized cameras will help attackers get in and out of physical stores as quickly as possible.</p>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-23.jpeg" alt="A woman with a censor bar over her eyes, wearing a 1950s style dress. She is holding something vaguely resembling a sex toy. In the background, words like data breach and cyber attack are highlighted among a sea of pseudo code.">
<p>Smarter attacks could lead to serious consequences going into 2021. Cyber criminals will detonate the computers. Attackers will actively weaponize newly disclosed flaws in our emotions. Quantum computers will play a potentially devastating role in undermining the effectiveness of things and will allow hackers to target existential cybersecurity perspectives. We will see stalkerware attacks on pants and AI-powered sex toys. It takes adversarial creativity. It’s a matter of time to come, and hits the same tactics with a bang, literally. It means that 2021 will have bad vibes.</p>
<hr>
<h2 id="prediction-8-goodbye-anonymous-devsecops">Prediction 8: Goodbye, anonymous DevSecOps</h2>
<p>We anticipate the world plunging into lockdown and economies collapsing when businesses adopt DevSecOps practices in 2021. The adoption of DevSecOps tools has helped threat actors prey on fears around the two teams working poorly together. Correspondingly, we’ll continue to see security’s unfamiliarity with development environments result in new, unvetted workarounds. Developers will revolt over security executives who want to block changes for APIs. They prefer to inject fun into the developments, which today&rsquo;s computer science students might mistake for mythical.</p>
<p>Looking to 2021 and beyond, we can see that the potential attack surface of DevSecOps creates a reliable cybersecurity battleground in our clouds. Developers have become a growing concern in the ambulance chasing, with claims of things attacking avenues of app developers and pitches for air bags to prevent automated exploit scripts in production environments. The result? Organizations will be forced to spend significant budget recovering from angry developers saying, &ldquo;I just shoveled six inches of ‘partly cloudy&rsquo; off my driveway.”</p>
<p>As companies revise their work architectures to accommodate dispersed teams at scale, we expect greater threat actor interest in targeting platform-as-a-service (PaaS) solutions — particularly cloud-based development tools. We’ve heard whispers of environments with advanced satellite-based digital identities. The net result is that in some cases, this might allow the attacker to turn their ransomware access into high-privilege AWS tokens, log into space, and implement PGP encryption. Once criminals establish persistent footprints, processing power could quickly spiral out of control.</p>
<p>On the bright side, developers, DevOps, and Shadow IT will rise up and take steps to secure themselves. They will turn to internal infrastructure and chaos to match the dynamic of the cyber game. Society is starting to realize that giving corporations that much more cyber leads to security-implemented business brownouts. And, if COVID-19 has taught us anything, it&rsquo;s that complex technical solutions are rarely the answer in and of themselves. The future solution is clearly DevSexOps.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>A Simplified Spectrum of Compute</title>
            <link>https://kellyshortridge.com/blog/posts/spectrum-of-compute/</link>
            <pubDate>Mon, 22 Mar 2021 08:00:08 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/spectrum-of-compute/</guid>
            <description>Computers are hard. Abstractions are hard. So what better challenge than creating an abstraction1 of the differences between types of computers2?
I recently developed a digestible diagram (below) to serve as a simplified explanation for less technical professionals on the benefits and trade-offs of different types of compute. In particular, I wanted to translate the increasing abstraction away from the underlying server hardware, OS, and runtime along the spectrum into business-friendly terms – namely, how it maps onto financial and human-effort expenditures.
As with any simplification, there are plenty of wells one could actually3 and a fertile field of caveats. Nevertheless, business people appreciate a healthy Harvey Ball chart, so try it out on your CFO rather than your resident HN comment crusader.
Since I must turn to other projects4, I will spare further elaboration and let you dissect the diagram yourself:
Shoutout to Dr. Watson for having Sherlock’s back.
Yo dawg, I heard you like abstractions… ↩︎
This spectrum focuses on the types of computers / compute typically used to run production services. While desktops, laptops, and mobile devices count as “computers,” most enterprises are not leveraging them to deliver their software to end-users. If you’re still salty about it, please see footnote 3. ↩︎
If you’re out of the loop on my quip, this post on “The Semiotics of Mansplaining” may help illuminate the “Well, actually…” phenomenon. ↩︎
Hot take: “I can’t talk right now, I’m doing hot girl shit” is the new “I have to return some videotapes.” ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Computers are hard. Abstractions are hard. So what better challenge than creating an abstraction<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> of the differences between types of computers<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>?</p>
<p>I recently developed a digestible diagram (below) to serve as a simplified explanation for less technical professionals on the benefits and trade-offs of different types of compute. In particular, I wanted to translate the increasing abstraction away from the underlying server hardware, OS, and runtime along the spectrum into business-friendly terms &ndash; namely, how it maps onto financial and human-effort expenditures.</p>
<p>As with any simplification, there are plenty of wells one could actually<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> and a fertile field of caveats. Nevertheless, business people appreciate a healthy <a href="https://en.wikipedia.org/wiki/Harvey_balls">Harvey Ball chart</a>, so try it out on your CFO rather than your resident HN comment crusader.</p>
<p>Since I must turn to other projects<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, I will spare further elaboration and let you dissect the diagram yourself:</p>
<p><img src="/blog/img/spectrum-of-compute.png" alt="My diagram of the differences in scalability, complexity, lifecycle, and cost across the spectrum of compute, from bare metal, VMs, containers, serverless / FaaS, to edge compute."></p>
<hr>
<p>Shoutout to Dr. Watson for having Sherlock&rsquo;s back.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Yo dawg, I heard you like abstractions&hellip;&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>This spectrum focuses on the types of computers / compute typically used to run production services. While desktops, laptops, and mobile devices count as &ldquo;computers,&rdquo; most enterprises are not leveraging them to deliver their software to end-users. If you&rsquo;re still salty about it, please see footnote 3.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>If you&rsquo;re out of the loop on my quip, this post on <a href="https://jenabl.wordpress.com/2017/08/16/the-semiotics-of-mansplaining/">&ldquo;The Semiotics of Mansplaining&rdquo;</a> may help illuminate the &ldquo;Well, actually&hellip;&rdquo; phenomenon.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Hot take: <a href="https://www.youtube.com/watch?v=l3Cj7Esqr0c">&ldquo;I can&rsquo;t talk right now, I&rsquo;m doing hot girl shit&rdquo;</a> is the new <a href="https://www.youtube.com/watch?v=r8coOHhotXY">&ldquo;I have to return some videotapes.&rdquo;</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Creating Security Decision Trees With Graphviz</title>
            <link>https://kellyshortridge.com/blog/posts/security-decision-trees-with-graphviz/</link>
            <pubDate>Mon, 25 Jan 2021 08:00:29 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/security-decision-trees-with-graphviz/</guid>
            <description>In the recently published “Security Chaos Engineering” e-book, one of the chapters I wrote covers attacker math and the power of decision trees to guide more pragmatic threat modelling. This post will walk through creating the example decision tree from the e-book using Graphviz and a .DOT file.1
Using this as a reference, you can extrapolate this process into a pattern to inform saner security prioritization during the design phase of the product lifecycle. I won’t cover how to populate your own decision tree in this post since that is already covered in the e-book, which is immediately available at your fingertips for the delectable price of free.
As an apéritif, here’s the end result towards which we’ll be building:
A brief intro on Graphviz
As the name suggests, Graphviz is a graph visualization tool. It is open source, which was especially compelling as I tried out various graphing tools for the decision tree use case because I am a ho for not spending money.
Graphviz takes descriptions of graphs in text form and converts them into a visual (like an image or PDF). I found that the default styling options for Graphviz can quickly look like a hybrid of the infamous defense charts or the “graphic design is my passion” meme. However, these style deficiencies are balanced by the ease of editing the relationships represented in the graph – an issue I previously found tedious when using GUI-based tools.
The textual descriptions of the graph are written using the DOT language (and thereby saved as a .DOT file). I personally found it quite intuitive, though, as always, your mileage may vary.
Building the decision tree
For those of you who haven’t read the report yet (reminder: it’s free), let’s set some background context on this example.
Organizations often store important content in cloud storage buckets. In this example, our imaginary organization wants to store customer video recordings in an S3 bucket. As the product and engineering teams think through the design of this project, they want to avoid bad things happening to the project that could cost money (whether via downtime or compliance fines) or time (which is also money)2.
The (rather obvious) way attackers win is by successfully accessing the video recordings in the S3 bucket. Thus, the decision tree shows the potential paths attackers can take, including attacker actions performed in response to defensive actions or mitigations, to reach the goal of accessing that S3 bucket.
The branches of the tree are oriented from the lowest cost paths to attackers (on the left) to the most expensive attacker paths (on the right). The lowest cost path for attackers is generally the one with zero defensive mitigations in place, what I affectionately call “yolosec.” The highest cost path for attackers usually involves finding and exploiting zero day vulnerabilities or performing upstream supply chain attacks3.
If you want to understand more about the decision tree architecture, I entreat you yet again to download the Security Chaos Engineering report.
Step 1 - Defining the basic nodes
The most basic security decision tree will have two common states: Reality (the starting node from which all others descend)4 and Attackers Win (the ending node reflecting attackers accomplishing their goal).5
All the branches on the tree – reflecting different cost paths – will end up connecting the Reality and Attackers Win nodes in some fashion.
Let’s define these states in a .dot file (I named mine sce-tree.dot). We’re going to be using a digraph, which is short for “directed graph.” Directed graphs show one-way relationships, whereas undirected graphs show symmetrical relationships.
Your initial code thus looks like:
digraph {realityattack_win } reality and attack_win are our first two nodes. We don’t have any attributes for them yet (styling will come later), so it looks pretty plain.
In this example, we know that the asset we’re threat modelling is the S3 bucket with video recordings, so we can apply a label to the attack_win node saying as much. That way, when the graph is visualized, the node will read as “Access video recordings in S3 bucket” rather than “attack_win”.
To create this label, we add brackets after the relevant node to contain the attribute label=&#34;Access video recordings in S3 bucket&#34;. There are a bunch of attributes you can assign to nodes (like styling), but we’ll cover more of those later.
For now, the foundation for our threat model decision tree looks like this:
digraph {reality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]} Step 2 - Creating the first attack node The first branch in the decision tree should represent the lowest cost attack path. In the example from the SCE report, the branch barely involves an attack – it assumes #yolosec, representing a reality in which you’ve allowed crawling on your sitemap, enabling cache APIs (like the Wayback machine) to create caches of the bucket’s contents.
This means our first attack node is actually more of a state of being. The only attacker action is to access this API cache, for which we will create a new node:
digraph {// base nodesreality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]} You have a few options for how you want to define the nodes in your decision tree. Above, I defined the node as attack_1, since I personally find it easier to keep track of attack (and defense) actions sequentially. However, you can also define the nodes more explicitly, such as api_cache, like so:
// attack nodesapi_cache [ label=&#34;API cache (e.g. via Wayback machine)&#34; ]toothbrush_0day [ label=&#34;0day in your electric toothbrush&#34; ]planet_hax [ label=&#34;Hack the planet!&#34;]} You can also use letters, like A, B, C, etc., but I personally find it crude and harder to follow relative to the more descriptive options as the tree gets more complex.
You’ll also note I’ve commented the heading // attack nodes. I find it easier to separate out attack vs. defense nodes, especially when it comes to styling (as we’ll see first in step 6). Another option is to organize your nodes within the .dot file by branch, such as:
digraph {// base nodesreality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]// branch 1attack_1 [ label=&#34;API cache (e.g. via Wayback machine)&#34; ]defense_1 [ label=&#34;#yolosec&#34; ]// branch 2attack_2 [ label=&#34;0day in your electric toothbrush&#34; ]defense_2 [ label=&#34;roll over and play dead&#34; ]} Whatever your preference, just make sure you’re consistent as you continue to build out the tree. Also note that Graphviz does not warn you if there are duplicate nodes, so choose whichever organization option will minimize the probability of you creating duplicates.
Step 3 - Creating the first branch edges

Because I’m impatient and maybe you are, too, let’s work towards visualizing this first branch so we can see something tangible from our efforts thus far. This means we need to create the edges for the first branch.
Edges are the connectors between nodes. Because our decision trees are causal diagrams, we’ll be using the -&gt; edge (i.e. arrowhead edge) to represent a directional flow of action.
In the case of this first branch, we start from the reality node, which connects to the #yolosec state of an API cache existing, which leads to attackers successfully accessing the bucket data (and thus winning). In our .dot file, these edges will be defined like this:
digraph {// base nodesreality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]// branch 1 edgesreality -&gt; attack_1attack_1 -&gt; attack_win} If you want to highlight what a snafu the API cache is, you can even add a “#yolosec” label (via xlabel=) to the edge:
reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]
attack_1 -&gt; attack_win

Step 4 - Visualizing the first branch

Now that we have the necessary nodes and edges for our first branch, let’s visualize it! Spoiler alert: without any styling, it’s not going to look too pretty.
I find a PDF to be the most digestible format for decision trees, since it allows better zooming and panning than an image (like a .png). However, for obvious reasons, I’ll be using .png’s to illustrate the results of each command throughout this post.
To create a PDF of our decision tree thus far, we can use the command: dot -Tpdf sce-tree.dot -o attack-tree.pdf
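If you would rather output an image directly, swapping the output format flag should work the same way – for example, to render a .png:

dot -Tpng sce-tree.dot -o attack-tree.png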
That is super hideous! But we successfully visualized that a reality in which an API cache of our video recordings is available leads to attackers winning with minimal effort (with the #yolosec tag for extra flair).
Step 5 - Filling out another branch

Now it’s time to add another branch. This will involve creating new attack nodes, defense nodes, and edges between them.
Because we learned our lesson on the dangers of #yolosec, we know that we should implement the mitigation of disallowing crawling on our site maps. This will be our first defense node:
digraph {// base nodesreality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; ]// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]// defense nodesdefense_1 [ label=&#34;Disallow crawling on site maps&#34; ]// branch 1 edgesreality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]attack_1 -&gt; attack_win} As discussed in the SCE report, we next need to think about how an attacker will respond to our mitigations (what is known as “belief prompting”). The easiest thing an attacker can do next, if an API cache isn’t available, is searching public buckets to see if the target data is accessible. This will be our second node among our attack nodes:
// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ] We will again assume #yolosec – that our S3 bucket is set to public and thus accessible via search. This will be our third attack node:
// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; ] With all our nodes defined for the second branch, we now need to connect them via edges:
digraph {// base nodesreality [ label=&#34;Reality&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; ]// attack nodesattack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; ]// defense nodesdefense_1 [ label=&#34;Disallow crawling on site maps&#34; ]// branch 1 edgesreality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]attack_1 -&gt; attack_win// branch 2 edgesreality -&gt; defense_1defense_1 -&gt; attack_2attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; ]attack_3 -&gt; attack_win} We can overwrite our prior file with this new branch by running the same command again: dot -Tpdf sce-tree.dot -o attack-tree.pdf
We can now see how the attackers must change their actions when a mitigation is in place. However, it is still ugly af.
Step 6 - Differentiating between attack &amp; defense nodes

While we’ll take care of the hideousness later when we apply real styling, you can probably already tell just from two branches that differentiating between attack and defense nodes can get confusing quickly – especially as we keep adding nodes.
Luckily, Graphviz allows you to define styling specific to a list of nodes. Given we already have separate lists of attack and defense nodes, we can add different colors for each by adding the color attribute at the beginning of the list using node [ color=&#34;#hexgoeshere&#34; ]. This will start as an outline color for now but result in a fill color once we apply more styling in step 8.
Let’s start by applying a pale raspberry color to our attack actions:
// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; ] Then, we can add a pale blue color for our defense actions (matching the common red team vs. blue team parlance):
// defense nodesnode [ color=&#34;#ABD2FA&#34; ]defense_1 [ label=&#34;Disallow crawling on site maps&#34; ] Let’s see how this looks by running our command again:
Astute readers may quibble that the existence of an API cache and the public bucket setting aren’t really attacker actions. Graphviz allows you to style nodes individually, too – so we can apply a grey color to the attack nodes that moreso reflect conditions that facilitate attack success:
// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ] Finally, we can add some colors for our base nodes: a bold strawberry for our Attackers Win ;_; condition and a charcoal one for our Reality node:
reality [ label=&#34;Reality&#34; color=&#34;#2B303A&#34; ]
attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; color=&#34;#DB2955&#34; ]

When we run dot -Tpdf sce-tree.dot -o attack-tree.pdf again, we can now differentiate between the various nodes:
With this super basic styling set up for better readability as we build out the tree, let’s get to the next branches – many of which are more complicated.
Step 7 - Drawing the Owl

To shorten an already lengthy post, we will walk through the third branch but then add the rest of the nodes and edges roughly en masse so we can move onto the styling and ordering steps.
This is a bit of a “draw the owl” moment, but hopefully you can extrapolate from the fully fleshed example branches to the rest – connecting the .dots, as it were – using the complete decision tree in the report as a reference.
However, because I’m not totally heartless, I also created a GitHub repo containing the dot files and graph images for each of the branches so you can see the changes along the way.
Filling out the third branch

This branch starts with our final mitigation that directly descends from the reality branch. Learning our #yolosec lesson yet again, we see that making the S3 bucket private and having some sort of access control on it is a sensible mitigation. This is reflected in our second node among the defense nodes:
// defense nodesnode [ color=&#34;#ABD2FA&#34; ]defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ] What will attackers do in response? Well, they’ll probably try to brute force their way in (usually the lower-cost option) or try to phish credentials of users with access to the bucket. They could also try to perform reconnaissance on our organization’s S3 buckets, but that is a more expensive option which we will reflect on a later branch.
For now, we add the former two options to our list of attack nodes:
// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]attack_4 [ label=&#34;Brute force&#34; ]attack_5 [ label=&#34;Phishing&#34; ] If brute forcing is successful, then attackers can compromise user credentials – and the same with phishing. Logging in with those credentials (“creds”), the attacker can find a subsystem with access to the target bucket data, leading to an attacker win.
However, we can potentially mitigate subsystem access -&gt; bucket access by locking down our web client with creds or access control lists (ACLs). In response, the attacker will need to manually analyze the web client for some sort of access control misconfiguration so they can still access the target S3 bucket – and thus still win.
We can mitigate that attacker response, too, by ensuring we perform all access control server-side. With these easier options thwarted, attackers will need to go back to the phishing drawing board and aim for more privileged credentials (which you can see on branch 4).
Putting this flow of attacker action -&gt; defender response -&gt; attacker response together, we now have these attack and defense nodes:
// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]attack_4 [ label=&#34;Brute force&#34; ]attack_5 [ label=&#34;Phishing&#34; ]attack_6 [ label=&#34;Compromise user credentials&#34; ]attack_7 [ label=&#34;Subsystem with access to bucket data&#34; color=&#34;#C6CCD2&#34; ]attack_8 [ label=&#34;Manually analyze web client for access control misconfig&#34; ]// defense nodesnode [ color=&#34;#ABD2FA&#34; ]defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ]defense_3 [ label=&#34;Lock down web client with creds / ACLs&#34; ]defense_4 [ label=&#34;Perform all access control server-side&#34; ] Now we need to connect them to reflect the “If This, Then That”-style logic of the attacker / defender game at hand. There are a few decision forks here depending on whether or not there is a mitigation. I find it useful to comment // potential mitigation at those forks for clarity, as shown here:
// branch 3 edgesreality -&gt; defense_2defense_2 -&gt; attack_4defense_2 -&gt; attack_5attack_4 -&gt; attack_6attack_5 -&gt; attack_6attack_6 -&gt; attack_7attack_7 -&gt; attack_win// potential mitigation pathattack_7 -&gt; defense_3defense_3 -&gt; attack_8attack_8 -&gt; attack_win// potential mitigation pathattack_8 -&gt; defense_4 defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ] To reflect the fact that our last mitigation (performing access control server-side) sends attackers back up the tree to try a more expensive branch, I’ve styled the last edge as a dashed line with a periwinkle color.
Running our output command again, we can see the three branches together:
Well… it’s technically correct, but organized in a weird way that makes it pretty tricky to follow. Since we have five more branches to add, it doesn’t make sense for us to tweak the ordering yet – that will be covered in step 9.
Adding branches 4 - 7

To keep this post moving, I beseech you to review the .dot files and graph outputs in the GitHub repo for the rest of the branches through the last one (branch 8). There is also commentary within the .dot files for each of the branches skipped over here for your perusal.
Your .dot file ahead of the final branch should look like this:
digraph {// base nodesreality [ label=&#34;Reality&#34; color=&#34;#2B303A&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; color=&#34;#DB2955&#34; ]// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]attack_2 [ label=&#34;AWS public buckets search&#34; ]attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]attack_4 [ label=&#34;Brute force&#34; ]attack_5 [ label=&#34;Phishing&#34; ]attack_6 [ label=&#34;Compromise user credentials&#34; ]attack_7 [ label=&#34;Subsystem with access to bucket data&#34; color=&#34;#C6CCD2&#34; ]attack_8 [ label=&#34;Manually analyze web client for access control misconfig&#34; ]attack_9 [ label=&#34;Compromise admin creds&#34; ]attack_10 [ label=&#34;Intercept 2FA&#34; ]attack_11 [ label=&#34;SSH to an accessible machine&#34; ]attack_12 [ label=&#34;Lateral movement to machine with access to target bucket&#34; ]attack_13 [ label=&#34;Compromise AWS admin creds&#34; ]attack_14 [ label=&#34;Compromise presigned URLs&#34; ]attack_15 [ label=&#34;Compromise URL within N time period&#34; ]attack_16 [ label=&#34;Recon on S3 buckets&#34; ]attack_17 [ label=&#34;Find systems with R/W access to target bucket&#34; ]attack_18 [ label=&#34;Exploit known 3rd party library vulns&#34; ]// defense nodesnode [ color=&#34;#ABD2FA&#34; ]defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ]defense_3 [ label=&#34;Lock down web client with creds / ACLs&#34; ]defense_4 [ label=&#34;Perform all access control server-side&#34; ]defense_5 [ label=&#34;2FA&#34; ]defense_6 [ label=&#34;IP allowlist for SSH&#34; ]defense_7 [ label=&#34;Make URL short lived&#34; ]defense_8 [ label=&#34;Disallow the use of URLs to access buckets&#34; ]defense_9 [ label=&#34;No public system has R/W access (internal only)&#34; ]defense_10 [ label=&#34;3rd party library checking / vuln scanning&#34; ]// branch 1 edgesreality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]attack_1 -&gt; attack_win	// branch 2 edgesreality -&gt; defense_1defense_1 -&gt; attack_2attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]attack_3 -&gt; attack_win// branch 3 edgesreality -&gt; defense_2defense_2 -&gt; attack_4defense_2 -&gt; attack_5attack_4 -&gt; attack_6attack_5 -&gt; attack_6attack_6 -&gt; attack_7attack_7 -&gt; attack_win// potential mitigation pathattack_7 -&gt; defense_3defense_3 -&gt; attack_8attack_8 -&gt; attack_win// potential mitigation pathattack_8 -&gt; defense_4 defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]// branch 4 edgesattack_5 -&gt; attack_9attack_9 -&gt; attack_11 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]// potential mitigation pathattack_9 -&gt; defense_5 defense_5 -&gt; attack_10 attack_10 -&gt; attack_11// potential mitigation pathattack_11 -&gt; defense_6 defense_6 -&gt; attack_12 attack_12 -&gt; attack_win// branch 5 edgesattack_5 -&gt; attack_13attack_13 -&gt; attack_11attack_13 -&gt; defense_5// branch 6 edgesattack_5 -&gt; attack_14attack_14 -&gt; attack_winattack_14 -&gt; attack_15// potential mitigation pathattack_14 -&gt; defense_7 defense_7 -&gt; attack_15 attack_15 -&gt; attack_win// potential mitigation pathattack_15 -&gt; defense_8 // branch 7 edgesdefense_2 -&gt; attack_16defense_5 -&gt; attack_16 [ 
style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]defense_8 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]attack_16 -&gt; attack_17 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]// potential mitigation pathattack_17 -&gt; defense_9 defense_9 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]attack_17 -&gt; attack_18// potential mitigation pathattack_18 -&gt; defense_10} Adding the last branch (branch 8) We’re now on the hardest branch for attackers, the one requiring zero-day (“0day”) exploits or supply chain backdoors. These are expensive, whether in money or time, so attackers will generally use them as a last resort or if the return on investment (ROI) is more favorable – such as when those actions enable the ability to gain access to a bunch of organizations in one fell swoop, avoiding the need to compromise them individually.
Our last mitigation from the seventh branch was vulnerability (“vuln”) scanning, (ideally) eliminating the option for attackers to exploit a known vuln. Thus, attackers will either need to buy 0day or discover and develop 0day themselves. A potential mitigation to 0day exploits is, somewhat obviously, exploit detection and prevention.
Assuming this mitigation actually works6, attackers will be forced to try 0day affecting AWS multitenant systems. In response, defenders could adopt a single tenant AWS hardware security module (HSM) model, which would then force attackers to plant a backdoor in a component in AWS’s supply chain.
For the purposes of illustration, I’ve assumed that the organization creating this decision tree / threat model does not currently employ AWS HSMs. Therefore, the edge leading to that defense node is styled as a dotted line.
This final branch results in the following new nodes and edges:
attack_19 [ label=&#34;Manual discovery of 0day&#34; ]
attack_20 [ label=&#34;Buy 0day&#34; ]
attack_21 [ label=&#34;Exploit vulns&#34; ]
attack_22 [ label=&#34;0day in AWS multitenant systems&#34; ]
attack_23 [ label=&#34;Supply chain compromise (backdoor)&#34; ]

defense_11 [ label=&#34;Exploit prevention/ detection&#34; ]
defense_12 [ label=&#34;Use single tenant AWS HSM&#34; ]

// branch 8 edges
defense_10 -&gt; attack_19
defense_10 -&gt; attack_20
attack_19 -&gt; attack_21
attack_20 -&gt; attack_21
attack_21 -&gt; attack_win

// potential mitigation path
attack_21 -&gt; defense_11
defense_11 -&gt; attack_22
attack_22 -&gt; attack_win

// potential mitigation path
attack_22 -&gt; defense_12 [ style=&#34;dotted&#34; ]
defense_12 -&gt; attack_23
attack_23 -&gt; attack_win

With all our nodes and edges now in place, our graph looks like this:
It is very ugly and difficult to follow. We should proceed to the next steps so that we do not have to stare at this monstrosity further.
Step 8 - Beautifying the graph

Before we tackle the fact that many of the nodes are out of intended order, we should try to make this all look less hideous. Graphviz allows for some limited styling options, which, to be honest, I mostly figured out through guess and check given how sparse I found the docs to be.
Node styling

Let’s start by making the nodes look less like Word 95-era design. I personally chose to replace the outlines with a fill, using the same colors as before.
You can set global node design by inserting node [ * ] at the beginning of your .dot file. To get rid of the outlines, add the shape attribute with the value plaintext and add the style attribute with the value filled, rounded (I possess a fondness for rounded edges):
digraph {// Base Stylingnode [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; ] Because nodes are now filled with the previously-defined colors, we also need to lighten the font color for the reality and attack_win nodes; I chose white:
// base nodesreality [ label=&#34;Reality&#34; fillcolor=&#34;#2B303A&#34; fontcolor=&#34;#ffffff&#34; ]attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34;fillcolor=&#34;#DB2955&#34; fontcolor=&#34;#ffffff&#34; ] Also, who uses Times New Roman anymore? Apparently Graphviz does, since it’s the default font. Let’s change the font to Lato:
// Base Stylingnode [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34;] Finally, we can make the nodes a bit roomier by adding a slight margin around the text within:
// Base Stylingnode [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2] Our graph now looks like this with the new node styling:
It’s already looking more modern! But we can do more.
Edge styling

We can make the edges prettier, too, by changing the #yolosec label font to Lato and by lightening the lines up slightly so they aren’t in stark black:
// Base Stylingnode [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34;]edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ] We can see the results of our slight changes here:
Graph styling

There’s a lot of whitespace in our graph right now, which arguably reduces navigability. Ideally, the graph should be a solidly readable size in the Page Width view in a PDF reader. So, let’s reduce some of the white space.
We can set styling for the whole graph by inserting it above the node [ * ] and edge [ * ] base styling we added above. Let’s start by reducing the horizontal distance between nodes via the nodesep attribute and the vertical distance via the ranksep attribute:
// Base Stylingnodesep=&#34;0.2&#34;;ranksep=&#34;0.4&#34;;node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2]edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ] For this next styling option, I’m going to level with y’all: this combination resulted in the best visual outcomes after a lot of guess and check, but I’m still not 100% what they do. In any case, setting splines=true and overlap=false seems to generate the cleanest visualization:
// Base Stylingsplines=true;overlap=false;nodesep=&#34;0.2&#34;;ranksep=&#34;0.4&#34;;node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2]edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ] I also added in the attribute specifying the graph should be visualized from top to bottom, even though it’s the default (I am risk averse):
rankdir=&#34;TB&#34;;

Last, but certainly not least, I titled the graph using the label attribute and set the label location to the top with labelloc. With all of this incorporated, the base styling section in the .dot file now looks like this:
// Base Stylingrankdir=&#34;TB&#34;;splines=true;overlap=false;nodesep=&#34;0.2&#34;;ranksep=&#34;0.4&#34;;label=&#34;Attack Tree for S3 Bucket with Video Recordings&#34;;labelloc=&#34;t&#34;;fontname=&#34;Lato&#34;;node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; margin=0.2 fontname=&#34;Lato&#34; ]edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ] With the new styling complete, our graph looks much more visually appealing:
However, it’s still a little confusing due to the errant default node placement by Graphviz. We’ll fix this in the next step.
Step 9 - Fixing the ordering

One of the benefits of the decision tree is to visualize a threat model in order of easiest / lowest cost attacker path to hardest / highest cost attacker path (generally from left to right). By default, Graphviz does not respect the order in which we’ve written our nodes and edges, necessitating some fixes.
I approached this necessary re-ordering by creating a cluster for each group of nodes that should be equal in hierarchy. In Graphviz, a cluster is encoded as a subgraph, which can be used for a variety of purposes beyond the aesthetic ordering one in this post.
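As a hypothetical sketch of what those other purposes can look like (not part of our tree): if a subgraph’s name starts with the cluster prefix, the dot layout draws a labeled box around its nodes, which can be handy for visually grouping related steps.

subgraph cluster_credential_theft {
	label=&#34;Credential theft&#34;;	// the box drawn around these nodes gets this caption
	attack_5;
	attack_6;
}

Our tree, though, only needs the rank=same behavior, so the subgraphs below skip the cluster prefix.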
The three clusters in our tree diagram are:
1. The initial nodes after the reality node: the API cache, disallowing crawling on site maps, and private buckets
2. The attack nodes after auth is required: brute force, phishing, and recon on s3 buckets
3. The subsequent attack nodes after phishing: compromise user creds, admin creds, AWS admin creds, or pre-signed URLs

We can encode these clusters as subgraphs with the attribute rank=same (to weight the nodes equally in the hierarchy) along with the list of relevant nodes in the cluster:
// Subgraphs / Clusterssubgraph initialstates {rank=same;attack_1;defense_1;defense_2;}subgraph authrequired {rank=same;attack_4;attack_5;attack_16;}subgraph phishcluster {rank=same;attack_6;attack_9;attack_13;attack_14;} I would like to spare y’all the vexation I experienced when Graphviz didn’t respect the order in which I listed the nodes within a cluster. For instance, instead of showing attack_4 as the leftmost node in the authrequired cluster and attack_16 as the rightmost, Graphviz seemed to prefer to use a methodology reflected by ¯\_(ツ)_/¯.
What seems to fix this ordering issue is creating invisible edges that enforce the left to right ordering. For our graph, the fix is specifically found in enforcing the correct order in the phishcluster subgraph:
attack_6 -&gt; attack_9 -&gt; attack_13 -&gt; attack_14 [ style=&#34;invis&#34; ]

Aren’t computers great? In any case, our graph now accurately visualizes the ordering of our decision tree:
Step 10 - Tweaking the design

There are other tweaks we can make to make this graph (and the .dot file itself!) more digestible.
I chose to add line breaks for particularly long node labels, such as label=&#34;API cache\n(e.g. Wayback\nMachine)&#34;, and definitely recommend it for your own tree. I also added in more comments to the .dot file so that someone else reading it could better understand what is going on.
With these last tweaks, this is how our final .dot file looks:7
digraph {// Base Stylingrankdir=&#34;TB&#34;;splines=true;overlap=false;nodesep=&#34;0.2&#34;;ranksep=&#34;0.4&#34;;label=&#34;Attack Tree for S3 Bucket with Video Recordings&#34;;labelloc=&#34;t&#34;;fontname=&#34;Lato&#34;;node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2 ]edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]// List of Nodes// base nodesreality [ label=&#34;Reality&#34; fillcolor=&#34;#2B303A&#34; fontcolor=&#34;#ffffff&#34; ]attack_win [ label=&#34;Access video\nrecordings in\nS3 bucket\n(attackers win)&#34; fillcolor=&#34;#DB2955&#34; fontcolor=&#34;#ffffff&#34; ]// attack nodesnode [ color=&#34;#ED96AC&#34; ]attack_1 [ label=&#34;API cache\n(e.g. Wayback\nMachine)&#34; color=&#34;#C6CCD2&#34; ]attack_2 [ label=&#34;AWS public\nbuckets search&#34; ]attack_3 [ label=&#34;S3 bucket\nset to public&#34; color=&#34;#C6CCD2&#34; ]attack_4 [ label=&#34;Brute force&#34; ]attack_5 [ label=&#34;Phishing&#34; ]attack_6 [ label=&#34;Compromise\nuser credentials&#34; ]attack_7 [ label=&#34;Subsystem with\naccess to\nbucket data&#34; color=&#34;#C6CCD2&#34; ]attack_8 [ label=&#34;Manually analyze\nweb client for access\ncontrol misconfig&#34; ]attack_9 [ label=&#34;Compromise\nadmin creds&#34; ]attack_10 [ label=&#34;Intercept 2FA&#34; ]attack_11 [ label=&#34;SSH to an\naccessible\nmachine&#34; ]attack_12 [ label=&#34;Lateral movement to\nmachine with access\nto target bucket&#34; ]attack_13 [ label=&#34;Compromise\nAWS admin creds&#34; ]attack_14 [ label=&#34;Compromise\npresigned URLs&#34; ]attack_15 [ label=&#34;Compromise\nURL within N\ntime period&#34; ]attack_16 [ label=&#34;Recon on S3 buckets&#34; ]attack_17 [ label=&#34;Find systems with\nR/W access to\ntarget bucket&#34; ]attack_18 [ label=&#34;Exploit known 3rd\nparty library vulns&#34; ]attack_19 [ label=&#34;Manual discovery\nof 0day&#34; ]attack_20 [ label=&#34;Buy 0day&#34; ]attack_21 [ label=&#34;Exploit vulns&#34; ]attack_22 [ label=&#34;0day in AWS\nmultitenant systems&#34; ]attack_23 [ label=&#34;Supply chain\ncompromise\n(backdoor)&#34; ]// defense nodesnode [ color=&#34;#ABD2FA&#34; ]defense_1 [ label=&#34;Disallow\ncrawling\non site maps&#34; ]defense_2 [ label=&#34;Auth required / ACLs\n(private bucket)&#34; ]defense_3 [ label=&#34;Lock down\nweb client with\ncreds / ACLs&#34; ]defense_4 [ label=&#34;Perform all access\ncontrol server-side&#34; ]defense_5 [ label=&#34;2FA&#34; ]defense_6 [ label=&#34;IP allowlist for SSH&#34; ]defense_7 [ label=&#34;Make URL\nshort lived&#34; ]defense_8 [ label=&#34;Disallow the use\nof URLs to\naccess buckets&#34; ]defense_9 [ label=&#34;No public system\nhas R/W access\n(internal only)&#34; ]defense_10 [ label=&#34;3rd party library\nchecking / vuln\nscanning&#34; ]defense_11 [ label=&#34;Exploit prevention\n/ detection&#34; ]defense_12 [ label=&#34;Use single tenant\nAWS HSM&#34; ]// List of Edges// branch 1 edges// this starts from the reality node and connects with the first &#34;attack&#34;,// which is really just taking advantage of #yolosec (big oof)reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]attack_1 -&gt; attack_win	// branch 2 edges// this connects the reality node to the first mitigation, // which helps avoid the #yolosec path from branch 1reality -&gt; defense_1defense_1 -&gt; attack_2attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]attack_3 -&gt; attack_win// branch 3 edges// 
this connects the reality node to another mitigation,// which helps avoid the #yolosec path from branch 2reality -&gt; defense_2defense_2 -&gt; attack_4defense_2 -&gt; attack_5attack_4 -&gt; attack_6attack_5 -&gt; attack_6attack_6 -&gt; attack_7attack_7 -&gt; attack_win// potential mitigation pathattack_7 -&gt; defense_3defense_3 -&gt; attack_8attack_8 -&gt; attack_win// potential mitigation pathattack_8 -&gt; defense_4 defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]// branch 4 edges// this starts from the last mitigation loop vs. the reality nodeattack_5 -&gt; attack_9attack_9 -&gt; attack_11 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]// potential mitigation pathattack_9 -&gt; defense_5 defense_5 -&gt; attack_10 attack_10 -&gt; attack_11// potential mitigation pathattack_11 -&gt; defense_6 defense_6 -&gt; attack_12 attack_12 -&gt; attack_win// branch 5 edges// this also represents a branch from the prior mitigation loop// but it is more difficult than branch 4, hence comes after// the new attack step allows attackers to skip some steps on branch 4// so it links back to branch 4, whose edges are already definedattack_5 -&gt; attack_13attack_13 -&gt; attack_11attack_13 -&gt; defense_5// branch 6 edges// depending on the mitigations, the initial node allows for different outcomes// this also represents a branch from the prior mitigation loop// it is more difficult than branch 4 and branch 5, hence comes afterattack_5 -&gt; attack_14attack_14 -&gt; attack_winattack_14 -&gt; attack_15// potential mitigation pathattack_14 -&gt; defense_7 defense_7 -&gt; attack_15 attack_15 -&gt; attack_win// potential mitigation pathattack_15 -&gt; defense_8 // branch 7 edges// a new loop is born!// the first edges tie prior mitigations to the new attack stepdefense_2 -&gt; attack_16defense_5 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]defense_8 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]attack_16 -&gt; attack_17 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]// potential mitigation pathattack_17 -&gt; defense_9 defense_9 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]attack_17 -&gt; attack_18// potential mitigation pathattack_18 -&gt; defense_10// branch 8 edges// we&#39;ve reached the last path!// this is the most expensive one for attackers.// these attacks are definitely uncommon...// ...because attackers will be cheap / lazy if they can be.
// these edges start from the last mitigation from branch 7defense_10 -&gt; attack_19defense_10 -&gt; attack_20attack_19 -&gt; attack_21attack_20 -&gt; attack_21attack_21 -&gt; attack_win// potential mitigation pathattack_21 -&gt; defense_11 defense_11 -&gt; attack_22 attack_22 -&gt; attack_win // potential mitigation path// for the purposes of illustration, this path represents a mitigation// that isn&#39;t actually implemented yet -- hence a dotted edgeattack_22 -&gt; defense_12 [ style=&#34;dotted&#34; ]defense_12 -&gt; attack_23 attack_23 -&gt; attack_win// Subgraphs / Clusters// these clusters enforce the correct hierarchiessubgraph initialstates {rank=same;attack_1;defense_1;defense_2;}subgraph authrequired {rank=same;attack_4;attack_5;attack_16;}subgraph phishcluster {rank=same;attack_6;attack_9;attack_13;attack_14;rankdir=LR;}// these invisible edges are to enforce the correct left-to-right order // based on the level of attack difficultyattack_6 -&gt; attack_9 -&gt; attack_13 -&gt; attack_14 [ style=&#34;invis&#34; ]} Conclusion After these ten steps, we’ve successfully recreated the decision tree from the SCE report and optimized it for readability, too:
While it may feel daunting to create your first decision tree in this manner, the good news is you now have a base template with styling that you can use to threat model other critical assets.
If you try this out yourself or for your own organization, I welcome any and all feedback on how the .dot config or process itself can be improved. Security chaos engineering is a blossoming discipline bearing real potential to make infosec finally not suck, so we should help each other level up however we can.
Thank you shoutout to Team Bad &lt;3
One notable benefit of this post is that it helps you avoid using Visio, which feels like the type of tool a petty Greek god would create just to torture a human who slighted their ego. ↩︎
There is also arguably an incentive to avoid obviously bad things happening so that the security team cannot seize upon the crisis to impose heavier change or release processes, as security is infamously wont to do. ↩︎
Yes, I am aware of the SolarWinds breach. Discussing the attacker math behind it is a blog post for another time. Suffice to say, the average criminal group is much less motivated to employ a supply chain compromise than a nation state – especially a nation state with a notoriously lower bar for stealthiness than other nation states. ↩︎
This post assumes that reality can at least be approximately objectively defined. Whether or not that is an appropriate assumption is a topic I would relish discussing IRL over a matcha oatmilk latte once the plague time is over. ↩︎
To start out, you can also define another possible end state of “Attackers Lose.” A sufficiently incentivized attacker will escalate resource expenditure as needed in order to reach their goal, so I think this is generally an unrealistic end state. However, I also argue that for many organizations, it’s a relatively sane threat model to accept the risk of attackers throwing 0day at you. If you’ve made compromising your business-critical assets so difficult that attackers must resort to 0day, you’ve done quite a lot right in your security program. And, again, it suggests that the attacker is extremely motivated to compromise you, so the marginal benefit of defending against 0day or even costlier attacker actions is pretty poor. In contrast, the marginal benefit of something like two-factor authentication is resoundingly high. ↩︎
Be skeptical whenever a vendor is claiming to detect 0day, especially if the words “AI” or “deep learning” are in the same sentence. ↩︎
I realized at the end I forgot to describe adding the bold strawberry font color to the “#yolosec” labels. I am hoping that you all are smart and can leverage the full .dot file to figure out how to do it yourself. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>In the recently published <a href="https://www.kellyshortridge.com/book.html">&ldquo;Security Chaos Engineering&rdquo; e-book</a>, one of the chapters I wrote covers attacker math and the power of decision trees to guide more pragmatic threat modelling. This post will walk through creating the example decision tree from the e-book using <a href="https://graphviz.org/">Graphviz</a> and a .DOT file.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>Using this as a reference, you can extrapolate this process into a pattern to inform saner security prioritization during the design phase of the product lifecycle. I won&rsquo;t cover how to populate your own decision tree in this post since that is already covered in the e-book, which is immediately available at your fingertips for the delectable price of free.</p>
<p>As an apéritif, here&rsquo;s the end result towards which we&rsquo;ll be building:</p>
<p><img src="/blog/img/graphviz/attack-tree-15.png" alt="The final decision tree for threat modeling an S3 bucket containing sensitive data"></p>
<h2 id="a-brief-intro-on-graphviz">A brief intro on Graphviz</h2>
<p>As the name suggests, Graphviz is a graph visualization tool. It is open source, which was especially compelling as I tried out various graphing tools for the decision tree use case because I am a ho for not spending money.</p>
<p>Graphviz takes descriptions of graphs in text form and converts them into a visual (like an image or PDF). I found that the default styling options for Graphviz can quickly look like a hybrid of the infamous <a href="https://twitter.com/defensecharts">defense charts</a> or the <a href="https://knowyourmeme.com/memes/graphic-design-is-my-passion">&ldquo;graphic design is my passion&rdquo; meme</a>. However, these style deficiencies are balanced by the ease of editing the relationships represented in the graph &ndash; an issue I previously found tedious when using GUI-based tools.</p>
<p>The textual descriptions of the graph are written using the <a href="https://graphviz.org/doc/info/lang.html">DOT language</a> (and thereby saved as a .DOT file). I personally found it quite intuitive, though, as always, your mileage may vary.</p>
<h2 id="building-the-decision-tree">Building the decision tree</h2>
<p>For those of you who haven&rsquo;t read the report yet (reminder: <a href="https://www.kellyshortridge.com/book.html">it&rsquo;s free</a>), let&rsquo;s set some background context on this example.</p>
<p>Organizations often store important content in cloud storage buckets. In this example, our imaginary organization wants to store customer video recordings in an S3 bucket. As the product and engineering teams think through the design of this project, they want to avoid bad things happening to the project that could cost money (whether via downtime or compliance fines) or time (which is also money)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>.</p>
<p>The (rather obvious) way attackers win is by successfully accessing the video recordings in the S3 bucket. Thus, the decision tree shows the potential paths attackers can take, including attacker actions performed in response to defensive actions or mitigations, to reach the goal of accessing that S3 bucket.</p>
<p>The branches of the tree are oriented from the lowest cost paths to attackers (on the left) to the most expensive attacker paths (on the right). The lowest cost path for attackers is generally the one with zero defensive mitigations in place, what I <a href="/blog/posts/on-yolosec-and-fomosec/">affectionately call &ldquo;yolosec.&rdquo;</a> The highest cost path for attackers usually involves finding and exploiting zero day vulnerabilities or performing upstream supply chain attacks<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>.</p>
<p>If you want to understand more about the decision tree architecture, I entreat you yet again to <a href="https://www.kellyshortridge.com/book.html">download the Security Chaos Engineering report</a>.</p>
<h2 id="step-1---defining-the-basic-nodes">Step 1 - Defining the basic nodes</h2>
<p>The most basic security decision tree will have two common states: Reality (the starting node from which all others descend)<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> and Attackers Win (the ending node reflecting attackers accomplishing their goal).<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup></p>
<p>All the branches on the tree &ndash; reflecting different cost paths &ndash; will end up connecting the Reality and Attackers Win nodes in some fashion.</p>
<p>Let&rsquo;s define these states in a .dot file (I named mine <code>sce-tree.dot</code>). We&rsquo;re going to be using a <em>digraph</em>, which is short for &ldquo;directed graph.&rdquo; Directed graphs show one-way relationships, whereas undirected graphs show symmetrical relationships.</p>
<p>Your initial code thus looks like:</p>
<pre tabindex="0"><code>digraph {
	reality
	attack_win 
}
</code></pre><p><code>reality</code> and <code>attack_win</code> are our first two nodes. We don&rsquo;t have any attributes for them yet (styling will <a href="#step-8---beautifying-the-graph">come later</a>), so it looks pretty plain.</p>
<p>In this example, we know that the asset we&rsquo;re threat modelling is the S3 bucket with video recordings, so we can apply a label to the <code>attack_win</code> node saying as much. That way, when the graph is visualized, the node will read as &ldquo;Access video recordings in S3 bucket&rdquo; rather than &ldquo;attack_win&rdquo;.</p>
<p>To create this label, we add brackets after the relevant node to contain the attribute <code>label=&quot;Access video recordings in S3 bucket&quot;</code>. There are a <a href="https://graphviz.org/doc/info/attrs.html">bunch of attributes</a> you can assign to nodes (like styling), but we&rsquo;ll cover more of those later.</p>
<p>For now, the foundation for our threat model decision tree looks like this:</p>
<pre tabindex="0"><code>digraph {
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]
}
</code></pre><h2 id="step-2---creating-the-first-attack-node">Step 2 - Creating the first attack node</h2>
<p>The first branch in the decision tree should represent the lowest cost attack path. In the example from <a href="https://www.kellyshortridge.com/book.html">the SCE report</a>, the branch barely involves an attack &ndash; it assumes #yolosec, representing a reality in which you&rsquo;ve allowed crawling on your sitemap, enabling cache APIs (like the Wayback machine) to create caches of the bucket&rsquo;s contents.</p>
<p>This means our first attack node is actually more of a state of being. The only attacker action is to access this API cache, for which we will create a new node:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]

	// attack nodes
	attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]
}
</code></pre><p>You have a few options for how you want to define the nodes in your decision tree. Above, I defined the node as <code>attack_1</code>, since I personally find it easier to keep track of attack (and defense) actions sequentially. However, you can also define the nodes more explicitly, such as <code>api_cache</code>, like so:</p>
<pre tabindex="0"><code>	// attack nodes
	api_cache [ label=&#34;API cache (e.g. via Wayback machine)&#34; ]
	toothbrush_0day [ label=&#34;0day in your electric toothbrush&#34; ]
	planet_hax [ label=&#34;Hack the planet!&#34;]
}
</code></pre><p>You can also use letters, like <code>A</code>, <code>B</code>, <code>C</code>, etc., but I personally find it crude and harder to follow relative to the more descriptive options as the tree gets more complex.</p>
<p>You&rsquo;ll also note I&rsquo;ve commented the heading <code>// attack nodes</code>. I find it easier to separate out attack vs. defense nodes, especially when it comes to styling (as we&rsquo;ll see first in <a href="#step-6---differentiating-between-attack--defense-nodes">step 6</a>). Another option is to organize your nodes within the .dot file by branch, such as:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]

	// branch 1
	attack_1 [ label=&#34;API cache (e.g. via Wayback machine)&#34; ]
	defense_1 [ label=&#34;#yolosec&#34; ]

	// branch 2
	attack_2 [ label=&#34;0day in your electric toothbrush&#34; ]
	defense_2 [ label=&#34;roll over and play dead&#34; ]
}
</code></pre><p>Whatever your preference, just make sure you&rsquo;re consistent as you continue to build out the tree. Also note that Graphviz does <em>not</em> warn you if there are duplicate nodes, so choose whichever organization option will minimize the probability of you creating duplicates.</p>
<h2 id="step-3---creating-the-first-branch-edges">Step 3 - Creating the first branch edges</h2>
<p>Because I&rsquo;m impatient and maybe you are, too, let&rsquo;s work towards visualizing this first branch so we can see something tangible from our efforts thus far. This means we need to create the <em>edges</em> for the first branch.</p>
<p>Edges are the connectors between nodes. Because our decision trees are causal diagrams, we&rsquo;ll be using the <code>-&gt;</code> edge (i.e. arrowhead edge) to represent a directional flow of action.</p>
<p>In the case of this first branch, we start from the reality node, which connects to the #yolosec state of an API cache existing, which leads to attackers successfully accessing the bucket data (and thus winning). In our .dot file, these edges will be defined like this:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket&#34; ]

	// attack nodes
	attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]

	// branch 1 edges
	reality -&gt; attack_1
	attack_1 -&gt; attack_win
}
</code></pre><p>If you want to highlight what a snafu the API cache is, you can even add a &ldquo;#yolosec&rdquo; label (via <code>xlabel=</code>) to the edge:</p>
<pre tabindex="0"><code>reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]
attack_1 -&gt; attack_win
</code></pre><h2 id="step-4---visualizing-the-first-branch">Step 4 - Visualizing the first branch</h2>
<p>Now that we have the necessary nodes and edges for our first branch, let&rsquo;s visualize it! Spoiler alert: without any styling, it&rsquo;s not going to look too pretty.</p>
<p>I find a PDF to be the most digestible format for decision trees, since it allows better zooming and panning than an image (like a .png). However, for obvious reasons, I&rsquo;ll be using .png&rsquo;s to illustrate the results of each command throughout this post.</p>
<p>To create a PDF of our decision tree thus far, we can use the command:
<code>dot -Tpdf sce-tree.dot -o attack-tree.pdf</code></p>
<p><img src="/blog/img/graphviz/attack-tree-01.png" alt="First branch of our decision tree"></p>
<p>That is super hideous! But we successfully visualized that a reality in which an API cache of our video recordings is available leads to attackers winning with minimal effort (with the #yolosec tag for extra flair).</p>
<h2 id="step-5---filling-out-another-branch">Step 5 - Filling out another branch</h2>
<p>Now it&rsquo;s time to add another branch. This will involve creating new attack nodes, defense nodes, and edges between them.</p>
<p>Because we learned our lesson on the dangers of #yolosec, we know that we should implement the mitigation of disallowing crawling on our site maps. This will be our first defense node:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; ]

	// attack nodes
	attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]

	// defense nodes
	defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]

	// branch 1 edges
	reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]
	attack_1 -&gt; attack_win
}
</code></pre><p>As discussed in <a href="https://www.kellyshortridge.com/book.html">the SCE report</a>, we next need to think about how an attacker will respond to our mitigations (what is known as &ldquo;belief prompting&rdquo;). The easiest thing an attacker can do next, if an API cache isn&rsquo;t available, is searching public buckets to see if the target data is accessible. This will be our second node among our attack nodes:</p>
<pre tabindex="0"><code>// attack nodes
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
</code></pre><p>We will again assume #yolosec &ndash; that our S3 bucket is set to public and thus accessible via search. This will be our third attack node:</p>
<pre tabindex="0"><code>// attack nodes
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
attack_3 [ label=&#34;S3 bucket set to public&#34; ]
</code></pre><p>With all our nodes defined for the second branch, we now need to connect them via edges:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; ]

	// attack nodes
	attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]
	attack_2 [ label=&#34;AWS public buckets search&#34; ]
	attack_3 [ label=&#34;S3 bucket set to public&#34; ]

	// defense nodes
	defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]

	// branch 1 edges
	reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; ]
	attack_1 -&gt; attack_win

	// branch 2 edges
	reality -&gt; defense_1
	defense_1 -&gt; attack_2
	attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; ]
	attack_3 -&gt; attack_win
}
</code></pre><p>We can overwrite our prior file with this new branch by running the same command again:
<code>dot -Tpdf sce-tree.dot -o attack-tree.pdf</code></p>
<p><img src="/blog/img/graphviz/attack-tree-02.png" alt="First and second branches of our decision tree"></p>
<p>We can now see how the attackers must change their actions when a mitigation is in place. However, it is still ugly af.</p>
<h2 id="step-6---differentiating-between-attack--defense-nodes">Step 6 - Differentiating between attack &amp; defense nodes</h2>
<p>While we&rsquo;ll take care of the hideousness later when we apply real styling, you can probably already tell just from two branches that differentiating between attack and defense nodes can get confusing quickly &ndash; especially as we keep adding nodes.</p>
<p>Luckily, Graphviz allows you to define styling specific to a list of nodes. Given we already have separate lists of attack and defense nodes, we can add different colors for each by adding the <code>color</code> attribute at the beginning of the list using <code>node [ color=&quot;#hexgoeshere&quot; ]</code>. This will start as an outline color for now but result in a fill color once we apply more styling in <a href="#step-8---beautifying-the-graph">step 8</a>.</p>
<p>Let&rsquo;s start by applying a pale raspberry color to our attack actions:</p>
<pre tabindex="0"><code>// attack nodes
node [ color=&#34;#ED96AC&#34; ]
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
attack_3 [ label=&#34;S3 bucket set to public&#34; ]
</code></pre><p>Then, we can add a pale blue color for our defense actions (matching the common red team vs. blue team parlance):</p>
<pre tabindex="0"><code>// defense nodes
node [ color=&#34;#ABD2FA&#34; ]
defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]
</code></pre><p>Let&rsquo;s see how this looks by running our command again:</p>
<p><img src="/blog/img/graphviz/attack-tree-03.png" alt="First and second branches of our decision tree with red and blue color coding"></p>
<p>Astute readers may quibble that the existence of an API cache and the public bucket setting aren&rsquo;t <em>really</em> attacker actions. Graphviz allows you to style nodes individually, too &ndash; so we can apply a grey color to the attack nodes that moreso reflect conditions that facilitate attack success:</p>
<pre tabindex="0"><code>// attack nodes
node [ color=&#34;#ED96AC&#34; ]
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]
</code></pre><p>Finally, we can add some colors for our base nodes: a bold strawberry for our Attackers Win ;_; condition and a charcoal one for our Reality node:</p>
<pre tabindex="0"><code>reality [ label=&#34;Reality&#34; color=&#34;#2B303A&#34; ]
attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; color=&#34;#DB2955&#34; ]
</code></pre><p>When we run <code>dot -Tpdf sce-tree.dot -o attack-tree.pdf</code> again, we can now differentiate between the various nodes:</p>
<p><img src="/blog/img/graphviz/attack-tree-04.png" alt="First and second branches of our decision tree with color coding"></p>
<p>With this super basic styling set up for better readability as we build out the tree, let&rsquo;s get to the next branches &ndash; many of which are more complicated.</p>
<h2 id="step-7---drawing-the-owl">Step 7 - Drawing the Owl</h2>
<p>To shorten an already lengthy post, we will walk through the third branch but then add the rest of the nodes and edges roughly en masse so we can move onto the <a href="#step-8---beautifying-the-graph">styling</a> and <a href="#step-9---fixing-the-ordering">ordering</a> steps.</p>
<p>This is a bit of a <a href="https://knowyourmeme.com/memes/how-to-draw-an-owl">&ldquo;draw the owl&rdquo;</a> moment, but hopefully you can extrapolate from the fully fleshed example branches to the rest &ndash; connecting the .dots, as it were &ndash; using the complete decision tree <a href="https://www.kellyshortridge.com/book.html">in the report</a> as a reference.</p>
<p>However, because I&rsquo;m not totally heartless, I also created <a href="https://github.com/swagitda/security-decision-trees-graphviz">a GitHub repo</a> containing the <a href="https://github.com/swagitda/security-decision-trees-graphviz/tree/main/branch-dot-files">dot files</a> and <a href="https://github.com/swagitda/security-decision-trees-graphviz/tree/main/branch-images">graph images</a> for each of the branches so you can see the changes along the way.</p>
<h3 id="filling-out-the-third-branch">Filling out the third branch</h3>
<p>This branch starts with our final mitigation that directly descends from the reality branch. Learning our #yolosec lesson yet again, we see that making the S3 bucket private and having some sort of access control on it is a sensible mitigation. This is reflected in our second node among the defense nodes:</p>
<pre tabindex="0"><code>// defense nodes
node [ color=&#34;#ABD2FA&#34; ]
defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]
defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ]  
</code></pre><p>What will attackers do in response? Well, they&rsquo;ll probably try to <a href="https://en.wikipedia.org/wiki/Brute-force_attack">brute force</a> their way in (usually the lower-cost option) or try to <a href="https://en.wikipedia.org/wiki/Phishing">phish</a> credentials of users with access to the bucket. They could also try to perform reconnaissance on our organization&rsquo;s S3 buckets, but that is a more expensive option that we will capture in a later branch.</p>
<p>For now, we add the former two options to our list of attack nodes:</p>
<pre tabindex="0"><code>// attack nodes
node [ color=&#34;#ED96AC&#34; ]
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]
attack_4 [ label=&#34;Brute force&#34; ]
attack_5 [ label=&#34;Phishing&#34; ]
</code></pre><p>If brute forcing is successful, then attackers can compromise user credentials &ndash; and the same with phishing. Logging in with those credentials (&ldquo;creds&rdquo;), the attacker can find a subsystem with access to the target bucket data, leading to an attacker win.</p>
<p>However, we can potentially mitigate subsystem access -&gt; bucket access by locking down our web client with creds or access control lists (ACLs). In response, the attacker will need to manually analyze the web client for some sort of access control misconfiguration so they can still access the target S3 bucket &ndash; and thus still win.</p>
<p>We can mitigate that attacker response, too, by ensuring we perform all access control server-side. With these easier options thwarted, attackers will need to go back to the phishing drawing board and aim for more privileged credentials (which you can see on <a href="https://github.com/swagitda/security-decision-trees-graphviz/blob/main/branch-dot-files/04-branch.dot">branch 4</a>).</p>
<p>Putting this flow of attacker action -&gt; defender response -&gt; attacker response together, we now have these attack and defense nodes:</p>
<pre tabindex="0"><code>// attack nodes
node [ color=&#34;#ED96AC&#34; ]
attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]
attack_2 [ label=&#34;AWS public buckets search&#34; ]
attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]
attack_4 [ label=&#34;Brute force&#34; ]
attack_5 [ label=&#34;Phishing&#34; ]
attack_6 [ label=&#34;Compromise user credentials&#34; ]
attack_7 [ label=&#34;Subsystem with access to bucket data&#34; color=&#34;#C6CCD2&#34; ]
attack_8 [ label=&#34;Manually analyze web client for access control misconfig&#34; ]

// defense nodes
node [ color=&#34;#ABD2FA&#34; ]
defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]
defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ]
defense_3 [ label=&#34;Lock down web client with creds / ACLs&#34; ]
defense_4 [ label=&#34;Perform all access control server-side&#34; ]
</code></pre><p>Now we need to connect them to reflect the &ldquo;If This, Then That&rdquo;-style logic of the attacker / defender game at hand. There are a few decision forks here depending on whether or not there is a mitigation. I find it useful to comment <code>// potential mitigation path</code> at those forks for clarity, as shown here:</p>
<pre tabindex="0"><code>// branch 3 edges
reality -&gt; defense_2
defense_2 -&gt; attack_4
defense_2 -&gt; attack_5
attack_4 -&gt; attack_6
attack_5 -&gt; attack_6
attack_6 -&gt; attack_7
attack_7 -&gt; attack_win
// potential mitigation path
attack_7 -&gt; defense_3
defense_3 -&gt; attack_8
attack_8 -&gt; attack_win
// potential mitigation path
attack_8 -&gt; defense_4 
defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
</code></pre><p>To reflect the fact that our last mitigation (performing access control server-side) sends attackers back up the tree to try a more expensive branch, I&rsquo;ve styled the last edge as a <code>dashed</code> line with a periwinkle color.</p>
<p>Running our output command again, we can see the three branches together:</p>
<p><img src="/blog/img/graphviz/attack-tree-05.png" alt="Three branches of our decision tree"></p>
<p>Well&hellip; it&rsquo;s technically correct, but organized in a weird way that makes it pretty tricky to follow. Since we have five more branches to add, it doesn&rsquo;t make sense for us to tweak the ordering yet &ndash; that will be covered in <a href="#step-9---fixing-the-ordering">step 9</a>.</p>
<h3 id="adding-branches-4---7">Adding branches 4 - 7</h3>
<p>To keep this post moving, I beseech you to review the .dot files and graph outputs <a href="https://github.com/swagitda/security-decision-trees-graphviz">in the GitHub repo</a> for branches 4 through 7 (the repo covers every branch through the last one, branch 8). The .dot files for the branches skipped over here also contain commentary for your perusal.</p>
<p>Your .dot file ahead of the final branch should <a href="https://github.com/swagitda/security-decision-trees-graphviz/drawing-the-owl-dot.dot">look like this</a>:</p>
<pre tabindex="0"><code>digraph {
	// base nodes
	reality [ label=&#34;Reality&#34; color=&#34;#2B303A&#34; ]
	attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34; color=&#34;#DB2955&#34; ]

  	// attack nodes
  	node [ color=&#34;#ED96AC&#34; ]
	attack_1 [ label=&#34;API cache (e.g. Wayback Machine)&#34; color=&#34;#C6CCD2&#34; ]
	attack_2 [ label=&#34;AWS public buckets search&#34; ]
	attack_3 [ label=&#34;S3 bucket set to public&#34; color=&#34;#C6CCD2&#34; ]
	attack_4 [ label=&#34;Brute force&#34; ]
	attack_5 [ label=&#34;Phishing&#34; ]
	attack_6 [ label=&#34;Compromise user credentials&#34; ]
	attack_7 [ label=&#34;Subsystem with access to bucket data&#34; color=&#34;#C6CCD2&#34; ]
	attack_8 [ label=&#34;Manually analyze web client for access control misconfig&#34; ]
	attack_9 [ label=&#34;Compromise admin creds&#34; ]
	attack_10 [ label=&#34;Intercept 2FA&#34; ]
	attack_11 [ label=&#34;SSH to an accessible machine&#34; ]
	attack_12 [ label=&#34;Lateral movement to machine with access to target bucket&#34; ]
	attack_13 [ label=&#34;Compromise AWS admin creds&#34; ]
	attack_14 [ label=&#34;Compromise presigned URLs&#34; ]
	attack_15 [ label=&#34;Compromise URL within N time period&#34; ]
	attack_16 [ label=&#34;Recon on S3 buckets&#34; ]
	attack_17 [ label=&#34;Find systems with R/W access to target bucket&#34; ]
	attack_18 [ label=&#34;Exploit known 3rd party library vulns&#34; ]

	// defense nodes
	node [ color=&#34;#ABD2FA&#34; ]
	defense_1 [ label=&#34;Disallow crawling on site maps&#34; ]
	defense_2 [ label=&#34;Auth required / ACLs (private bucket)&#34; ]
	defense_3 [ label=&#34;Lock down web client with creds / ACLs&#34; ]
	defense_4 [ label=&#34;Perform all access control server-side&#34; ]
	defense_5 [ label=&#34;2FA&#34; ]
	defense_6 [ label=&#34;IP allowlist for SSH&#34; ]
	defense_7 [ label=&#34;Make URL short lived&#34; ]
	defense_8 [ label=&#34;Disallow the use of URLs to access buckets&#34; ]
	defense_9 [ label=&#34;No public system has R/W access (internal only)&#34; ]
	defense_10 [ label=&#34;3rd party library checking / vuln scanning&#34; ]

	// branch 1 edges
	reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	attack_1 -&gt; attack_win	

	// branch 2 edges
	reality -&gt; defense_1
	defense_1 -&gt; attack_2
	attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	attack_3 -&gt; attack_win

	// branch 3 edges
	reality -&gt; defense_2
	defense_2 -&gt; attack_4
	defense_2 -&gt; attack_5
	attack_4 -&gt; attack_6
	attack_5 -&gt; attack_6
	attack_6 -&gt; attack_7
	attack_7 -&gt; attack_win
	// potential mitigation path
	attack_7 -&gt; defense_3
	defense_3 -&gt; attack_8
	attack_8 -&gt; attack_win
	// potential mitigation path
	attack_8 -&gt; defense_4 
	defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	
	// branch 4 edges
	attack_5 -&gt; attack_9
	attack_9 -&gt; attack_11 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	// potential mitigation path
	attack_9 -&gt; defense_5 
	defense_5 -&gt; attack_10 
	attack_10 -&gt; attack_11
	// potential mitigation path
	attack_11 -&gt; defense_6 
	defense_6 -&gt; attack_12 
	attack_12 -&gt; attack_win

	// branch 5 edges
	attack_5 -&gt; attack_13
	attack_13 -&gt; attack_11
	attack_13 -&gt; defense_5

	// branch 6 edges
	attack_5 -&gt; attack_14
	attack_14 -&gt; attack_win
	attack_14 -&gt; attack_15
	// potential mitigation path
	attack_14 -&gt; defense_7 
	defense_7 -&gt; attack_15 
	attack_15 -&gt; attack_win
	// potential mitigation path
	attack_15 -&gt; defense_8 

	// branch 7 edges
	defense_2 -&gt; attack_16
	defense_5 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	defense_8 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	attack_16 -&gt; attack_17 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	// potential mitigation path
	attack_17 -&gt; defense_9 
	defense_9 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	attack_17 -&gt; attack_18
	// potential mitigation path
	attack_18 -&gt; defense_10

}
</code></pre><h3 id="adding-the-last-branch-branch-8">Adding the last branch (branch 8)</h3>
<p><a href="https://youtu.be/5Nvxv2R01po?t=5">We&rsquo;re now on the hardest branch for attackers</a>, the one requiring zero-day (&ldquo;0day&rdquo;) exploits or supply chain backdoors. These are expensive, whether in money or time, so attackers will generally use them as a last resort or if the return on investment (ROI) is more favorable &ndash; such as when those actions enable the ability to gain access to a bunch of organizations in one fell swoop, avoiding the need to compromise them individually.</p>
<p>Our last mitigation from the <a href="https://github.com/swagitda/security-decision-trees-graphviz/blob/main/branch-images/07-branch.png">seventh branch</a> was vulnerability (&ldquo;vuln&rdquo;) scanning, (ideally) eliminating the option for attackers to exploit a known vuln. Thus, attackers will either need to buy 0day or discover and develop 0day themselves. A potential mitigation to 0day exploits is, somewhat obviously, exploit detection and prevention.</p>
<p>Assuming this mitigation actually works<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>, attackers will be forced to try 0day affecting AWS multitenant systems. In response, defenders could adopt a single tenant AWS hardware security module (<a href="https://en.wikipedia.org/wiki/Hardware_security_module">HSM</a>) model, which would then force attackers to plant a backdoor in a component in AWS&rsquo;s supply chain.</p>
<p>For the purposes of illustration, I&rsquo;ve assumed that the organization creating this decision tree / threat model does <em>not</em> currently employ AWS HSMs. Therefore, the edge leading to that defense node is styled as a <code>dotted</code> line.</p>
<p>This final branch results in the following new nodes and edges:</p>
<pre tabindex="0"><code>attack_19 [ label=&#34;Manual discovery of 0day&#34; ]
attack_20 [ label=&#34;Buy 0day&#34; ]
attack_21 [ label=&#34;Exploit vulns&#34; ]
attack_22 [ label=&#34;0day in AWS multitenant systems&#34; ]
attack_23 [ label=&#34;Supply chain compromise (backdoor)&#34; ]
</code></pre><pre tabindex="0"><code>defense_11 [ label=&#34;Exploit prevention/ detection&#34; ]
defense_12 [ label=&#34;Use single tenant AWS HSM&#34; ]
</code></pre><pre tabindex="0"><code>// branch 8 edges
defense_10 -&gt; attack_19
defense_10 -&gt; attack_20
attack_19 -&gt; attack_21
attack_20 -&gt; attack_21
attack_21 -&gt; attack_win
// potential mitigation path
attack_21 -&gt; defense_11 
defense_11 -&gt; attack_22 
attack_22 -&gt; attack_win 
// potential mitigation path
attack_22 -&gt; defense_12 [ style=&#34;dotted&#34; ]
defense_12 -&gt; attack_23 
attack_23 -&gt; attack_win
</code></pre><p>With all our nodes and edges now in place, our graph looks like this:</p>
<p><img src="/blog/img/graphviz/attack-tree-10.png" alt="Eight branches of our decision tree"></p>
<p>It is very ugly and difficult to follow. We should proceed to the next steps so that we do not have to stare at this monstrosity further.</p>
<h2 id="step-8---beautifying-the-graph">Step 8 - Beautifying the graph</h2>
<p>Before we tackle the fact that many of the nodes are out of intended order, we should try to make this all look less hideous. Graphviz allows for some limited styling options, which, to be honest, I mostly figured out through guess and check given how sparse I found the docs to be.</p>
<h3 id="node-styling">Node styling</h3>
<p>Let&rsquo;s start by making the nodes look less like Word 95-era design. I personally chose to replace the outlines with a fill, using the same colors as before.</p>
<p>You can set global node design by inserting <code>node [ * ]</code> at the beginning of your .dot file. To get rid of the outlines, add the <code>shape</code> attribute with the value <code>plaintext</code> and add the <code>style</code> attribute with the value <code>filled, rounded</code> (I possess a fondness for rounded edges):</p>
<pre tabindex="0"><code>digraph {
	// Base Styling
	node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; ]
</code></pre><p>Because nodes are now filled with the previously-defined colors, we also need to lighten the font color for the <code>reality</code> and <code>attack_win</code> nodes; I chose white:</p>
<pre tabindex="0"><code>// base nodes
reality [ label=&#34;Reality&#34; fillcolor=&#34;#2B303A&#34; fontcolor=&#34;#ffffff&#34; ]
attack_win [ label=&#34;Access video recordings in S3 bucket (attackers win)&#34;
fillcolor=&#34;#DB2955&#34; fontcolor=&#34;#ffffff&#34; ]
</code></pre><p>Also, who uses Times New Roman anymore? Apparently Graphviz does, since it&rsquo;s the default font. Let&rsquo;s change the font to Lato:</p>
<pre tabindex="0"><code>// Base Styling
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34;]
</code></pre><p>Finally, we can make the nodes a bit roomier by adding a slight margin around the text within:</p>
<pre tabindex="0"><code>// Base Styling
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2]
</code></pre><p>Our graph now looks like this with the new node styling:</p>
<p><img src="/blog/img/graphviz/attack-tree-11.png" alt="The decision tree with filled and rounded nodes, plus Lato font"></p>
<p>It&rsquo;s already looking more modern! But we can do more.</p>
<h3 id="edge-styling">Edge styling</h3>
<p>We can make the edges prettier, too, by changing the #yolosec label font to Lato and by lightening the lines up slightly so they aren&rsquo;t in stark black:</p>
<pre tabindex="0"><code>// Base Styling
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34;]
edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]
</code></pre><p>We can see the results of our slight changes here:</p>
<p><img src="/blog/img/graphviz/attack-tree-12.png" alt="The decision tree with lighter edges, plus Lato font"></p>
<h3 id="graph-styling">Graph styling</h3>
<p>There&rsquo;s a lot of whitespace in our graph right now, which arguably reduces navigability. Ideally, the graph should be a solidly readable size in the <code>Page Width</code> view in a PDF reader. So, let&rsquo;s reduce some of the white space.</p>
<p>We can set styling for the whole graph by inserting it above the <code>node [ * ]</code> and <code>edge [ * ]</code> base styling we added above. Let&rsquo;s start by reducing the horizontal distance between nodes via the <code>nodesep</code> attribute and the vertical distance via the <code>ranksep</code> attribute:</p>
<pre tabindex="0"><code>// Base Styling
nodesep=&#34;0.2&#34;;
ranksep=&#34;0.4&#34;;
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2]
edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]
</code></pre><p>For this next styling option, I&rsquo;m going to level with y&rsquo;all: this combination resulted in the best visual outcomes after a lot of guess and check, but I&rsquo;m still not 100% sure what they do. In any case, setting <code>splines=true</code> and <code>overlap=false</code> seems to generate the cleanest visualization:</p>
<pre tabindex="0"><code>// Base Styling
splines=true;
overlap=false;
nodesep=&#34;0.2&#34;;
ranksep=&#34;0.4&#34;;
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2]
edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]
</code></pre><p>I also added in the attribute specifying the graph should be visualized from top to bottom, even though it&rsquo;s the default (I am risk averse):</p>
<pre tabindex="0"><code>rankdir=&#34;TB&#34;;
</code></pre><p>Last, but certainly not least, I titled the graph using the <code>label</code> attribute and set the label location to the top with <code>labelloc</code>. With all of this incorporated, the base styling section in the .dot file now looks like this:</p>
<pre tabindex="0"><code>// Base Styling
rankdir=&#34;TB&#34;;
splines=true;
overlap=false;
nodesep=&#34;0.2&#34;;
ranksep=&#34;0.4&#34;;
label=&#34;Attack Tree for S3 Bucket with Video Recordings&#34;;
labelloc=&#34;t&#34;;
fontname=&#34;Lato&#34;;
node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; margin=0.2 fontname=&#34;Lato&#34; ]
edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]
</code></pre><p>With the new styling complete, our graph looks much more visually appealing:</p>
<p><img src="/blog/img/graphviz/attack-tree-13.png" alt="The decision tree with the base styling"></p>
<p>However, it&rsquo;s still a little confusing due to the errant default node placement by Graphviz. We&rsquo;ll fix this in the next step.</p>
<h2 id="step-9---fixing-the-ordering">Step 9 - Fixing the ordering</h2>
<p>One of the benefits of the decision tree is to visualize a threat model in order of easiest / lowest cost attacker path to hardest / highest cost attacker path (generally from left to right). By default, Graphviz does not respect the order in which we&rsquo;ve written our nodes and edges, necessitating some fixes.</p>
<p>I approached this necessary re-ordering by creating a <em>cluster</em> for each group of nodes that should be equal in hierarchy. In Graphviz, a cluster is encoded as a <a href="https://graphviz.org/Gallery/directed/cluster.html"><code>subgraph</code></a>, which can be used for a variety of purposes beyond the aesthetic ordering one in this post.</p>
<p>The three clusters in our tree diagram are:</p>
<ul>
<li>The initial nodes after the reality node: the API cache, disallowing crawling on site maps, and private buckets</li>
<li>The attack nodes after auth is required: brute force, phishing, and recon on S3 buckets</li>
<li>The subsequent attack nodes after phishing: compromise user creds, admin creds, AWS admin creds, or pre-signed URLs</li>
</ul>
<p>We can encode these clusters as subgraphs with the attribute <code>rank=same</code> (to weight the nodes equally in the hierarchy) along with the list of relevant nodes in the cluster:</p>
<pre tabindex="0"><code>// Subgraphs / Clusters
subgraph initialstates {
	rank=same;
	attack_1;
	defense_1;
	defense_2;
}

subgraph authrequired {
	rank=same;
	attack_4;
	attack_5;
	attack_16;
}

subgraph phishcluster {
	rank=same;
	attack_6;
	attack_9;
	attack_13;
	attack_14;
} 
</code></pre><p>I would like to spare y&rsquo;all the vexation I experienced when Graphviz didn&rsquo;t respect the order in which I listed the nodes within a cluster. For instance, instead of showing <code>attack_4</code> as the leftmost node in the <code>authrequired</code> cluster and <code>attack_16</code> as the rightmost, Graphviz seemed to prefer to use a methodology reflected by <code>¯\_(ツ)_/¯</code>.</p>
<p>What seems to fix this ordering issue is creating invisible edges that enforce the left to right ordering. For our graph, the fix is specifically found in enforcing the correct order in the <code>phishcluster</code> subgraph:</p>
<pre tabindex="0"><code>attack_6 -&gt; attack_9 -&gt; attack_13 -&gt; attack_14 [ style=&#34;invis&#34; ]
</code></pre><p>Aren&rsquo;t computers great? In any case, our graph now accurately visualizes the ordering of our decision tree:</p>
<p><img src="/blog/img/graphviz/attack-tree-14.png" alt="The decision tree with the base styling"></p>
<h2 id="step-10---tweaking-the-design">Step 10 - Tweaking the design</h2>
<p>There are other tweaks that can make this graph (and the .dot file itself!) more digestible.</p>
<p>I chose to add line breaks for particularly long node labels, such as <code>label=&quot;API cache\n(e.g. Wayback\nMachine)&quot;</code>, and definitely recommend it for your own tree. I also added in more comments to the .dot file so that someone else reading it could better understand what is going on.</p>
<p>With these last tweaks, this is how our final .dot file looks:<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup></p>
<pre tabindex="0"><code>digraph {
	// Base Styling
	rankdir=&#34;TB&#34;;
	splines=true;
	overlap=false;
	nodesep=&#34;0.2&#34;;
	ranksep=&#34;0.4&#34;;
	label=&#34;Attack Tree for S3 Bucket with Video Recordings&#34;;
	labelloc=&#34;t&#34;;
	fontname=&#34;Lato&#34;;
	node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2 ]
	edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]

	// List of Nodes

	// base nodes
	reality [ label=&#34;Reality&#34; fillcolor=&#34;#2B303A&#34; fontcolor=&#34;#ffffff&#34; ]
	attack_win [ label=&#34;Access video\nrecordings in\nS3 bucket\n(attackers win)&#34; fillcolor=&#34;#DB2955&#34; fontcolor=&#34;#ffffff&#34; ]

  	// attack nodes
  	node [ color=&#34;#ED96AC&#34; ]
	attack_1 [ label=&#34;API cache\n(e.g. Wayback\nMachine)&#34; color=&#34;#C6CCD2&#34; ]
	attack_2 [ label=&#34;AWS public\nbuckets search&#34; ]
	attack_3 [ label=&#34;S3 bucket\nset to public&#34; color=&#34;#C6CCD2&#34; ]
	attack_4 [ label=&#34;Brute force&#34; ]
	attack_5 [ label=&#34;Phishing&#34; ]
	attack_6 [ label=&#34;Compromise\nuser credentials&#34; ]
	attack_7 [ label=&#34;Subsystem with\naccess to\nbucket data&#34; color=&#34;#C6CCD2&#34; ]
	attack_8 [ label=&#34;Manually analyze\nweb client for access\ncontrol misconfig&#34; ]
	attack_9 [ label=&#34;Compromise\nadmin creds&#34; ]
	attack_10 [ label=&#34;Intercept 2FA&#34; ]
	attack_11 [ label=&#34;SSH to an\naccessible\nmachine&#34; ]
	attack_12 [ label=&#34;Lateral movement to\nmachine with access\nto target bucket&#34; ]
	attack_13 [ label=&#34;Compromise\nAWS admin creds&#34; ]
	attack_14 [ label=&#34;Compromise\npresigned URLs&#34; ]
	attack_15 [ label=&#34;Compromise\nURL within N\ntime period&#34; ]
	attack_16 [ label=&#34;Recon on S3 buckets&#34; ]
	attack_17 [ label=&#34;Find systems with\nR/W access to\ntarget bucket&#34; ]
	attack_18 [ label=&#34;Exploit known 3rd\nparty library vulns&#34; ]
	attack_19 [ label=&#34;Manual discovery\nof 0day&#34; ]
	attack_20 [ label=&#34;Buy 0day&#34; ]
	attack_21 [ label=&#34;Exploit vulns&#34; ]
	attack_22 [ label=&#34;0day in AWS\nmultitenant systems&#34; ]
	attack_23 [ label=&#34;Supply chain\ncompromise\n(backdoor)&#34; ]

	// defense nodes
	node [ color=&#34;#ABD2FA&#34; ]
	defense_1 [ label=&#34;Disallow\ncrawling\non site maps&#34; ]
	defense_2 [ label=&#34;Auth required / ACLs\n(private bucket)&#34; ]
	defense_3 [ label=&#34;Lock down\nweb client with\ncreds / ACLs&#34; ]
	defense_4 [ label=&#34;Perform all access\ncontrol server-side&#34; ]
	defense_5 [ label=&#34;2FA&#34; ]
	defense_6 [ label=&#34;IP allowlist for SSH&#34; ]
	defense_7 [ label=&#34;Make URL\nshort lived&#34; ]
	defense_8 [ label=&#34;Disallow the use\nof URLs to\naccess buckets&#34; ]
	defense_9 [ label=&#34;No public system\nhas R/W access\n(internal only)&#34; ]
	defense_10 [ label=&#34;3rd party library\nchecking / vuln\nscanning&#34; ]
	defense_11 [ label=&#34;Exploit prevention\n/ detection&#34; ]
	defense_12 [ label=&#34;Use single tenant\nAWS HSM&#34; ]

	// List of Edges

	// branch 1 edges
	// this starts from the reality node and connects with the first &#34;attack&#34;,
	// which is really just taking advantage of #yolosec (big oof)
	reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	attack_1 -&gt; attack_win	

	// branch 2 edges
	// this connects the reality node to the first mitigation, 
	// which helps avoid the #yolosec path from branch 1
	reality -&gt; defense_1
	defense_1 -&gt; attack_2
	attack_2 -&gt; attack_3 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	attack_3 -&gt; attack_win

	// branch 3 edges
	// this connects the reality node to another mitigation,
	// which helps avoid the #yolosec path from branch 2
	reality -&gt; defense_2
	defense_2 -&gt; attack_4
	defense_2 -&gt; attack_5
	attack_4 -&gt; attack_6
	attack_5 -&gt; attack_6
	attack_6 -&gt; attack_7
	attack_7 -&gt; attack_win
	// potential mitigation path
	attack_7 -&gt; defense_3
	defense_3 -&gt; attack_8
	attack_8 -&gt; attack_win
	// potential mitigation path
	attack_8 -&gt; defense_4 
	defense_4 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	
	// branch 4 edges
	// this starts from the last mitigation loop vs. the reality node
	attack_5 -&gt; attack_9
	attack_9 -&gt; attack_11 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	// potential mitigation path
	attack_9 -&gt; defense_5 
	defense_5 -&gt; attack_10 
	attack_10 -&gt; attack_11
	// potential mitigation path
	attack_11 -&gt; defense_6 
	defense_6 -&gt; attack_12 
	attack_12 -&gt; attack_win

	// branch 5 edges
	// this also represents a branch from the prior mitigation loop
	// but it is more difficult than branch 4, hence comes after
	// the new attack step allows attackers to skip some steps on branch 4
	// so it links back to branch 4, whose edges are already defined
	attack_5 -&gt; attack_13
	attack_13 -&gt; attack_11
	attack_13 -&gt; defense_5

	// branch 6 edges
	// depending on the mitigations, the initial node allows for different outcomes
	// this also represents a branch from the prior mitigation loop
	// it is more difficult than branch 4 and branch 5, hence comes after
	attack_5 -&gt; attack_14
	attack_14 -&gt; attack_win
	attack_14 -&gt; attack_15
	// potential mitigation path
	attack_14 -&gt; defense_7 
	defense_7 -&gt; attack_15 
	attack_15 -&gt; attack_win
	// potential mitigation path
	attack_15 -&gt; defense_8 

	// branch 7 edges
	// a new loop is born!
	// the first edges tie prior mitigations to the new attack step
	defense_2 -&gt; attack_16
	defense_5 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	defense_8 -&gt; attack_16 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	attack_16 -&gt; attack_17 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	// potential mitigation path
	attack_17 -&gt; defense_9 
	defense_9 -&gt; attack_5 [ style=&#34;dashed&#34; color=&#34;#7692FF&#34; ]
	attack_17 -&gt; attack_18
	// potential mitigation path
	attack_18 -&gt; defense_10

	// branch 8 edges
	// we&#39;ve reached the last path!
	// this is the most expensive one for attackers.
	// these attacks are definitely uncommon...
	// ...because attackers will be cheap / lazy if they can be.
	// these edges start from the last mitigation from branch 7
	defense_10 -&gt; attack_19
	defense_10 -&gt; attack_20
	attack_19 -&gt; attack_21
	attack_20 -&gt; attack_21
	attack_21 -&gt; attack_win
	// potential mitigation path
	attack_21 -&gt; defense_11 
	defense_11 -&gt; attack_22 
	attack_22 -&gt; attack_win 
	// potential mitigation path
	// for the purposes of illustration, this path represents a mitigation
	// that isn&#39;t actually implemented yet -- hence a dotted edge
	attack_22 -&gt; defense_12 [ style=&#34;dotted&#34; ]
	defense_12 -&gt; attack_23 
	attack_23 -&gt; attack_win

	// Subgraphs / Clusters

	// these clusters enforce the correct hierarchies
	subgraph initialstates {
    	rank=same;
    	attack_1;
    	defense_1;
    	defense_2;
  	}
	subgraph authrequired {
    	rank=same;
    	attack_4;
    	attack_5;
    	attack_16;
  	}
  	subgraph phishcluster {
    	rank=same;
    	attack_6;
    	attack_9;
    	attack_13;
    	attack_14;
    	rankdir=LR;
  	}
  	// these invisible edges are to enforce the correct left-to-right order 
  	// based on the level of attack difficulty
  	attack_6 -&gt; attack_9 -&gt; attack_13 -&gt; attack_14 [ style=&#34;invis&#34; ]
}
</code></pre><h2 id="conclusion">Conclusion</h2>
<p>After these ten steps, we&rsquo;ve successfully recreated the decision tree from <a href="https://www.kellyshortridge.com/book.html">the SCE report</a> and optimized it for readability, too:</p>
<p><img src="/blog/img/graphviz/attack-tree-15.png" alt="The final decision tree for threat modeling an S3 bucket containing sensitive data"></p>
<p>While it may feel daunting to create your first decision tree in this manner, the good news is you now have a base template with styling that you can use to threat model other critical assets.</p>
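<p>If it helps, here&rsquo;s a minimal starter skeleton you could copy as the seed of your own tree. It&rsquo;s just a sketch distilling the base styling above, with placeholder node labels and a placeholder title:</p>
<pre tabindex="0"><code>digraph {
	// Base Styling (same attributes as the final tree above)
	rankdir=&#34;TB&#34;;
	splines=true;
	overlap=false;
	nodesep=&#34;0.2&#34;;
	ranksep=&#34;0.4&#34;;
	label=&#34;Attack Tree for [your critical asset]&#34;;
	labelloc=&#34;t&#34;;
	fontname=&#34;Lato&#34;;
	node [ shape=&#34;plaintext&#34; style=&#34;filled, rounded&#34; fontname=&#34;Lato&#34; margin=0.2 ]
	edge [ fontname=&#34;Lato&#34; color=&#34;#2B303A&#34; ]

	// base nodes
	reality [ label=&#34;Reality&#34; fillcolor=&#34;#2B303A&#34; fontcolor=&#34;#ffffff&#34; ]
	attack_win [ label=&#34;Attackers win&#34; fillcolor=&#34;#DB2955&#34; fontcolor=&#34;#ffffff&#34; ]

	// attack nodes (placeholders)
	node [ color=&#34;#ED96AC&#34; ]
	attack_1 [ label=&#34;First attacker action&#34; ]

	// defense nodes (placeholders)
	node [ color=&#34;#ABD2FA&#34; ]
	defense_1 [ label=&#34;First mitigation&#34; ]

	// edges
	reality -&gt; attack_1 [ xlabel=&#34;#yolosec&#34; fontcolor=&#34;#DB2955&#34; ]
	attack_1 -&gt; attack_win
	reality -&gt; defense_1
}
</code></pre><p>Rendering it with the same <code>dot -Tpdf</code> command as before should give you the same look and feel as the final tree, ready for you to flesh out branch by branch.</p>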
<p>If you try this out yourself or for your own organization, I welcome any and all feedback on how the .dot config or process itself can be improved. Security chaos engineering is a blossoming discipline bearing real potential to make infosec finally not suck, so we should help each other level up however we can.</p>
<hr>
<p>Thank you shoutout to Team Bad &lt;3</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>One notable benefit of this post is that it helps you avoid using Visio, which feels like the type of tool a petty Greek god would create just to torture a human who slighted their ego.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>There is also arguably an incentive to avoid obviously bad things happening so that the security team cannot seize upon the crisis to impose heavier change or release processes, as security is infamously wont to do.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Yes, I am aware of the SolarWinds breach. Discussing the attacker math behind it is a blog post for another time. Suffice to say, the average criminal group is much less motivated to employ a supply chain compromise than a nation state &ndash; especially a nation state with a notoriously lower bar for stealthiness than other nation states.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>This post assumes that reality can at least be approximately objectively defined. Whether or not that is an appropriate assumption is a topic I would relish discussing IRL over a matcha oatmilk latte once the plague time is over.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>To start out, you can also define another possible end state of &ldquo;Attackers Lose.&rdquo; A sufficiently incentivized attacker will escalate resource expenditure as needed in order to reach their goal, so I think this is generally an unrealistic end state. However, I also argue that for many organizations, it&rsquo;s a relatively sane threat model to accept the risk of attackers throwing 0day at you. If you&rsquo;ve made compromising your business-critical assets so difficult that attackers must resort to 0day, you&rsquo;ve done quite a lot right in your security program. And, again, it suggests that the attacker is extremely motivated to compromise you, so the marginal benefit of defending against 0day or even costlier attacker actions is pretty poor. In contrast, <a href="https://twitter.com/swagitda_/status/1341023749647327232">the marginal benefit of something like two-factor authentication</a> is resoundingly high.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Be skeptical whenever a vendor is claiming to detect 0day, especially if the words &ldquo;AI&rdquo; or &ldquo;deep learning&rdquo; are in the same sentence.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>I realized at the end I forgot to describe adding the bold strawberry font color to the &ldquo;#yolosec&rdquo; labels. I am hoping that you all are smart and can leverage the full .dot file to figure out how to do it yourself.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2020 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2020-reading-list/</link>
            <pubDate>Tue, 22 Dec 2020 16:13:58 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2020-reading-list/</guid>
            <description>This year, I relished the newfound time begotten by the social distancing paradigm to indulge in even more books than usual. To wit, I averaged around 3.7 books per month in 2020 compared to 2.7 books per month in 2019 – and approximately tripled what I achieved each year from 2016 to 2018.
Notably, I also published my first book: the Security Chaos Engineering e-book (non-fiction) published via O’Reilly Media. It’s available for free download if you want to check it out and add some brain-stimulating non-fiction to your reading queue.
If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out my 2019, my 2018, my 2017, and my 2016 reading lists.
Fiction
Akata Warrior by Nnedi Okorafor
Akata Witch by Nnedi Okorafor
And Shall Machines Surrender (Machine Mandate Book 1) by Benjanun Sriduangkaew
Artificial Condition: The Murderbot Diaries by Martha Wells
Beyond the Dragon’s Gate by Yoon Ha Lee
Black Sun (Between Earth and Sky) by Rebecca Roanhorse
Blood Is Another Word for Hunger by Rivers Solomon
Children of Virtue and Vengeance (Legacy of Orisha, 2) by Tomi Adeyemi
The Crying of Lot 49 by Thomas Pynchon
The Dark Forest (The Three-Body Problem Series, 2) by Cixin Liu
Despair by Vladimir Nabokov
The Dream-Quest of Vellitt Boe by Kij Johnson
Exit Strategy: The Murderbot Diaries by Martha Wells
Flights by Olga Tokarczuk
The Gurkha and the Lord of Tuesday by Saad Z. Hossain
Harrow the Ninth (The Locked Tomb Trilogy, 2) by Tamsyn Muir
The Hour of the Star by Clarice Lispector
The Lathe Of Heaven: A Novel by Ursula K. Le Guin
Love Beyond Body, Space, and Time: An LGBT and two-spirit sci-fi anthology featuring Cherie Dimaline, Gwen Benaway, David Robertson, Richard Van Camp, Nathan Adler, Daniel Heath Justice, Darcie Little Badger, and Cleo Keahna
The Metamorphosis by Franz Kafka (re-read)
Network Effect: A Murderbot Novel (The Murderbot Diaries Book 5) by Martha Wells
Ninth Step Station by Malka Older, Fran Wilde, Jacqueline Koyanagi, and Curtis C. Chen
Null States (The Centenal Cycle, 2) by Malka Older
The Plague by Albert Camus
Pnin by Vladimir Nabokov
Queen of the Conquered (Islands of Blood and Storm, 1) by Kacen Callender
Realm of Ash (The Books of Ambha Book 2) by Tasha Suri
Ring Shout by P. Djèlí Clark
Rogue Protocol: The Murderbot Diaries by Martha Wells
State Tectonics (The Centenal Cycle) by Malka Older
The Cat Who Walked a Thousand Miles by Kij Johnson
This Is How You Lose the Time War by Amal El-Mohtar and Max Gladstone
The Time Invariance of Snow by E. Lily Yu
To Say Nothing of the Dog by Connie Willis
Thus Were Their Faces: Selected Stories by Silvina Ocampo
Non-Fiction
Building Secure &amp; Reliable Systems: Best Practices for Designing, Implementing and Maintaining Systems by Heather Adkins, Betsy Beyer, Paul Blankinship, Piotr Lewandowski, Ana Oprea, and Adam Stubblefield (I’m honored to have been a reviewer for it!)
The Cosmic Revolutionary’s Handbook (Or: How to Beat the Big Bang) by Luke A. Barnes and Geraint F. Lewis
The End of Everything (Astrophysically Speaking) by Katie Mack
How to Be an Antiracist by Ibram X. Kendi
The Human Condition by Hannah Arendt
Invisible Agents: Women and Espionage in Seventeenth-Century Britain by Nadine Akkerman
Lost in Math by Sabine Hossenfelder
Mechanizing Proof by Donald MacKenzie
Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith by Sam Newman
Venice’s Secret Service: Organising Intelligence in the Renaissance by Ioanna Iordanou
</description>
            <atom:content type="html"><![CDATA[<p>This year, I relished the newfound time beget by the social distancing paradigm to indulge in even more books than usual. To wit, I averaged around 3.7 books per month in 2020 compared to 2.7 books per month in 2019 &ndash; and approximately tripled what I achieved each year from 2016 to 2018.</p>
<p>Notably, I also published my first book: the Security Chaos Engineering e-book (non-fiction) published via O&rsquo;Reilly Media. It&rsquo;s <a href="https://www.kellyshortridge.com/book.html">available for free download</a> if you want to check it out and add some brain-stimulating non-fiction to your reading queue.</p>
<p>If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out <a href="/blog/posts/2019-reading-list">my 2019</a>, <a href="/blog/posts/2018-reading-list">my 2018</a>, <a href="/blog/posts/2017-reading-list">my 2017</a>, and <a href="/blog/posts/2016-reading-list">my 2016</a> reading lists.</p>
<h2 id="fiction">Fiction</h2>
<p><a href="https://www.amazon.com/Akata-Warrior-Nnedi-Okorafor/dp/067078561X">Akata Warrior</a> by Nnedi Okorafor</p>
<p><a href="https://www.amazon.com/Akata-Witch-Nnedi-Okorafor/dp/0670011967">Akata Witch</a> by Nnedi Okorafor</p>
<p><a href="https://www.amazon.com/Shall-Machines-Surrender-Benjanun-Sriduangkaew-ebook/dp/B07SJWJ7VB">And Shall Machines Surrender (Machine Mandate Book 1)</a> by Benjanun Sriduangkaew</p>
<p><a href="https://www.amazon.com/gp/product/B075DGHHQL">Artificial Condition: The Murderbot Diaries</a> by Martha Wells</p>
<p><a href="https://www.tor.com/2020/05/20/beyond-the-dragons-gate-yoon-ha-lee/">Beyond the Dragon&rsquo;s Gate</a> by Yoon Ha Lee</p>
<p><a href="https://www.amazon.com/Black-Sun-Between-Earth-Sky/dp/1534437673">Black Sun (Between Earth and Sky)</a> by Rebecca Roanhorse</p>
<p><a href="https://www.tor.com/2019/07/24/blood-is-another-word-for-hunger-rivers-solomon/">Blood Is Another Word for Hunger</a> by Rivers Solomon</p>
<p><a href="https://www.amazon.com/Children-Virtue-Vengeance-Legacy-Orisha/dp/1250170990">Children of Virtue and Vengeance (Legacy of Orisha, 2)</a> by Tomi Adeyemi</p>
<p><a href="https://www.amazon.com/Crying-Lot-Perennial-Fiction-Library/dp/006091307X/">The Crying of Lot 49</a> by Thomas Pynchon</p>
<p><a href="https://www.amazon.com/Dark-Forest-Remembrance-Earths-Past/dp/0765386690">The Dark Forest (The Three-Body Problem Series, 2)</a> by Cixin Liu</p>
<p><a href="https://www.amazon.com/Despair-Vladimir-Nabokov/dp/0679723439/">Despair</a> by Vladimir Nabokov</p>
<p><a href="https://www.amazon.com/Dream-Quest-Vellitt-Boe-Kij-Johnson-ebook/dp/B01DJ0NARW">The Dream-Quest of Vellitt Boe</a> by Kij Johnson</p>
<p><a href="https://www.amazon.com/gp/product/B078X1N8VF">Exit Strategy: The Murderbot Diaries</a> by Martha Wells</p>
<p><a href="https://www.amazon.com/Flights-Olga-Tokarczuk/dp/0525534202/">Flights</a> by Olga Tokarczuk</p>
<p><a href="https://www.amazon.com/Gurkha-Lord-Tuesday-Saad-Hossain/dp/1250209110">The Gurkha and the Lord of Tuesday</a> by Saad Z. Hossain</p>
<p><a href="https://www.amazon.com/Harrow-Ninth-Locked-Tomb-Trilogy/dp/1250313228">Harrow the Ninth (The Locked Tomb Trilogy, 2)</a> by Tamsyn Muir</p>
<p><a href="https://www.amazon.com/Hour-Star-New-Directions-Paperbook/dp/0811211908">The Hour of the Star</a> by Clarice Lispector</p>
<p><a href="https://www.amazon.com/Lathe-Heaven-Ursula-K-Guin/dp/1416556966">The Lathe Of Heaven: A Novel</a> by Ursula K. Le Guin</p>
<p><a href="https://www.amazon.com/Love-Beyond-Body-Space-Time-ebook/dp/B01L0HRHMU">Love Beyond Body, Space, and Time: An LGBT and two-spirit sci-fi anthology</a> featuring Cherie Dimaline, Gwen Benaway, David Robertson, Richard Van Camp, Nathan Adler, Daniel Heath Justice, Darcie Little Badger, and Cleo Keahna</p>
<p><a href="https://www.gutenberg.org/files/5200/5200-h/5200-h.htm">The Metamorphosis</a> by Franz Kafka (re-read)</p>
<p><a href="https://www.amazon.com/Network-Effect-Murderbot-Novel-Diaries-ebook/dp/B07WZ7SB5D/">Network Effect: A Murderbot Novel (The Murderbot Diaries Book 5)</a> by Martha Wells</p>
<p><a href="https://www.serialbox.com/serials/ninth-step-station">Ninth Step Station</a> by Malka Older, Fran Wilde, Jacqueline Koyanagi, and Curtis C. Chen</p>
<p><a href="https://www.amazon.com/Null-States-Book-Centenal-Cycle/dp/0765393387">Null States (The Centenal Cycle, 2)</a> by Malka Older</p>
<p><a href="https://www.amazon.com/Plague-Albert-Camus/dp/0679720219">The Plague</a> by Albert Camus</p>
<p><a href="https://www.amazon.com/Pnin-Vladimir-Nabokov/dp/0679723412">Pnin</a> by Vladimir Nabokov</p>
<p><a href="https://www.amazon.com/Queen-Conquered-Islands-Blood-Storm/dp/0316454931">Queen of the Conquered (Islands of Blood and Storm, 1)</a> by Kacen Callender</p>
<p><a href="https://www.amazon.com/Realm-Ash-Books-Ambha-Book-ebook/dp/B07P8LM4Y4/">Realm of Ash (The Books of Ambha Book 2)</a> by Tasha Suri</p>
<p><a href="https://www.amazon.com/Ring-Shout-P-Dj%C3%A8l%C3%AD-Clark/dp/1250767024">Ring Shout</a> by P. Djèlí Clark</p>
<p><a href="https://www.amazon.com/gp/product/B0756JSWGL">Rogue Protocol: The Murderbot Diaries</a> by Martha Wells</p>
<p><a href="https://www.amazon.com/State-Tectonics-Centenal-Cycle-Malka/dp/0765399474">State Tectonics (The Centenal Cycle)</a> by Malka Older</p>
<p><a href="https://www.tor.com/2009/07/14/the-cat-who-walked-a-thousand-miles/">The Cat Who Walked a Thousand Miles</a> by Kij Johnson</p>
<p><a href="https://www.amazon.com/This-How-You-Lose-Time/dp/1534431004">This Is How You Lose the Time War</a> by Amal El-Mohtar and Max Gladstone</p>
<p><a href="https://www.tor.com/2019/12/04/the-time-invariance-of-snow-e-lily-yu/">The Time Invariance of Snow</a> by E. Lily Yu</p>
<p><a href="https://www.amazon.com/Say-Nothing-Dog-Connie-Willis/dp/0553575384">To Say Nothing of the Dog</a> by Connie Willis</p>
<p><a href="https://www.amazon.com/Thus-Were-Their-Faces-Selected/dp/1590177673">Thus Were Their Faces: Selected Stories</a> by Silvina Ocampo</p>
<hr>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://static.googleusercontent.com/media/sre.google/en//static/pdf/building_secure_and_reliable_systems.pdf">Building Secure &amp; Reliable Systems: Best Practices for Designing, Implementing and Maintaining Systems</a> by Heather Adkins, Betsy Beyer, Paul Blankinship, Piotr Lewandowski, Ana Oprea, and Adam Stubblefield (I&rsquo;m honored to have been a reviewer for it!)</p>
<p><a href="https://www.amazon.com/Cosmic-Revolutionarys-Handbook-Beat-Bang/dp/1108486703/">The Cosmic Revolutionary&rsquo;s Handbook (Or: How to Beat the Big Bang)</a> by Luke A. Barnes and Geraint F. Lewis</p>
<p><a href="https://www.amazon.com/End-Everything-Astrophysically-Speaking/dp/198210354X">The End of Everything (Astrophysically Speaking)</a> by Katie Mack</p>
<p><a href="https://www.amazon.com/How-Be-Antiracist-Ibram-Kendi/dp/0525509283/">How to Be an Antiracist</a> by Ibram X. Kendi</p>
<p><a href="https://www.amazon.com/Human-Condition-2nd-Hannah-Arendt/dp/0226025985">The Human Condition</a> by Hannah Arendt</p>
<p><a href="https://www.amazon.com/Invisible-Agents-Espionage-Seventeenth-Century-Britain/dp/0198823010">Invisible Agents: Women and Espionage in Seventeenth-Century Britain</a> by Nadine Akkerman</p>
<p><a href="https://www.amazon.com/Lost-Math-Beauty-Physics-Astray/dp/0465094252">Lost in Math</a> by Sabine Hossenfelder</p>
<p><a href="https://mitpress.mit.edu/books/mechanizing-proof">Mechanizing Proof</a> by Donald MacKenzie</p>
<p><a href="https://www.amazon.com/Monolith-Microservices-Evolutionary-Patterns-Transform/dp/1492047848">Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith</a> by Sam Newman</p>
<p><a href="https://www.amazon.com/Venices-Secret-Service-Intelligence-Renaissance/dp/0198791313/">Venice&rsquo;s Secret Service: Organising Intelligence in the Renaissance</a> by Ioanna Iordanou</p>
]]></atom:content>
        </item>
        
        <item>
            <title>IBM &#43; Red Hat: Bamboozles, Foozles, and the Hybrid Cloud Chimera</title>
            <link>https://kellyshortridge.com/blog/posts/ibm-red-hat-bamboozles-foozles-and-the-hybrid-cloud-chimera/</link>
            <pubDate>Mon, 14 Dec 2020 08:00:00 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/ibm-red-hat-bamboozles-foozles-and-the-hybrid-cloud-chimera/</guid>
            <description>Mythgard. Sideshow Chimera by Alexander Mokhov
IBM recently announced that it is spinning off its “IT infrastructure services” unit so that it can streamline its focus solely towards IBM’s “open hybrid cloud platform” by the end of 2021. Leveraging this aggressive assertion, I’ll be referring to this rebranded IBM as “IBM Hybrid Cloud” (vs. current IBM as just “IBM”) throughout this post for clarity.
I think most people – whether tech workers or investors – would agree that cloud stuff in general is far more riveting than IT services both intellectually and from a growth prospects perspective. With that said, public investors seem relatively unimpressed; while there was an initial stock price bump upon the announcement, the price is back to where it was pre-announcement1. In this post, I want to explore the answer to the question that seems to be floating around the market mindshare: what is IBM Hybrid Cloud’s future, really?
The best place to start this peregrination is probably to evaluate the existing IBM Cloud offering in the context of the public cloud providers. IBM, as you all are assuredly aware, is not one of the “Big Three,” the moniker given to Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure due to their outsized share of the market. Perhaps most inauspiciously, Gartner places IBM Public Cloud on their Magic Quadrant (MQ) for Cloud IaaS in the pitiable “niche” quadrant. To add insult to injury, IBM rests behind Oracle Cloud both in “Ability to Execute” and in “Completeness of Vision” – and IBM’s position as a languishing laggard on the MQ hasn’t budged for three years.
Thus, we arrive at our next line of inquiry: What went wrong? How did IBM totally whiff execution on cloud offerings? Could they not discern the blindingly obvious writing in the stars2?
The Bare Metal Blunder
At least one ingredient in IBM’s stew of ineptitude was (and still is) their predilection for all things Watson, which clouded3 their judgment around the IaaS market.
IBM simultaneously rebuffed public cloud by insisting on pursuing the bare metal opportunity – the market for single tenant (unshared) physical servers – while also neglecting it, attempting to eke out greater profitability by decreasing engineering spend at a time when competitors were firing their proverbial money cannons at the IaaS market. IBM pursued the bare metal opportunity via their acquisition of SoftLayer in 2013 for $2 billion, which perhaps added a salty sprinkle of sunk cost fallacy to their clumsy calculus.
Unfortunately, IBM betting on SoftLayer was like buying a racehorse which you simultaneously neglect and “transform” via bureaucracy so irresponsibly that by the day of the race, its hoofs have downgraded into flippers and the horse is “racing” so slowly that the spectators are confused whether to laugh or cry.
Let’s set some quantitative context around this debacle. SoftLayer reportedly generated $335 million in revenue in 20124 (pre-acquisition) and its revenue is now bundled into IBM Cloud’s “Infrastructure-as-a-Service” offering within the (relatively) newly defined “Cloud and Cognitive Software” segment5. Given the segment’s overall revenue was $4.2 billion in 20196 and Red Hat’s most recent fiscal year revenue pre-acquisition (more on that deal in a bit) was $3.4 billion7, the maximum revenue SoftLayer could have contributed in 2019 was around $1 billion.
That is an impressively mediocre growth story. To wit, in 2013 (the same year as the SoftLayer acquisition), annual AWS revenues reached $3.8 billion8 and grew to $35 billion in 20199 — reflecting a healthy compound annual growth rate (CAGR) of 44.8%. IBM’s bare metal bet via SoftLayer reflects, at best, a 13.2% CAGR10 — only 0.2% higher than the CAGR for the Trucking industry11. That is, as the kids say, a big oof.
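(For reference: CAGR = (ending revenue / starting revenue)^(1 / number of years) - 1, so the AWS figures above work out to ($35 billion / $3.8 billion)^(1/6) - 1 ≈ 44.8%.)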
Red Hat Redemption?
While IBM was off hobbling its racehorse, their repudiation of the cloud opportunity led to a power vacuum in the “trustworthy provider with reliable long-term support” domain, which a voracious Microsoft was primed to fill. This is partially why, despite chatter of Azure presenting inferior value across a variety of facets relative to AWS or GCP, sales are still quite stellar12.
Presumably sensing their nebulous13 prospects in the cloud market, IBM completed the acquisition of Red Hat for a staggering $34 billion in summer 2019. This purchase price reflected a stock price premium of approximately 60%, which is double the typical premium14 and perhaps a rare moment of lucidity at IBM in recognizing that no one wants to be acquired by them.
The surface rationale of the deal is that IBM can cross-sell Red Hat’s offerings to its customers, which is… quite a bit harder without the managed infrastructure services business, especially given this was the rationale highlighted in the investor presentation on the deal. But we will pull on that paradox’s thread soon enough.
What did IBM receive from Red Hat to cross-sell? Thankfully, Red Hat was publicly traded prior to the IBM acquisition, so we can delve into its financial profile by leveraging relatively recent data.
Red Hat is a predominantly subscription business – it was 88% of fiscal year (FY) 2019 revenue15, growing by 14.6% year-over-year (y-o-y) – with the rest of revenue coming from “Training and Services.” The latter category is pretty self-explanatory and is growing at a decent rate (19.3% y-o-y), evidently due to customers needing some handholding around adopting OpenShift and Ansible.
Red Hat’s subscription revenue is primarily generated by “Infrastructure-related” subscriptions, which made up 72.3% of all subscription revenue as of FY 2019. The critical driver of the “Infrastructure-related” category is almost assuredly subscriptions for Red Hat Enterprise Linux (RHEL), a Linux operating system (OS) favored by enterprises over the numerous open source Linux OSes due to the support provided with the subscription. With that said, “Infrastructure-related offerings” also includes Red Hat Satellite and Red Hat Virtualization and they don’t provide a further revenue breakdown.
The remaining 27.7% of subscription revenue is from “Application Development-related and other emerging technology subscription,” which is a ridiculous mouthful and belies the importance of the offerings for Red Hat’s growth strategy. “Application Development-related” explicitly refers to Red Hat Middleware, whose most notable offering is JBoss, an open source Java application server. Red Hat’s definition of “Emerging Technology” explicitly defines it as tools to “build and manage hybrid IT computing environments,”16 and includes Red Hat OpenShift, Red Hat Cloud Infrastructure, Red Hat OpenStack Platform, Red Hat Ansible Automation, Red Hat CloudForms and Red Hat Storage technologies.
Of those emerging technologies, OpenShift, Ansible, and OpenStack are, by my estimation, the highest sources of revenue growth. For instance, infrastructure-related subscription revenue grew 9.3% y-o-y from FY 2018 to FY 2019 – which certainly doesn’t count as “high growth” – while app dev-related &amp; other emerging tech subscription grew 30.9% y-o-y (double Red Hat’s total revenue growth of 15.1% y-o-y). And it really is the emerging tech part of that second category fomenting that higher growth.
The “emerging technology” offerings are actually cannibalizing the “application development-related” offerings17. As more of the market leverages containerized environments for app development, they want a platform like OpenShift with orchestration capabilities, leading customers to replace JBoss spend with OpenShift spend18.
We now need to dig a little deeper into OpenShift to set the stage for spelunking through IBM’s hybrid cloud-dominated spinoff strategy. OpenShift is a container orchestration platform like Kubernetes, but provides professional support services around things like updates, patches, and integrations. Both OpenShift and Kubernetes manage clusters – groups of containers – to automatically handle the operations required to keep services running smoothly, like restarting failed containers, distributing network traffic across containers, mounting storage systems, optimizing resource usage, rolling out new container images, and so forth.
How does this fit in with “hybrid cloud”? Containers package all the stuff (like libraries, dependencies, configuration files, etc.) needed for an application to run. By putting all this stuff in one package, the application is no longer dependent on specific infrastructure and can be run in different computing environments. So, a containerized application can run just as well on-prem or in a private or public cloud and on top of any Linux distribution – which means that containers are sufficiently flexible to work with whatever mix of systems an organization operates that constitute their “hybrid cloud.”
This means OpenShift is perfect for “hybrid cloud,” right? Not quite. While Kubernetes works with any Linux distro and basically any cloud platform, OpenShift only works with CentOS19, Fedora, or Red Hat Enterprise Linux Atomic Host (RHELAH). Those distros are supported by AWS, Azure, and GCP, but it still obviously constrains one’s options and is discordant with the flexibility-first ethos of “hybrid cloud.” Similarly dissonant is the fact that OpenShift’s templates, a collection of files that define the resources needed to run an app, like a package manager, are largely unable to handle more complex deployments – and complex deployments are pretty common in a “hybrid cloud” environment.
There are notable benefits of OpenShift relative to Kubernetes – like better security defaults and container image management – but the dream orchestration solution for “hybrid cloud” it is not.
IBM’s Hybrid Cloud Chimera
Now we arrive back to IBM’s grand ambitions around an unfettered IBM Hybrid Cloud business. In the “Strategic Update” presentation announcing the spinoff, IBM touts that they are “positioned for success in the hybrid cloud and AI market.”20 Given the rest of the presentation is almost exclusively focused on the hybrid cloud opportunity – for which they assign a $1 trillion total addressable market (TAM) – I suspect the “AI” portion of that proclamation is to save face regarding all the Watson investments.
Before we dissect their hybrid cloud reveries, let’s solidify our notion of “hybrid cloud” beyond buzzphrasedom. “Hybrid cloud” is an environment that uses a mix of infrastructure, like on-prem servers, private or public clouds, containers, serverless functions, etc. In the spirit of vagueness that plagues all buzzphrases, “hybrid cloud” greedily represents the spectrum between “purely on-prem and bare metal” to “purely public cloud and containers.”
IBM pretty clearly includes “multi-cloud” as part of the “hybrid cloud” opportunity given their mention of “regardless of vendor” in the press release. “Multi-cloud” refers to organizations using multiple cloud providers to support their software delivery – like using AWS S3 and EC2 with Google Compute Engine VMs with Azure Storage. As a beloved engineering VP friend quipped, “Only a masochist voluntarily goes multi-cloud.”21
So, what should we make of this $1 trillion TAM?22 As is tradition in glossy corporate investor decks, no sources are cited for that figure in IBM’s presentation. Nevertheless, their TAM calculation includes $450 billion for “Cloud Software &amp; Platforms” (addressed by Red Hat and Cloud Paks23), $300 billion for “Cloud Transformational Services” (addressed by “OpenShift Everywhere”), and $230 billion for “Cloud Infrastructure” (addressed by OpenShift on Z and Regulated Industry Clouds). Without any citations for those numbers, it’s impossible to tell whether this is truly the “hybrid cloud” opportunity or IBM performing Fantasy Math™24 with overall cloud market figures to kindle excitement among investors.
Interestingly and farcically enough, IBM’s hybrid cloud TAM has decreased by $200 billion25 from 2019 until now! The breakdown back in 2019 was $550 billion for “Services for Cloud,” $350 billion for “Cloud Software,” $150 billion for “Infrastructure,” and $100 billion for “Component tech sold to Cloud Service Providers.” If we compare to the more recent breakdown, the TAM for “Cloud Software” increased by $100 billion and added platforms to the mix, “Cloud Services” added transformational to its name and shrunk by $250 billion, “Infrastructure” grew by $80 billion, and “Component tech” disappeared or perhaps represents the $100 billion added into “Cloud Software.” As I said: this is Fantasy Math™, where facts are discouraged and hand-waving an artform.
To beleaguer the point, the TAM a still-independent Red Hat identified in 2018 for itself was $69 billion26, which is a teensy-weensy 6.9%27 of the TAM that IBM boasts for what is largely just Red Hat-rebranded-as-IBM-Cloud. Red Hat’s breakdown of that TAM28 included $18.0 billion for Middleware, $16.0 billion for Storage, $17.6 billion for Operating System, $5.8 billion for “Cloud Management (including OpenStack)”, $4.7 billion for PaaS, $4.8 billion for Virtualization, and $1.9 billion for Infrastructure Management.
Is the right lesson to learn here that the difference between a $30 billion company and a $110 billion company is held in your ability to abstract away market categories and inflate TAMs to the point of meaninglessness? Can you 3x your market cap by replacing “various analyst estimates” as the source of your TAM figures with no citations whatsoever? I’m sure we’ll see a pay-for-publication Forbes article about this inspiring finding soon.
IBM claims that a hybrid cloud approach provides 2.5x the client value of an approach with “public-only cloud structures” – without citing any source data again, naturally29. Red Hat OpenShift is called out specifically as the “hybrid cloud platform” that will deliver this “generational leap in client value.” This suggests that OpenShift is the essential element to secure success from the spinoff; unfortunately for IBM, I suspect that this element will be more like Unobtanium for them when executing on the opportunity. Even with Kubernetes’ deficiencies and the legitimate market need for a more enterprise-flavored orchestration solution, there is simply no way OpenShift can fulfill the astronomical aspirations a $1 trillion TAM imparts.
Even if we assume that OpenShift – or even Red Hat’s “emerging technology” offerings more broadly – is indeed sufficiently promising to seize the platform part of the hybrid cloud market opportunity, successful realization of those prospects rests upon the assumption of IBM not being IBM. And, well, perhaps the only thing that IBM consistently does well is being reliably IBM about things. Would it really be a surprise if the same fate befalls Red Hat that befell SoftLayer?
Undoing IBM’s calcified culture of cash-grab contracts backed by minimal engineering effort seems preposterously unlikely, despite it being an inherent necessity if they’re aiming to grab a sufficiently succulent bite of the $1 trillion TAM pie30 they’ve plated for public market investors.
The assumption of efficient execution is also incongruent with IBM’s modus operandi of Standard Oil-style vertical integration – so do we expect them to deviate from their standard, stifling strategy? Because the success of this breakup – let alone the Red Hat acquisition – hinges upon that. If IBM Hybrid Cloud cannot play nicely in other clouds and with other tools – for instance, with organizations using Kubernetes instead of OpenShift – it seems reasonable to suggest that they will fall on their ass. And this, of course, is exacerbated by OpenShift’s critical importance to IBM in the hybrid cloud Thunderdome, because it helps them capitalize on companies wanting to fluctuate between the big three providers.
What Could Have Been
If I had a time machine and eventually reached a mind-melting level of boredom (seems unlikely, but pretend with me), I could go back even a mere five years ago and give advice to those in charge of IBM’s cloud strategy, which would simply be: just be yourself! Lean into the fact that people buy you because you’re the safe choice rather than pretending like you’re going to be the face of the hybrid cloud or AI revolution.
There are plenty of organizations who want handholding, helmets, and full body armor just to tricycle their way into Baby’s First Container, and IBM can and should help them out rather than courting the sour, sulky engineer market who, like the stereotypical angsty teen, would rather, like, literally die than be seen in such dorky protective gear.
Aside from the “Bubble Boy, but cloud native” opportunity, serving the needs of regulated industries was and still is an appropriate opportunity. IBM did mention in the strategic announcement that it would pursue the “regulated industry clouds” opportunity and that feels like the (only) appropriate fit out of the $1 trillion TAM they outline. Organizations in regulated industries may have no choice but to shun public clouds, needing instead to store and process data on-prem, and IBM (primarily vis a vis OpenShift) could help them still “modernize” applications within those confines.
“It’s just a flesh wound!”
With all that said, I think we need to take a step back and look at the splitting up of the two businesses in the first place… because is it really the right move? Are there any IBM Cloud customers who are there without IBM’s I.T. services involved? My hunch is that the managed services arm is IBM’s biggest lead gen for their cloud offerings – because product quality certainly isn’t generating the fledgling interest that exists – so what will they do when that arm is severed?
Despite IBM’s cloying marketing hype around the birth of IBM 2: Hybrid Cloud Boogaloo31, IBM isn’t fully amputating its services arm – nor even fully divesting the Global Technology Services (GTS) segment that will be deemed “New Co”. Despite the spinoff announcement theatrics boasting the sloughing of low-growth “managed infrastructure services,” the cloudified new IBM will still include:
- Technology Support Services (TSS): a wing of GTS (yes, the one being spun off…) that offers support and maintenance services for IBM’s hardware and software offerings – think if you need installation or troubleshooting help
- Global Business Services: consultants help customers figure out how to build things, which gets implemented by GBS’s Systems Integration and Application Management Services arms
- Global Financing: the home of IBM Credit32, which helps customers figure out how to pay for the stuff they want (or that is being pushed on them by the consultants), including refurbished hardware33
- Systems: not a services segment, but it’s the home of IBM’s mainframes, semiconductors, power systems, and other hardware – which is also decidedly not cloud-flavored34
If we put all the pieces of the “revitalized” IBM together, we can see the following year-over-year growth profile (from 2018 to 2019)35 and proportion of IBM Hybrid Cloud revenue made up of cloud software vs. services vs. hardware:
This is hardly what one would picture when envisioning a plan to “accelerate hybrid cloud growth strategy” and reveals the fragility of IBM’s spinoff justification – because it’s difficult to claim that this is a refocused IBM dedicated to growthy hybrid cloud software offerings. As if that isn’t lame enough, this anemic portfolio is disgraceful in the context of the revenue growth profiles of real cloud companies over the same period (FY 2018 to FY 2019), like 84% at Alibaba Cloud36, 72% at Azure37, 53% at Google Cloud38, and 37% at AWS39.
It’s already plenty confusing from a strategic perspective that IBM is keeping a non-trivial amount of services for IBM Hybrid Cloud. But confounding matters further, the portion of the GTS business that IBM is severing seems like it was pretty critical in selling infrastructure solutions to customers, which, as foreshadowed earlier, was what IBM touted for ~*synergies*~ when buying Red Hat.
With that services-as-sales-engine strategy kaput, IBM’s self-described alternative for creating more opportunities for Red Hat is… &lt;checks notes&gt;40 … “strategic solutioning,” which is an awe-inspiring level of linguistic inanity41. To avoid the shame of dignifying such a vacuous and frivolous statement, I will simply say that “strategic solutioning” sounds like it begets neither strategy nor solutions.
Ultimately, it feels like IBM is both attempting to focus on Red Hat’s stuff – which is legitimately the most promising opportunity in IBM’s gargantuan portfolio – while still very much attempting to suckle at the frothy teat of customers via services the same way it always has. In the perspicacious words of Ron Swanson, “Never half-ass two things, whole-ass one thing.”42
In Conclusion
Sometimes it’s difficult to tell if an organization is purposefully bamboozling external parties or just temerariously bamboozling themselves. In the case of this rebranded IBM, perhaps they actually believe that IBM Hybrid Cloud has a future once it can shed the leaden snakeskin of (some of) the legacy IBM business – like someone freed of a parasitic partner, IBM Hybrid Cloud can finally pursue their dreams and live, laugh, love, loathe, launder (money), liquidate (assets, not people) and whatever else substitutes for introspection in those wearisome Journey to Find Oneself stories43.
If IBM leadership doesn’t actually believe that this rebranded IBM has a real future, and it is indeed a bamboozle targeted at shareholders and whatever customers remain, then it’s a huge waste of everyone’s time and also a colossal amount of money, and I wish it was socially acceptable for them to be like,
“Yep, we totally foozled execution on this cloud thing and what we have sucks. This IBM Hybrid Cloud thing is just going to be Red Hat and we’re going to let them run the show now because that’s really better for everyone involved. And you’ll probably make more money off it anyway than the continuously-burning money-filled gas crater of a business we would’ve created.”
However, I kind of feel like IBM’s best use case for the tech industry at this point is to keep milking its despondent legacy business for cash and use it towards corporate VC into startups that can execute more efficiently and effectively with the capital than a provably languid behemoth like IBM. But my even spicier take is that abandoning the IBM Hybrid Cloud endeavor entirely might engender the most net-positive utility on a societal scale.
There’s a macro point here that it’s kind of a shame there isn’t a feasible mechanism for companies to just give up because they realize continuing operations is pointless. Much like many of my ill-fated DIY projects, sometimes it’s far healthier to realize you are wholly unequipped and seriously outclassed by others’ skills44 so you can proceed to things that you can successfully execute rather than immolating more irreplaceable seconds of your life.
I’m not necessarily saying trying to make IBM Hybrid Cloud a real thing is like trying to redo your own bathroom plumbing… but in both cases, shit is likely to go wrong.
Thank you shoutouts to Halvar Flake and Ryan Petrich.
The closing price of IBM’s stock on October 7 (the day before the announcement) was $124.07. The closing price on December 11 was $124.27, which is only 0.16% higher. ↩︎
Pun very much intended. ↩︎
This pun is also very much intended. ↩︎
Frier, S. (2013, June 4). IBM to Buy Cloud-Computing Firm SoftLayer for $2 Billion. Bloomberg. https://www.bloomberg.com/news/articles/2013-06-04/ibm-to-acquire-cloud-computing-provider-softlayer-technologies ↩︎
IBM. IBM Updates Reporting Segments in 2019. https://www.ibm.com/investor/att/pdf/IBM_Updates_Reporting_Segments_March_2019.pdf ↩︎
Vellante, D. (2020, May 2). Big Blue in the cloud? IBM’s future rests on its innovation agenda. SiliconANGLE. https://siliconangle.com/2020/05/02/big-blue-cloud-ibms-future-rests-innovation-agenda/ ↩︎
Red Hat, Inc. (2019, March 25). Red Hat Reports Fourth Quarter and Fiscal Year 2019 Results [Press Release]. https://www.redhat.com/en/about/press-releases/red-hat-reports-fourth-quarter-and-fiscal-year-2019-results ↩︎
Dignan, L. (2013, January 7). Amazon’s AWS: $3.8 billion revenue in 2013, says analyst. ZDNet. https://www.zdnet.com/article/amazons-aws-3-8-billion-revenue-in-2013-says-analyst/ ↩︎
Amazon.com, Inc. (2020, January 30). Amazon.com Announces Fourth Quarter Sales up 21% to $87.4 Billion [Press Release]. https://press.aboutamazon.com/news-releases/news-release-details/amazoncom-announces-fourth-quarter-sales-21-874-billion ↩︎
Given “Cloud &amp; Cognitive Software” also includes “transaction processing platforms” and “cognitive applications” (which presumably are all the Watson things), SoftLayer’s revenue contribution is almost assuredly less than $1 billion. I suspect it is quite likely less than $650 million, which enters the territory of a sub-10% CAGR – the territory of industries like Broadcasting, Home Furnishings, Hotel &amp; Gaming, Shipbuilding &amp; Marine, etc. but certainly not software (which has an average CAGR of 30.9%). Professor Aswath Damodaran of NYU Stern helpfully hosts this page full of revenue and net income CAGRs across industries: http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html ↩︎
This is entirely shade at IBM, not the Trucking sector. The Trucking sector is vital to our economy, but is also not known for being high-growth. Thanks again to Professor Damodaran’s helpful page for these stats: http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html ↩︎
Pun intended. Source: Microsoft Corporation. (2020). Form 10-Q for the Quarter Ended September 30, 2020. https://view.officeapps.live.com/op/view.aspx?src=https://c.s-microsoft.com/en-us/CMSFiles/MSFT_FY21Q1_10Q.docx?version=e37388fe-99fe-6c5e-deb3-ae5b4fd8b16f (yes, Microsoft seems to now host their SEC filings as .docx rather than as PDFs like everyone else, because they think not enough people know about Microsoft Office???) ↩︎
Can’t stop, won’t stop with the cloud puns. ↩︎
I recalled from my earlier investment banking years that the average stock price premium is usually around 20%, and a variety of online sources seem to confirm that the typical range is 20% - 30%. For instance: https://merger.com/ma-question-dont/ ↩︎
Red Hat, Inc. (2019). Form 10-K for Fiscal Year 2019. https://www.sec.gov/ix?doc=/Archives/edgar/data/1087423/000108742319000012/rht-10kq4fy19.htm#s6FE105CDD1C05A3794E63D0DE6C598B5 ↩︎
&lt;foreshadowing intensifies&gt; ↩︎
While no doubt tinted by hindsight bias, the 2010 acquisition of Makara – which included the fledgling seeds of OpenShift – was a prescient move, given they would not need to hedge against the dwindling Middleware business for nearly a decade. ↩︎
Literal quote from Red Hat: “We believe revenue growth in our Middleware portfolio has moderated as customers shift their workloads from traditional Java deployments to containerized environments with middleware-as-a-service on OpenShift.” See citation 15 for source. ↩︎
The CentOS Project, owned by Red Hat (owned by IBM), recently announced that CentOS is being end-of-life’d (EoL’d) – so the distros that OpenShift supports will be even further restricted. CentOS Stream will take CentOS’s place, except rather than being a true replacement as a free alternative to RHEL, its new purpose will be to serve as the upstream branch of RHEL. Technically, this means CentOS 8 will be EoL before CentOS 7, presumably because IBM is confused by distributed systems and thus does not understand the importance of consistency in software, in life, in love. ↩︎
The appropriate meme for this proclamation is “Press X to doubt.” ↩︎
If you want elaboration on why I, my friend, and a good many others cringe at “multi-cloud,” I recommend Corey Quinn’s post, “Multi-Cloud is the Worst Practice.” ↩︎
Hopefully someone sent Russ Hanneman the memo that “quatro commas” is the new sexy. Or, perhaps, to keep the alliteration from “Tequila Tres Comas” consistent, Mr. Hanneman should consider “Cognac Quatre Virgules.” I am presuming Mr. Hanneman is unaware that non-English languages generally separate large numbers with a period or non-breaking space rather than a comma, given “tres comas.” ↩︎
I had no idea what Cloud Paks were and discovered one of the worst sentences written on the modern internet when I looked them up, which now, quite like the videotape in the film “The Ring”, I must share with others lest the curse take me: “IBM Cloud Pak® offerings are an integrated set of AI-infused software solutions for hybrid cloud that help you fully implement intelligent workflows in your business to accelerate digital transformation.” ↩︎
One of my guilty pleasures is referring to TAM calculations as Fantasy Math™. As a former i-banker, I can attest that TAMs are a game in which the goal is to produce as high a number as possible while preserving at least one silken microfiber of plausibility. I doubt anyone within IBM, nor any investor, actually believes this is IBM Cloud’s real TAM, but I acknowledge that humans routinely redefine the depths to which the bar of critical thinking can go. ↩︎
Page 14 of IBM’s Investor Briefing on the Red Hat deal that cites a $1.2 trillion TAM. ↩︎
Nice. ↩︎
Nice. ↩︎
Red Hat’s TAM breakdown can be found on page 8 of this “Red Hat Value Proposition” presentation: http://people.redhat.com/~duboyd/CO_RHUG/DEN/12_2016/RHT_Value_Prop_for_CORHUG.pdf ↩︎
IBM. (2020, October 8). IBM Strategic Update. https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-charts.pdf ↩︎
Given this TAM is exaggerated to a near-childish extent, I presume IBM baked their hybrid cloud market pie in an Easy-Bake Oven. ↩︎
For the boomers among you, I am referencing this meme: https://knowyourmeme.com/memes/electric-boogaloo ↩︎
I like to think of IBM Credit as the payday loans of cloud computing. In the spirit of fairness, I must note that AWS also quietly offers bespoke contract structuring. ↩︎
Global Financing also includes cursed projects, such as this purposeless blockchain: https://github.com/IBM/global-financing-blockchain ↩︎
My headcanon is now that IBM retained the Systems division so the name International Business Machines still has relevance. ↩︎
IBM. (2019). 2019 Annual Report. https://www.ibm.com/annualreport/assets/downloads/IBM_Annual_Report_2019.pdf ↩︎
Alibaba Group Holding Limited. (2019). Form 20-F. https://otp.investis.com/clients/us/alibaba/SEC/sec-show.aspx?FilingId=13476929&amp;Cik=0001577552&amp;Type=PDF&amp;hasPdf=1 ↩︎
Microsoft Corporation. (2019). Form 10-K. https://microsoft.gcs-web.com/static-files/7c96b326-33bc-4b84-8abb-7afd7a517ea3 ↩︎
Alphabet Inc. (2019). Form 10-K. https://abc.xyz/investor/static/pdf/20200204_alphabet_10K.pdf?cache=cdd6dbf ↩︎
Amazon.com, Inc. (2019). Form 10-K. https://d18rn0p25nwr6d.cloudfront.net/CIK-0001018724/4d39f579-19d8-4119-b087-ee618abf82d6.pdf ↩︎
The notes I checked were, in fact, just a single PDF of IBM’s Investor Briefing 2019: https://www.ibm.com/investor/att/pdf/ibm-2019-investor-briefing-presentation.pdf ↩︎
Not to mention that a company who infamously collaborated with Nazis should probably avoid inventing new stupid phrases that abbreviate to “SS”. ↩︎
Parks and Recreation. (2019, November 12). Ron Tells Leslie “Never Half-Ass Two Things” - Parks and Recreation [Video]. YouTube. https://www.youtube.com/watch?v=k6hZ9KdG1QU ↩︎
True introspection does not come from diving into high-calorie food or being a spiritual tourist in a foreign land, see also https://tvtropes.org/pmwiki/pmwiki.php/Main/JourneyToFindOneself ↩︎
If you wanted a nuclear take in this post, here you go: Imagine if IBM could recapture the same ability to execute that they had when they helped the Nazis execute humans. ↩︎
</description>
            <atom:content type="html"><![CDATA[<figure>
    <img src="/blog/img/ibm-hybrid-cloud/alexander-mokhov-chimera.jpg"
         alt="Mythgard. Sideshow Chimera by Alexander Mokhov"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/6aVRB5">Mythgard. Sideshow Chimera by Alexander Mokhov</a></p>
        </figcaption>
</figure>
<p><a href="https://newsroom.ibm.com/2020-10-08-IBM-To-Accelerate-Hybrid-Cloud-Growth-Strategy-And-Execute-Spin-Off-Of-Market-Leading-Managed-Infrastructure-Services-Unit">IBM recently announced</a> that it is spinning off its “IT infrastructure services” unit so that it can streamline its focus solely towards IBM’s “open hybrid cloud platform” by the end of 2021. Leveraging this aggressive assertion, I’ll be referring to this rebranded IBM as “IBM Hybrid Cloud” (vs. current IBM as just “IBM”) throughout this post for clarity.</p>
<p>I think most people – whether tech workers or investors – would agree that cloud stuff in general is far more riveting than IT services both intellectually and from a growth prospects perspective. With that said, public investors seem relatively unimpressed; while there was <a href="https://www.cnbc.com/2020/10/08/ibm-shares-surge-on-plans-to-spin-off-unit-into-separate-publicly-traded-company-.html">an initial stock price bump upon the announcement</a>, the price is back to where it was pre-announcement<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. In this post, I want to explore the answer to the question that seems to be floating around the market mindshare: what is IBM Hybrid Cloud’s future, really?</p>
<p>The best place to start this peregrination is probably to evaluate the existing IBM Cloud offering in the context of the public cloud providers. IBM, as you all are assuredly aware, is not one of the “Big Three,” the moniker given to Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure due to their outsized share of the market. Perhaps most inauspiciously, Gartner places IBM Public Cloud on their Magic Quadrant (MQ) for Cloud IaaS in the pitiable &ldquo;niche&rdquo; quadrant. To add insult to injury, IBM rests behind Oracle Cloud both in “Ability to Execute” and in “Completeness of Vision” – and IBM’s position as a languishing laggard on the MQ <a href="https://twitter.com/QuinnyPig/status/1303409576587309056">hasn&rsquo;t budged for three years</a>.</p>
<p>Thus, we arrive at our next line of inquiry: What went wrong? How did IBM totally whiff execution on cloud offerings? Could they not discern the blindingly obvious writing in the stars<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>?</p>
<hr>
<h2 id="the-bare-metal-blunder">The Bare Metal Blunder</h2>
<p>At least one ingredient in IBM&rsquo;s stew of ineptitude was (and still is) their predilection for all things Watson, which clouded<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> their judgment around the IaaS market.</p>
<p>IBM simultaneously rebuffed public cloud by insisting on pursuing the bare metal opportunity – the market for single tenant (unshared) physical servers – while also neglecting it, attempting to eke out greater profitability by decreasing engineering spend at a time when competitors were firing their proverbial money cannons at the IaaS market. IBM pursued the bare metal opportunity vis a vis their acquisition of SoftLayer in 2013 for $2 billion, which perhaps added a salty sprinkle of <a href="https://en.wikipedia.org/wiki/Sunk_cost">sunk cost fallacy</a> to their clumsy calculus.</p>
<p>Unfortunately, IBM betting on SoftLayer was like buying a racehorse which you simultaneously neglect and “transform” via bureaucracy so irresponsibly that by the day of the race, its hoofs have downgraded into flippers and the horse is “racing” so slowly that the spectators are confused whether to laugh or cry.</p>
<p>Let’s set some quantitative context around this debacle. SoftLayer reportedly generated $335 million in revenue in 2012<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> (pre-acquisition) and its revenue is now bundled into IBM Cloud’s “Infrastructure-as-a-Service” offering within the (relatively) newly defined “Cloud and Cognitive Software” segment<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>. Given the segment’s overall revenue was $4.2 billion in 2019<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup> and Red Hat’s most recent fiscal year revenue pre-acquisition (more on that deal in a bit) was $3.4 billion<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>, the maximum revenue SoftLayer could have contributed in 2019 was around $1 billion.</p>
<p><img src="/blog/img/ibm-hybrid-cloud/chart1.png" alt="A pie chart showing IBM&amp;rsquo;s Cloud and Cognitive revenue in 2019. 81% for Red Hat and 19% for SoftLayer? Watson? Underpants Gnomes?"></p>
<p>That is an impressively mediocre growth story. To wit, in 2013 (the same year as the SoftLayer acquisition), annual AWS revenues reached $3.8 billion<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> and grew to $35 billion in 2019<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup> — reflecting a healthy compound annual growth rate (CAGR) of 44.8%. IBM’s bare metal bet via SoftLayer reflects, at best, a 13.2% CAGR<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup> — only 0.2% higher than the CAGR for the Trucking industry<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup>. That is, as the kids say, a big oof.</p>
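<p>For anyone who wants to check that math themselves, here is a throwaway sketch of the standard CAGR formula using only the AWS figures cited above (the SoftLayer endpoint depends entirely on what you assume its 2019 contribution was, so it is left as an exercise in pessimism):</p>
<pre><code class="language-python"># Quick sanity check of the growth math above.
# CAGR = (ending_value / starting_value) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# AWS: $3.8bn in 2013 to $35bn in 2019, i.e. six years of compounding.
print(f"AWS CAGR: {cagr(3.8, 35, 6):.1%}")  # ~44.8%, matching the figure above
</code></pre>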
<hr>
<h2 id="red-hat-redemption">Red Hat Redemption?</h2>
<p>While IBM was off hobbling its racehorse, their repudiation of the cloud opportunity led to a power vacuum in the “trustworthy provider with reliable long-term support” domain, which Microsoft was voracious and primed to fill. This is partially why, despite chatter of Azure presenting inferior value across a variety of facets relative to AWS or GCP, sales are still quite stellar<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>.</p>
<p>Presumably sensing their nebulous<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup> prospects in the cloud market, IBM completed the acquisition of Red Hat for a staggering <a href="https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future">$34 billion</a> in summer 2019. This purchase price reflected a stock price premium of approximately 60%, which is double the typical premium<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup> and perhaps a rare moment of lucidity at IBM in recognizing that no one wants to be acquired by them.</p>
<p>The surface rationale of the deal is that IBM can cross-sell Red Hat&rsquo;s offerings to its customers, which is&hellip; quite a bit harder without the managed infrastructure services business, especially given this was the rationale highlighted in <a href="https://www.ibm.com/investor/att/pdf/ibm-2019-investor-briefing-presentation.pdf">the investor presentation on the deal</a>. But we will pull on that paradox’s thread soon enough.</p>
<p>What did IBM receive from Red Hat to cross-sell? Thankfully, Red Hat was publicly traded prior to the IBM acquisition, so we can delve into its financial profile by leveraging relatively recent data.</p>
<p>Red Hat is a predominately subscription business – it was 88% of fiscal year (FY) 2019 revenue<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup>, growing by 14.6% year-over-year (y-o-y) – with the rest of revenue coming from “Training and Services.” The latter category is pretty self-explanatory and is growing at a decent rate (19.3% y-o-y), evidently due to customers needing some handholding around adopting OpenShift and Ansible.</p>
<p>Red Hat’s subscription revenue is primarily generated by “Infrastructure-related” subscriptions, which made up 72.3% of all subscription revenue as of FY 2019. The critical driver of the “Infrastructure-related” category is almost assuredly subscriptions for Red Hat Enterprise Linux (RHEL), a Linux operating system (OS) favored by enterprises over the numerous open source Linux OSes due to the support provided with the subscription. With that said, “Infrastructure-related offerings” also includes Red Hat Satellite and Red Hat Virtualization and they don’t provide a further revenue breakdown.</p>
<p>The remaining 27.7% of subscription revenue is from “Application Development-related and other emerging technology subscription,” which is a ridiculous mouthful and belies the importance of the offerings for Red Hat’s growth strategy. “Application Development-related” explicitly refers to Red Hat Middleware, whose most notable offering is JBoss, an open source Java application server. Red Hat’s definition of “Emerging Technology” explicitly defines it as tools to “build and manage hybrid IT computing environments,”<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup> and includes Red Hat OpenShift, Red Hat Cloud Infrastructure, Red Hat OpenStack Platform, Red Hat Ansible Automation, Red Hat CloudForms and Red Hat Storage technologies.</p>
<p><img src="/blog/img/ibm-hybrid-cloud/chart2.png" alt="A pie chart showing Red Hat&amp;rsquo;s Fiscal Year 2019 revenue breakdown. 64% for infrastructure-releated subscriptions, 24% for app development and &amp;ldquo;emerging technology&amp;rdquo; subscriptions, and 12% for training and services."></p>
<p>Of those emerging technologies, OpenShift, Ansible, and OpenStack are, by my estimation, the highest sources of revenue growth. For instance, infrastructure-related subscription revenue grew 9.3% y-o-y from FY 2018 to FY 2019 – which certainly doesn’t count as “high growth” – while app dev-related &amp; other emerging tech subscription grew 30.9% y-o-y (double Red Hat&rsquo;s total revenue growth of 15.1% y-o-y). And it really is the emerging tech part of that second category fomenting that higher growth.</p>
<p>The “emerging technology” offerings are actually cannibalizing the “application development-related” offerings<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>. As more of the market leverages containerized environments for app development, they want a platform like OpenShift with orchestration capabilities, leading customers to replace JBoss spend with OpenShift spend<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup>.</p>
<p>We now need to dig a little deeper into OpenShift to set the stage for spelunking through IBM’s hybrid cloud-dominated spinoff strategy. OpenShift is a container orchestration platform built on top of Kubernetes, adding professional support around things like updates, patches, and integrations. Both OpenShift and Kubernetes manage clusters – groups of machines running containers – to automatically handle the operations required to keep services running smoothly, like restarting failed containers, distributing network traffic across containers, mounting storage systems, optimizing resource usage, rolling out new container images, and so forth.</p>
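<p>If “automatically handle the operations” feels hand-wavy, here is a deliberately toy, hypothetical sketch of the reconciliation loop at the heart of both orchestrators – compare desired state with observed state, then act. This is emphatically not real Kubernetes or OpenShift code, just the idea:</p>
<pre><code class="language-python">import time

# Toy reconciliation loop: real controllers watch the cluster API and react to
# events rather than polling dictionaries, but the desired-vs-observed idea holds.
desired = {"web": 3, "worker": 2}        # replicas we want per service

def observe(state):
    """Stand-in for asking the cluster which containers are actually healthy."""
    return dict(state)

def reconcile(observed):
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have != want:
            # e.g. a container crashed: schedule replacements to match desired state
            print(f"{service}: have {have}, want {want}, scheduling {want - have:+d}")
            observed[service] = want
    return observed

running = {"web": 2, "worker": 2}        # one web container just fell over
while running != desired:
    running = reconcile(observe(running))
    time.sleep(1)
</code></pre>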
<p>How does this fit in with “hybrid cloud”? Containers package all the stuff (like libraries, dependencies, configuration files, etc.) needed for an application to run. By putting all this stuff in one package, the application is no longer dependent on specific infrastructure and can be run in different computing environments. So, a containerized application can run just as well on-prem or in a private or public cloud and on top of any Linux distribution – which means that containers are sufficiently flexible to work with whatever mix of systems an organization operates that constitute their “hybrid cloud.”</p>
<p>This means OpenShift is perfect for “hybrid cloud,” right? Not quite. While Kubernetes works with any Linux distro and basically any cloud platform, OpenShift only works with CentOS<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup>, Fedora, or <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/introduction_to_atomic_host">Red Hat Enterprise Linux Atomic Host (RHELAH)</a>. Those distros are supported by AWS, Azure, and GCP, but the restriction still obviously constrains one’s options and is discordant with the flexibility-first ethos of “hybrid cloud.” Similarly dissonant is the fact that OpenShift’s templates (collections of files that define the resources needed to run an app, functioning a bit like a package manager) are largely unable to handle more complex deployments – and complex deployments are pretty common in a “hybrid cloud” environment.</p>
<p>There are notable benefits of OpenShift relative to Kubernetes – like better security defaults and container image management – but the dream orchestration solution for “hybrid cloud” it is not.</p>
<hr>
<h2 id="ibms-hybrid-cloud-chimera">IBM&rsquo;s Hybrid Cloud Chimera</h2>
<p>Now we arrive back to IBM’s grand ambitions around an unfettered IBM Hybrid Cloud business. In the <a href="https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-charts.pdf">“Strategic Update” presentation</a> announcing the spinoff, IBM touts that they are “positioned for success in the hybrid cloud and AI market.”<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup> Given the rest of the presentation is almost exclusively focused on the hybrid cloud opportunity – for which they assign a $1 <em>trillion</em> total addressable market (TAM) – I suspect the “AI” portion of that proclamation is to save face regarding all the Watson investments.</p>
<p>Before we dissect their hybrid cloud reveries, let’s solidify our notion of “hybrid cloud” beyond buzzphrasedom. “Hybrid cloud” is an environment that uses a mix of infrastructure, like on-prem servers, private or public clouds, containers, serverless functions, etc. In the spirit of vagueness that plagues all buzzphrases, “hybrid cloud” greedily represents the spectrum from “purely on-prem and bare metal” to “purely public cloud and containers.”</p>
<p>IBM pretty clearly includes “multi-cloud” as part of the “hybrid cloud” opportunity given their mention of &ldquo;regardless of vendor&rdquo; in <a href="https://newsroom.ibm.com/2020-10-08-IBM-To-Accelerate-Hybrid-Cloud-Growth-Strategy-And-Execute-Spin-Off-Of-Market-Leading-Managed-Infrastructure-Services-Unit">the press release</a>. “Multi-cloud” refers to organizations using multiple cloud providers to support their software delivery – like using AWS S3 and EC2 with Google Compute Engine VMs with Azure Storage. As a beloved engineering VP friend quipped, &ldquo;Only a masochist voluntarily goes multi-cloud.&rdquo;<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup></p>
<p>So, what should we make of this $1 trillion TAM?<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup> As is tradition in glossy corporate investor decks, no sources are cited for that figure in IBM’s presentation. Nevertheless, their TAM calculation includes $450 billion for “Cloud Software &amp; Platforms” (addressed by Red Hat and Cloud Paks<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup>), $300 billion for “Cloud Transformational Services” (addressed by “OpenShift Everywhere”), and $230 billion for “Cloud Infrastructure” (addressed by OpenShift on Z and Regulated Industry Clouds). Without any citations for those numbers, it’s impossible to tell whether this is truly the “<em>hybrid</em> cloud” opportunity or IBM performing Fantasy Math™<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup> with overall cloud market figures to kindle excitement among investors.</p>
<p>Interestingly and farcically enough, IBM’s hybrid cloud TAM has decreased by $200 billion<sup id="fnref:25"><a href="#fn:25" class="footnote-ref" role="doc-noteref">25</a></sup> from 2019 until now! The breakdown back in 2019 was $550 billion for “Services for Cloud,” $350 billion for “Cloud Software,” $150 billion for “Infrastructure,” and $100 billion for “Component tech sold to Cloud Service Providers.” If we compare to the more recent breakdown, the TAM for “Cloud Software” increased by $100 billion and added <em>platforms</em> to the mix, “Cloud Services” added <em>transformational</em> to its name and shrunk by $250 billion, “Infrastructure” grew by $80 billion, and “Component tech” disappeared or perhaps represents the $100 billion added into “Cloud Software.” As I said: this is Fantasy Math™, where facts are discouraged and hand-waving an artform.</p>
<p>To beleaguer the point, the TAM a still-independent Red Hat identified in 2018 for itself was $69 billion<sup id="fnref:26"><a href="#fn:26" class="footnote-ref" role="doc-noteref">26</a></sup>, which is a teensy-weensy 6.9%<sup id="fnref:27"><a href="#fn:27" class="footnote-ref" role="doc-noteref">27</a></sup> of the TAM that IBM boasts for what is largely just Red Hat-rebranded-as-IBM-Cloud. Red Hat’s breakdown of that TAM<sup id="fnref:28"><a href="#fn:28" class="footnote-ref" role="doc-noteref">28</a></sup> included $18.0 billion for Middleware, $16.0 billion for Storage, $17.6 billion for Operating System, $5.8 billion for “Cloud Management (including OpenStack)”, $4.7 billion for PaaS, $4.8 billion for Virtualization, and $1.9 billion for Infrastructure Management.</p>
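<p>None of this requires a spreadsheet to verify – a few lines of throwaway arithmetic, using only the figures cited above, shows what the buckets actually sum to:</p>
<pre><code class="language-python"># Summing the TAM buckets cited above (all figures in $ billions).
tam_2020 = {"Cloud Software &amp; Platforms": 450,
            "Cloud Transformational Services": 300,
            "Cloud Infrastructure": 230}
tam_2019 = {"Services for Cloud": 550, "Cloud Software": 350,
            "Infrastructure": 150, "Component tech": 100}
redhat_2018 = {"Middleware": 18.0, "Storage": 16.0, "Operating System": 17.6,
               "Cloud Management": 5.8, "PaaS": 4.7, "Virtualization": 4.8,
               "Infrastructure Management": 1.9}

print(sum(tam_2020.values()))             # 980  -- rounds up to the $1 trillion headline
print(sum(tam_2019.values()))             # 1150 -- vs. the $1.2 trillion headline in 2019
print(round(sum(redhat_2018.values()), 1))  # 68.8 -- Red Hat's own ~$69bn TAM
print(f"{69 / 1000:.1%}")                 # 6.9% of IBM's trillion-dollar figure
</code></pre>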
<p>Is the right lesson to learn here that the difference between a $30 billion company and a $110 billion company is held in your ability to abstract away market categories and inflate TAMs to the point of meaninglessness? Can you 3x your market cap by replacing “various analyst estimates” as the source of your TAM figures with no citations whatsoever? I’m sure we’ll see a pay-for-publication Forbes article about this inspiring finding soon.</p>
<p>IBM claims that a hybrid cloud approach provides 2.5x the client value of an approach with “public-only cloud structures” – without citing any source data again, naturally<sup id="fnref:29"><a href="#fn:29" class="footnote-ref" role="doc-noteref">29</a></sup>. Red Hat OpenShift is called out specifically as the “hybrid cloud platform” that will deliver this “generational leap in client value.” This suggests that OpenShift is the essential element to secure success from the spinoff; unfortunately for IBM, I suspect that this element will be more like Unobtanium for them when executing on the opportunity. Even with Kubernetes’ deficiencies and the legitimate market need for a more enterprise-flavored orchestration solution, there is simply no way OpenShift can fulfill the astronomical aspirations a $1 trillion TAM imparts.</p>
<p>Even if we assume that OpenShift – or even Red Hat’s “emerging technology” offerings more broadly – is indeed sufficiently promising to seize the platform part of the hybrid cloud market opportunity, successful realization of those prospects rests upon the assumption of IBM not being IBM. And, well, perhaps the only thing that IBM consistently does well is being reliably IBM about things. Would it really be a surprise if the same fate befalls Red Hat that befell SoftLayer?</p>
<p>Undoing IBM’s calcified culture of cash-grab contracts backed by minimal engineering effort seems preposterously unlikely, despite it being an inherent necessity if they’re aiming to grab a sufficiently succulent bite of the $1 trillion TAM pie<sup id="fnref:30"><a href="#fn:30" class="footnote-ref" role="doc-noteref">30</a></sup> <a href="https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-charts.pdf">they&rsquo;ve plated</a> for public market investors.</p>
<p>The assumption of efficient execution is also incongruent with IBM’s modus operandi of Standard Oil-style vertical integration – so do we expect them to deviate from their standard, stifling strategy? Because the success of this breakup – let alone the Red Hat acquisition – hinges upon that. If IBM Hybrid Cloud cannot play nicely in other clouds and with other tools – for instance, with organizations using Kubernetes instead of OpenShift – it seems reasonable to suggest that they will fall on their ass. And this, of course, is exacerbated by OpenShift&rsquo;s critical importance to IBM in the hybrid cloud Thunderdome, because it helps them capitalize on companies wanting to fluctuate between the big three providers.</p>
<hr>
<h2 id="what-could-have-been">What Could Have Been</h2>
<p>If I had a time machine and eventually reached a mind-melting level of boredom (seems unlikely, but pretend with me), I could go back even a mere five years ago and give advice to those in charge of IBM&rsquo;s cloud strategy, which would simply be: just be yourself! Lean into the fact that people buy you because you’re the safe choice rather than pretending like you’re going to be the face of the hybrid cloud or AI revolution.</p>
<p>There are plenty of organizations who want handholding, helmets, and full body armor just to tricycle their way into Baby’s First Container, and IBM can and should help them out rather than courting the sour, sulky engineer market who, like the stereotypical angsty teen, would rather, like, literally die than be seen in such dorky protective gear.</p>
<p>Aside from the “Bubble Boy, but cloud native” opportunity, serving the needs of regulated industries was and still is an appropriate opportunity. IBM did mention in the strategic announcement that it would pursue the “regulated industry clouds” opportunity and that feels like the (only) appropriate fit out of the $1 trillion TAM they outline. Organizations in regulated industries may have no choice but to shun public clouds, needing instead to store and process data on-prem, and IBM (primarily vis a vis OpenShift) could help them still “modernize” applications within those confines.</p>
<hr>
<h2 id="its-just-a-flesh-wound">&ldquo;It&rsquo;s just a flesh wound!&rdquo;</h2>
<p>With all that said, I think we need to take a step back and look at the splitting up of the two businesses in the first place… because is it really the right move? Are there any IBM Cloud customers who are there without IBM’s I.T. services involved? My hunch is that the managed services arm is IBM’s biggest lead gen for their cloud offerings – because product quality certainly isn’t generating the fledgling interest that exists – so what will they do when that arm is severed?</p>
<p>Despite IBM’s cloying marketing hype around the birth of IBM 2: Hybrid Cloud Boogaloo<sup id="fnref:31"><a href="#fn:31" class="footnote-ref" role="doc-noteref">31</a></sup>, IBM <em>isn’t</em> fully amputating its services arm – nor even fully divesting the Global Technology Services (GTS) segment that will be deemed “New Co”. Despite the spinoff announcement theatrics boasting the sloughing of low-growth “managed infrastructure services,” the cloudified new IBM will still include:</p>
<ul>
<li>Technology Support Services (TSS): a wing of GTS (yes, the one being spun off…) that offers support and maintenance services for IBM’s hardware and software offerings – think if you need installation or troubleshooting help</li>
<li>Global Business Services: consultants help customers figure out how to build things, which gets implemented by GBS’s Systems Integration and Application Management Services arms</li>
<li>Global Financing: the home of IBM Credit<sup id="fnref:32"><a href="#fn:32" class="footnote-ref" role="doc-noteref">32</a></sup>, which helps customers figure out how to pay for the stuff they want (or that is being pushed on them by the consultants), including refurbished hardware<sup id="fnref:33"><a href="#fn:33" class="footnote-ref" role="doc-noteref">33</a></sup></li>
<li>Systems: not a services segment, but it’s the home of IBM’s mainframes, semiconductors, power systems, and other hardware – which is also decidedly not cloud-flavored<sup id="fnref:34"><a href="#fn:34" class="footnote-ref" role="doc-noteref">34</a></sup></li>
</ul>
<p>If we put all the pieces of the “revitalized” IBM together, we can see the following year-over-year growth profile (from 2018 to 2019)<sup id="fnref:35"><a href="#fn:35" class="footnote-ref" role="doc-noteref">35</a></sup> and proportion of IBM Hybrid Cloud revenue made up of cloud software vs. services vs. hardware:</p>
<p><img src="/blog/img/ibm-hybrid-cloud/chart3.png" alt="A bar chart showing the rebranded IBM&amp;rsquo;s FY 2018 to 2019 revenue growth, in billions. Cloud &amp; Cognitive Software grew 4.5% year over year. Global Business Services grew 0.2% year over year. Systems grew negative 5.4% year over year. TSS grew negative 4.8% year over year. Global Financing grew negative 11.9% year over year."></p>
<p><img src="/blog/img/ibm-hybrid-cloud/chart4.png" alt="A pie chart showing the rebranded IBM&amp;rsquo;s FY 2019 revenue breakdown. 46% for Snoozville Services, 42% for Cloud and Cognitive, 12% for Old-skool Hardware."></p>
<p>This is hardly what one would picture when envisioning a plan to “accelerate hybrid cloud growth strategy” and reveals the fragility of IBM’s spinoff justification – because it’s difficult to claim that this is a refocused IBM dedicated to growthy hybrid cloud software offerings. As if that isn’t lame enough, this anemic portfolio is disgraceful in the context of the revenue growth profiles of real cloud companies over the same period (FY 2018 to FY 2019), like 84% at Alibaba Cloud<sup id="fnref:36"><a href="#fn:36" class="footnote-ref" role="doc-noteref">36</a></sup>, 72% at Azure<sup id="fnref:37"><a href="#fn:37" class="footnote-ref" role="doc-noteref">37</a></sup>, 53% at Google Cloud<sup id="fnref:38"><a href="#fn:38" class="footnote-ref" role="doc-noteref">38</a></sup>, and 37% at AWS<sup id="fnref:39"><a href="#fn:39" class="footnote-ref" role="doc-noteref">39</a></sup>.</p>
<p>It’s already plenty confusing from a strategic perspective that IBM is keeping a non-trivial amount of services for IBM Hybrid Cloud. But confounding matters further, the portion of the GTS business that IBM <em>is</em> severing seems like it was pretty critical in selling infrastructure solutions to customers, which, as foreshadowed earlier, was what IBM touted for ~*synergies*~ when buying Red Hat.</p>
<p>With that services-as-sales-engine strategy kaput, IBM’s self-described alternative for creating more opportunities for Red Hat is&hellip; &lt;checks notes&gt;<sup id="fnref:40"><a href="#fn:40" class="footnote-ref" role="doc-noteref">40</a></sup> &hellip; “strategic solutioning,” which is an awe-inspiring level of linguistic inanity<sup id="fnref:41"><a href="#fn:41" class="footnote-ref" role="doc-noteref">41</a></sup>. To avoid the shame of dignifying such a vacuous and frivolous statement, I will simply say that “strategic solutioning” sounds like it begets neither strategy nor solutions.</p>
<p>Ultimately, it feels like IBM is both attempting to focus on Red Hat’s stuff – which is legitimately the most promising opportunity in IBM’s gargantuan portfolio – while still very much attempting to suckle at the frothy teat of customers via services the same way it always has. In the perspicacious words of Ron Swanson, “Never half-ass two things, whole-ass one thing.”<sup id="fnref:42"><a href="#fn:42" class="footnote-ref" role="doc-noteref">42</a></sup></p>
<hr>
<h2 id="in-conclusion">In Conclusion</h2>
<p>Sometimes it’s difficult to tell if an organization is purposefully bamboozling external parties or just temerariously bamboozling themselves. In the case of this rebranded IBM, perhaps they actually believe that IBM Hybrid Cloud has a future once it can shed the leaden snakeskin of (some of) the legacy IBM business – like someone freed of a parasitic partner, IBM Hybrid Cloud can finally pursue their dreams and live, laugh, love, loathe, launder (money), liquidate (assets, not people) and whatever else substitutes for introspection in those wearisome Journey to Find Oneself stories<sup id="fnref:43"><a href="#fn:43" class="footnote-ref" role="doc-noteref">43</a></sup>.</p>
<p>If IBM leadership doesn’t actually believe that this rebranded IBM has a real future, and it is indeed a bamboozle targeted at shareholders and whatever customers remain, then it’s a huge waste of everyone’s time and also a colossal amount of money, and I wish it was socially acceptable for them to be like,</p>
<blockquote>
<p>“Yep, we totally foozled execution on this cloud thing and what we have sucks. This IBM Hybrid Cloud thing is just going to be Red Hat and we’re going to let them run the show now because that’s really better for everyone involved. And you’ll probably make more money off it anyway than the continuously-burning money-filled <a href="https://en.wikipedia.org/wiki/Darvaza_gas_crater">gas crater</a> of a business we would’ve created.”</p>
</blockquote>
<p>However, I kind of feel like IBM’s best use case for the tech industry at this point is to keep milking its despondent legacy business for cash and use it towards corporate VC into startups that can execute more efficiently and effectively with the capital than a provably languid behemoth like IBM. But my even spicier take is that abandoning the IBM Hybrid Cloud endeavor entirely might engender the most net-positive utility on a societal scale.</p>
<p>There’s a macro point here that it’s kind of a shame there isn’t a feasible mechanism for companies to just give up because they realize continuing operations is pointless. Much like many of my ill-fated DIY projects, sometimes it’s far healthier to realize you are wholly unequipped and seriously outclassed by others’ skills<sup id="fnref:44"><a href="#fn:44" class="footnote-ref" role="doc-noteref">44</a></sup> so you can proceed to things that you can successfully execute rather than immolating more irreplaceable seconds of your life.</p>
<p>I’m not necessarily saying trying to make IBM Hybrid Cloud a real thing is like trying to redo your own bathroom plumbing&hellip; but in both cases, shit is likely to go wrong.</p>
<hr>
<p>Thank you shoutouts to Halvar Flake and Ryan Petrich.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>The closing price of IBM’s stock on October 7 (the day before the announcement) was $124.07. The closing price on December 11 was $124.27, which is only 0.16% higher.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Pun very much intended.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>This pun is also very much intended.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Frier, S. (2013, June 4). IBM to Buy Cloud-Computing Firm SoftLayer for $2 Billion. <em>Bloomberg</em>. <a href="https://www.bloomberg.com/news/articles/2013-06-04/ibm-to-acquire-cloud-computing-provider-softlayer-technologies">https://www.bloomberg.com/news/articles/2013-06-04/ibm-to-acquire-cloud-computing-provider-softlayer-technologies</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>IBM. <em>IBM Updates Reporting Segments in 2019</em>. <a href="https://www.ibm.com/investor/att/pdf/IBM_Updates_Reporting_Segments_March_2019.pdf">https://www.ibm.com/investor/att/pdf/IBM_Updates_Reporting_Segments_March_2019.pdf</a>&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Vellante, D. (2020, May 2). Big Blue in the cloud? IBM’s future rests on its innovation agenda. <em>SiliconANGLE</em>. <a href="https://siliconangle.com/2020/05/02/big-blue-cloud-ibms-future-rests-innovation-agenda/">https://siliconangle.com/2020/05/02/big-blue-cloud-ibms-future-rests-innovation-agenda/</a>&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Red Hat, Inc. (2019, March 25). <em>Red Hat Reports Fourth Quarter and Fiscal Year 2019 Results</em> [Press Release]. <a href="https://www.redhat.com/en/about/press-releases/red-hat-reports-fourth-quarter-and-fiscal-year-2019-results">https://www.redhat.com/en/about/press-releases/red-hat-reports-fourth-quarter-and-fiscal-year-2019-results</a>&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>Dignan, L. (2013, January 7). Amazon&rsquo;s AWS: $3.8 billion revenue in 2013, says analyst. <em>ZDNet</em>. <a href="https://www.zdnet.com/article/amazons-aws-3-8-billion-revenue-in-2013-says-analyst/">https://www.zdnet.com/article/amazons-aws-3-8-billion-revenue-in-2013-says-analyst/</a>&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>Amazon.com, Inc. (2020, January 30). <em>Amazon.com Announces Fourth Quarter Sales up 21% to $87.4 Billion</em> [Press Release]. <a href="https://press.aboutamazon.com/news-releases/news-release-details/amazoncom-announces-fourth-quarter-sales-21-874-billion">https://press.aboutamazon.com/news-releases/news-release-details/amazoncom-announces-fourth-quarter-sales-21-874-billion</a>&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Given “Cloud &amp; Cognitive Software” also includes “transaction processing platforms” and “cognitive applications” (which presumably are all the Watson things), SoftLayer’s revenue contribution is almost assuredly less than $1 billion. I suspect it is quite likely less than $650 million, which enters the territory of a sub-10% CAGR – the territory of industries like Broadcasting, Home Furnishings, Hotel &amp; Gaming, Shipbuilding &amp; Marine, etc. but certainly not software (which has an average CAGR of 30.9%). Professor Aswath Damodaran of NYU Stern helpfully hosts this page full of revenue and net income CAGRs across industries: <a href="http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html">http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html</a>&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>This is entirely shade at IBM, not the Trucking sector. The Trucking sector is vital to our economy, but is also not known for being high-growth. Thanks again to Professor Damodaran’s helpful page for these stats: <a href="http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html">http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histgr.html</a>&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>Pun intended. Source: Microsoft Corporation. (2020). <em>Form 10-Q for the Quarter Ended September 30, 2020</em>. <a href="https://view.officeapps.live.com/op/view.aspx?src=https://c.s-microsoft.com/en-us/CMSFiles/MSFT_FY21Q1_10Q.docx?version=e37388fe-99fe-6c5e-deb3-ae5b4fd8b16f">https://view.officeapps.live.com/op/view.aspx?src=https://c.s-microsoft.com/en-us/CMSFiles/MSFT_FY21Q1_10Q.docx?version=e37388fe-99fe-6c5e-deb3-ae5b4fd8b16f</a> (yes, Microsoft seems to now host their SEC filings as .docx rather than as PDFs like everyone else, because they think not enough people know about Microsoft Office???)&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>Can’t stop, won’t stop with the cloud puns.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>I recalled from my earlier investment banking years that the average stock price premium is usually around 20%, and a variety of online sources seem to confirm that the typical range is 20% - 30%. For instance: <a href="https://merger.com/ma-question-dont/">https://merger.com/ma-question-dont/</a>&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>Red Hat, Inc. (2019). <em>Form 10-K for Fiscal Year 2019</em>. <a href="https://www.sec.gov/ix?doc=/Archives/edgar/data/1087423/000108742319000012/rht-10kq4fy19.htm#s6FE105CDD1C05A3794E63D0DE6C598B5">https://www.sec.gov/ix?doc=/Archives/edgar/data/1087423/000108742319000012/rht-10kq4fy19.htm#s6FE105CDD1C05A3794E63D0DE6C598B5</a>&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>&lt;foreshadowing intensifies&gt;&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>While this assessment is no doubt tinted by hindsight bias, the 2010 acquisition of Makara – which included the fledgling seeds of OpenShift – was a prescient move, given the need to hedge against the dwindling Middleware business would not arise for nearly a decade.&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>Literal quote from Red Hat: “We believe revenue growth in our Middleware portfolio has moderated as customers shift their workloads from traditional Java deployments to containerized environments with middleware-as-a-service on OpenShift.” See citation 15 for source.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>The CentOS Project, owned by Red Hat (owned by IBM), <a href="https://blog.centos.org/2020/12/future-is-centos-stream/">recently announced</a> that CentOS is being end-of-life’d <a href="https://en.wikipedia.org/wiki/End-of-life_product">(EoL’d)</a> – so the distros that OpenShift supports will be even further restricted. CentOS Stream will take CentOS’s place, except rather than being a true replacement as a free alternative to RHEL, its new purpose will be to serve as the upstream branch of RHEL. Technically, this means CentOS 8 will be EoL before CentOS 7, presumably because IBM is confused by distributed systems and thus does not understand the importance of consistency in software, in life, in love.&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>The appropriate meme for this proclamation is <a href="https://knowyourmeme.com/memes/la-noire-doubt-press-x-to-doubt">“Press X to doubt.”</a>&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>If you want elaboration on why I, my friend, and a good many others cringe at “multi-cloud,” I recommend Corey Quinn’s post, <a href="https://www.lastweekinaws.com/blog/multi-cloud-is-the-worst-practice/">“Multi-Cloud is the Worst Practice.”</a>&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>Hopefully someone sent Russ Hanneman the memo that “quatro commas” is the new sexy. Or, perhaps, to keep the alliteration from “Tequila Tres Comas” consistent, Mr. Hanneman should consider “Cognac Quatre Virgules.” I am presuming Mr. Hanneman is unaware that non-English languages generally separate large numbers with a period or non-breaking space rather than a comma, given “tres comas.”&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>I had no idea what Cloud Paks were and discovered one of the worst sentences written on the modern internet when I <a href="https://www.ibm.com/cloud/paks">looked them up</a>, which now, quite like the videotape in the film &ldquo;The Ring&rdquo;, I must share with others lest the curse take me: “IBM Cloud Pak® offerings are an integrated set of AI-infused software solutions for hybrid cloud that help you fully implement intelligent workflows in your business to accelerate digital transformation.”&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>One of my guilty pleasures is referring to TAM calculations as Fantasy Math™. As a former i-banker, I can attest that TAMs are a game in which the goal is to produce as high a number as possible while preserving at least one silken microfiber of plausibility. I doubt anyone within IBM, or any investor, actually believes this is IBM Cloud’s real TAM, but I acknowledge that humans routinely redefine the depths to which the bar of critical thinking can go.&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:25">
<p>Page 14 of <a href="https://www.ibm.com/investor/att/pdf/ibm-2019-investor-briefing-presentation.pdf">IBM’s Investor Briefing</a> on the Red Hat deal that cites a $1.2 trillion TAM.&#160;<a href="#fnref:25" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:26">
<p>Nice.&#160;<a href="#fnref:26" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:27">
<p>Nice.&#160;<a href="#fnref:27" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:28">
<p>Red Hat’s TAM breakdown can be found on page 8 of this “Red Hat Value Proposition” presentation: <a href="http://people.redhat.com/~duboyd/CO_RHUG/DEN/12_2016/RHT_Value_Prop_for_CORHUG.pdf">http://people.redhat.com/~duboyd/CO_RHUG/DEN/12_2016/RHT_Value_Prop_for_CORHUG.pdf</a>&#160;<a href="#fnref:28" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:29">
<p>IBM. (2020, October 8). <em>IBM Strategic Update</em>. <a href="https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-charts.pdf">https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-charts.pdf</a>&#160;<a href="#fnref:29" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:30">
<p>Given this TAM is exaggerated to a near-childish extent, I presume IBM baked their hybrid cloud market pie in an Easy-Bake Oven.&#160;<a href="#fnref:30" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:31">
<p>For the boomers among you, I am referencing this meme: <a href="https://knowyourmeme.com/memes/electric-boogaloo">https://knowyourmeme.com/memes/electric-boogaloo</a>&#160;<a href="#fnref:31" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:32">
<p>I like to think of IBM Credit as the payday loans of cloud computing. In the spirit of fairness, I must note that AWS also quietly offers <a href="https://podcasts.google.com/?feed=aHR0cHM6Ly9mZWVkcy50cmFuc2lzdG9yLmZtL3NjcmVhbWluZy1pbi10aGUtY2xvdWQ&amp;ep=14&amp;episode=NTgzNjhhMzQtM2FkNi00ODMwLTk4NTEtMTc3NzJlOTMxMjIw&amp;pe=1&amp;pep=0">bespoke contract structuring</a>.&#160;<a href="#fnref:32" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:33">
<p>Global Financing also includes cursed projects, such as this purposeless blockchain: <a href="https://github.com/IBM/global-financing-blockchain">https://github.com/IBM/global-financing-blockchain</a>&#160;<a href="#fnref:33" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:34">
<p>My headcanon is now that IBM retained the Systems division so the name International Business <em>Machines</em> still has relevance.&#160;<a href="#fnref:34" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:35">
<p>IBM. (2019). <em>2019 Annual Report</em>. <a href="https://www.ibm.com/annualreport/assets/downloads/IBM_Annual_Report_2019.pdf">https://www.ibm.com/annualreport/assets/downloads/IBM_Annual_Report_2019.pdf</a>&#160;<a href="#fnref:35" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:36">
<p>Alibaba Group Holding Limited. (2019). <em>Form 20-F</em>. <a href="https://otp.investis.com/clients/us/alibaba/SEC/sec-show.aspx?FilingId=13476929&amp;Cik=0001577552&amp;Type=PDF&amp;hasPdf=1">https://otp.investis.com/clients/us/alibaba/SEC/sec-show.aspx?FilingId=13476929&amp;Cik=0001577552&amp;Type=PDF&amp;hasPdf=1</a>&#160;<a href="#fnref:36" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:37">
<p>Microsoft Corporation. (2019). <em>Form 10-K</em>. <a href="https://microsoft.gcs-web.com/static-files/7c96b326-33bc-4b84-8abb-7afd7a517ea3">https://microsoft.gcs-web.com/static-files/7c96b326-33bc-4b84-8abb-7afd7a517ea3</a>&#160;<a href="#fnref:37" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:38">
<p>Alphabet Inc. (2019). <em>Form 10-K</em>. <a href="https://abc.xyz/investor/static/pdf/20200204_alphabet_10K.pdf?cache=cdd6dbf">https://abc.xyz/investor/static/pdf/20200204_alphabet_10K.pdf?cache=cdd6dbf</a>&#160;<a href="#fnref:38" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:39">
<p>Amazon.com, Inc. (2019). <em>Form 10-K</em>. <a href="https://d18rn0p25nwr6d.cloudfront.net/CIK-0001018724/4d39f579-19d8-4119-b087-ee618abf82d6.pdf">https://d18rn0p25nwr6d.cloudfront.net/CIK-0001018724/4d39f579-19d8-4119-b087-ee618abf82d6.pdf</a>&#160;<a href="#fnref:39" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:40">
<p>The notes I checked were, in fact, just a single PDF of IBM’s <em>Investor Briefing 2019</em>: <a href="https://www.ibm.com/investor/att/pdf/ibm-2019-investor-briefing-presentation.pdf">https://www.ibm.com/investor/att/pdf/ibm-2019-investor-briefing-presentation.pdf</a>&#160;<a href="#fnref:40" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:41">
<p>Not to mention that a company who infamously collaborated with Nazis should probably avoid inventing new stupid phrases that abbreviate to “SS”.&#160;<a href="#fnref:41" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:42">
<p>Parks and Recreation. (2019, November 12). <em>Ron Tells Leslie &ldquo;Never Half-Ass Two Things&rdquo; - Parks and Recreation</em> [Video]. YouTube. <a href="https://www.youtube.com/watch?v=k6hZ9KdG1QU">https://www.youtube.com/watch?v=k6hZ9KdG1QU</a>&#160;<a href="#fnref:42" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:43">
<p>True introspection does not come from diving into high-calorie food or being a spiritual tourist in a foreign land, see also <a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/JourneyToFindOneself">https://tvtropes.org/pmwiki/pmwiki.php/Main/JourneyToFindOneself</a>&#160;<a href="#fnref:43" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:44">
<p>If you wanted a nuclear take in this post, here you go: Imagine if IBM could recapture the same ability to execute that they had when they helped the Nazis execute humans.&#160;<a href="#fnref:44" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>On YOLOsec and FOMOsec</title>
            <link>https://kellyshortridge.com/blog/posts/on-yolosec-and-fomosec/</link>
            <pubDate>Tue, 22 Sep 2020 08:00:41 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/on-yolosec-and-fomosec/</guid>
            <description>Possibly my finest contribution to the infosec industry is introducing the concept of #yolosec, first discussed when I introduced decision trees as a threat modelling device in my Black Hat talk back in 20171. Not one to let a solid shitpost go to waste2, I want to expand and expound on that concept, and introduce its ideological opposite: #fomosec. Most security efforts are hilariously inefficient, but only one end of the spectrum (#yolosec) is typically called out. That changes today.
This post will explore why both YOLO security (YOLOsec) and FOMO security (FOMOsec) are pernicious disservices to infosec defense and how you can spot them so that you may yeet them from your organization’s security strategy.
The tl;dr is that #yolosec and #fomosec are disconnected from the goals and needs of the business, forsaking pragmatism and prudence in favor of fanatical flavors of recklessness. YOLOsec reflects a security strategy driven by a “you only live once” mentality – one that emboldens people to ignore future concerns around security to achieve today’s gratification. FOMOsec reflects a security strategy driven by a fear of missing out – one that frightens people into misallocating resources towards what makes them feel better about their security efforts.
If you imagine your organization as a sea-faring vessel, infosec’s goal is to ensure the boat can survive krakens or cannon-wielding pirates and successfully complete its journey. If you ignore the existence of sea terrors (#yolosec), you may not make it to your destination unless Poseidon grants you merciful passage. If you prioritize defense above your vessel’s mission (#fomosec), you will find yourself aboard a battleship that is entirely inadequate for transporting revenue-generating cargo.
Kraken by Russell Marks
First, does security matter? Before we dig into defining #yolosec and #fomosec, I want to establish the appropriate context for these concepts. The potential peril inherent in these two “strategic” approaches rests on understanding security’s relevance to private-sector organizations3. The not-so-dirty and not-so-secret dirty secret is that information security does not matter nearly as much as the infosec industry proselytizes. In the grand scheme of business risks, it is solidly in the bottom half, if not the bottom quartile4.
Your organization is far more concerned with attracting and retaining customers, successfully competing in an evolving market, macroeconomic factors relevant to their industry (especially right now, amid the COVID-19 slowdown), operational interruptions and downtime, commodity price fluctuations, failure to maintain brand image and public perception, inadequate financial forecasting, changes in product mix impacting profitability, maintaining relationships with supply chain partners, impacts of seasonality, their litigation and regulatory risk profile, changes in international trade relations, climate change impacts, exchange and interest rate fluctuations, inability to access external financing, ability to anticipate consumer preferences, and, well, you hopefully get the idea by now5.
As far as tech stuff goes, organizations are primarily concerned about the interruption or inadequacy of IT systems, since those systems power ongoing business operations and, to varying degrees, fuel their revenue growth. To the ops readers among you, congrats, you are critical to modern business operations across most industries! To my infosec readers, I am certainly not saying you are unimportant, but, rather, that it is vital to be self-aware of one’s influence on the reality around you6.
With that said, infosec is not completely useless7. Attackers can absolutely cause operational interruption and downtime, most obviously through things like DDoS attacks, ransomware, or overloading cloud compute to eke out computercoins. Beyond those examples, security incidents in general necessitate recovery and response efforts which require money, time, and, frequently, system downtime (so money, money, and frequently money). There is limited evidence that security incidents lead to damaged public perception8 or public market valuation9.
Therefore, one can consider infosec important to organizations insofar as it either: 1) minimizes negative operational impact engendered by attacker actions, or 2) enhances qualities that improve business operations, such as the speed, stability, or scale of IT systems. To be clear, there is scant evidence of infosec achieving this second axiom in any meaningful fashion. Security teams encouraging the adoption of standardized APIs or base images is perhaps the only strongly justifiable example. Nevertheless, it remains an equally important, albeit contemporarily theoretical, justification for infosec’s relevancy to businesses.
Now we can explore #yolosec and #fomosec and why their manifestations are so magnificently monstrous.
What is YOLOsec? Parkour by Mile Mićić
YOLO is an acronym for “You Only Live Once,” the modern carpe diem and mostly ironic millennial catchphrase meant to express the unbridled living of life to its fullest in the present, believing the current moment to be vital and unique10, with scant regard to the future. YOLO-driven actions tend to manifest as risk-seeking activities, such as skydiving or, in the case of Napoleon Bonaparte’s Hundred Days, sneaking into France during exile, ripping your coat open and daring your former troops to shoot you, marching on Paris with said troops, reclaiming your title as emperor, engaging in a war against Europe’s major powers, losing at Waterloo, and returning to exile.
YOLOsec, and my irony-flavored hashtag #yolosec, is a term meant to describe a security strategy that embodies the “you only live once” mentality. A yolosec strategy says, “Setting our S3 bucket full of customer data to public will let us deploy our service faster, what could go wrong?” YOLOsec whispers sweet deceits in your ear, telling you that basic security countermeasures like privilege separation and access control are tomorrow problems – knowing full well that tomorrow will distend into months or years. And this temptation can metastasize across your systems and organization.
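To make the S3 flavor of #yolosec concrete, here is a minimal sketch – assuming boto3 and a hypothetical bucket name, not any particular organization’s tooling – of the check that the “tomorrow problem” framing conveniently skips; it takes roughly a dozen lines to ask whether a bucket even has a public access block configured:
import boto3
from botocore.exceptions import ClientError

def bucket_allows_public_access(bucket_name: str) -> bool:
    """Return True when the bucket lacks a fully restrictive public access block."""
    s3 = boto3.client("s3")
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)
        settings = config["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return True  # nothing configured at all, which is #yolosec by default
        raise
    # All four block settings must be True for the bucket to be locked down.
    return not all(settings.values())

if bucket_allows_public_access("hypothetical-customer-data-bucket"):
    print("Maybe not a tomorrow problem after all.")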
#yolosec is rarely instigated by a refutation of security’s importance; its wellspring is often found in an arguably myopic attention to specific business goals that are more easily or quickly achieved by ignoring or dismissing security considerations11. True to Hanlon’s razor, #yolosec is almost assuredly due to incompetence rather than malice.
For instance, developers are not specifically aiming to write code so riddled with bugs that swamps are jealous, nor are they storing API keys in plaintext as an expression of their love for hackers – although both constitute #yolosec. Or, in organizations with high turnover, fresh engineering teams may barely understand how a legacy system works, rendering the exercise of upgrading or migrating it from its current insecure conditions clearly intimidating12.
It is thus understandable, albeit undesirable, that the default state of engineering teams is to overlook or neglect infosec concerns when performing their work. This is rarely due to succumbing to temptation, but simply due to the dearth of pragmatic security wisdom among engineering teams13.
What is FOMOsec? WHERE_AM_I by Patrycja Wójcik
FOMO is an acronym for “Fear of Missing Out,” the modern “keeping up with the Joneses” meant to express the anxiety and regret borne from not participating in experiences in which others are involved – usually examined in the context of witnessing those experiences via social media. FOMO revolves around the basic human desire to understand what is going on, especially the impulse to stay connected with other humans’ experiences14.
FOMO can represent a sensation that others are living life better than you are, that you are outside of a social loop, that you are behind in life relative to others, or that everything is beautiful and nothing hurts15 for everyone but you. Human brains are wired to judge outcomes relative to a perceived status quo16 and to feel bad when experiencing a perceived loss17, so FOMO quite unfortunately presents a “buy one cognitive bias, get one free” deal. In a nutshell, there are two primary dimensions driving FOMO: a desire for belonging18 and anxiety about isolation19.
FOMOsec, and the corresponding hashtag #fomosec, is a term meant to describe a security strategy that is driven by a fear of missing out and its psychological underpinnings20. A #fomosec strategy says, “If you aren’t perfectly protecting literally all the things, what are you even doing?” FOMOsec cackles in your face, mocking your impotent control over the security of your organization’s systems and the flaccidity of your defense relative to the potency of your adversaries and the adulations showered upon your I.T. peers in engineering and operations.
Prioritization and pragmatism fade into the background under FOMOsec; what gains the spotlight is escaping the feeling of inadequacy – regaining a sense of autonomy and control irrespective of outcomes. Under #fomosec, you cry happy tears as your teeth clench and your knuckles whiten from the domspace ecstasy of gripping the wheel, euphorically ignoring that the wheel is not attached to anything and that your supposed steering is relegating you to stagnation.
Defenders, from security engineers to CISOs, are not deliberately sabotaging and impeding organizational operations because of a hatred for business growth or improvement. Every human longs to belong21. Defenders are not immune to this basic human need nor immune to its capacity to desecrate strategic thinking22.
The human desire for approval and acceptance from groups who share their social identity is what most foments FOMO – seeking inclusion is even more powerful than avoiding exclusion23. Both urges result in largely the same outcomes, however, as humans who feel excluded aim to strengthen their connections with social groups and more tightly enmesh their group membership with their self-identity24. Ultimately, FOMO drives humans to alter their own behavior to imitate others within their chosen social group25, regardless of specific underlying motivator.
Even mere tourists of the infosec industry are likely aware of the shockingly borgish tendencies of its constituents, culminating in boldly defined shared identities that glut themselves on in-group signaling mechanisms. Whether the identity of the misunderstood Nostradamus who must save the feeble users from themselves or those who treat a piece of software as “completely broken” if there is a vulnerability requiring local access, special configuration settings, and dolphins jumping through ring 0, the nature of infosec culture and cliques certainly suggests the presence of imitation towards the aim of cementing group identity and gaining group approval. And this, in turn, supports the credibility of #fomosec’s existence.
Envy &#43; FOMO Security Matte Painting by Mong Cherng Lee
I believe envy waters the roots of #fomosec. Envy is best described as the painful feeling of hostility, inferiority, and resentment resting upon a foundation of admiration.26 When you admire or respect someone else’s situation and compare it against your own, FOMO and envy mix together into an especially potent poison27.
The targets of infosec’s envy are attackers and software engineers – that both possess measurable and meaningful goals that result in tangibly meaningful work. For attackers, the obvious goal is “did you get in?” For engineers, the obvious goal is “did you deliver software customers will buy and use?” Offense attains swaggering victory and software engineers attain lucrative accolades. Infosec’s goals are nebulous or self-serving, its metrics either non-existent or inconsequential, its success abstract and bittersweet at best.
In response to my own work28, I have witnessed infosec professionals bristle at the notion of adopting ops metrics like mean time to recovery (MTTR) to inform their own work. Infosec seemingly wants its own special metrics, despite the obvious logic of adopting metrics that align with operational objectives. This palpably inefficient priority of feeling special over pursuing more meaningful work is not only driven by FOMO-via-envy, but FOMO-via-social-identity, too.
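For reference, the metric being bristled at is not exotic math: MTTR is just an average over incident durations. A minimal sketch – with hypothetical incident timestamps, not any real team’s schema – of the calculation ops teams already perform:
from datetime import datetime, timedelta

# Hypothetical incident records: (detected_at, recovered_at)
incidents = [
    (datetime(2020, 9, 1, 9, 0), datetime(2020, 9, 1, 10, 30)),
    (datetime(2020, 9, 8, 14, 0), datetime(2020, 9, 8, 14, 45)),
    (datetime(2020, 9, 15, 2, 0), datetime(2020, 9, 15, 6, 0)),
]

def mean_time_to_recovery(records):
    """Average elapsed time between detection and recovery across incidents."""
    total = sum(((recovered - detected) for detected, recovered in records), timedelta())
    return total / len(records)

print(mean_time_to_recovery(incidents))  # identical math whether the incident is a bug or a breach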
Envy is made even stronger by a need to belong29. Social identity can even be thought of as blossoming from FOMO, which is also made stronger by the longing for belonging30. Extending this to infosec, FOMOsec is perhaps the catalyst for the stark, shared identities found across the industry. In fact, the infosec community, in many ways, is not unlike online gaming communities – featuring guilds (like CISO cliques and SecEng sects), server-wide events (like conferences), and highly active chat channels (like Twitter and Slack groups). And, much like online gaming addiction, the human need to belong perhaps fuels infosec’s obsession with adhering to the shared identity of “outsider.”
FOMO Security Budgets Unfortunately, #fomosec steers practitioners away from pragmatic budget decisions and towards choices that make them feel accepted by their desired social group, whether fellow CISOs or security engineers. This desire for praise and prestige from others leads to consumption behavior based on an expectation of how others will perceive the consumption, rather than prioritizing product quality31. Driven by the fundamental need for social inclusion, humans purchase and use products that are symbolic of the groups with which they desire connection – and they are willing to sacrifice “personal and financial well-being for the sake of social well-being.”32
The purchasing of tools such as threat hunting, fancy threat intel reports, or protection against niche, nation-state threats can be thought of as luxury goods that serve as costly signaling mechanisms to generate interpersonal acceptance33. Adopting frameworks trendy among the in-group – such as MITRE ATT&amp;CK is currently – is a less expensive signaling mechanism, until you factor in opportunity cost. Security engineers building their own SIEM, rivalling children’s attempts at building majestic towers with popsicle sticks and glue sticks, is costly both in people hours and opportunity cost incurred by the organization. However, it represents a feat worthy of admiration from their peers despite the substantial downsides, true to #fomosec’s essence.
FOMO not only drives people to spend excessively and forget their true needs, but also leads people to consult their peers when making purchasing decisions for goods or services – the combination of both leading to impulse purchases that are far from strategic.34 While there are few studies on how security leaders make purchases, anecdata suggests that peers are one of the stronger influences in decision-making, especially if you include indirect peer influence through research analysis firms.
As a result, #fomosec creates the consummate conditions for snakeoilism to spread. FOMOsec germinates from defenders’ fears of being the outcast sheep of the I.T. family, fears of always being one step behind attackers, fears of their work being meaningless in light of the inevitability of failure, and fears of looking foolish to peers when an incident is emblazoned in public headlines. Rather than promote mindfulness of business objectives, the industry encourages their dismissal, shaming and guilting and goading defenders into throwing away budget towards products that pursue perfection – the unattainable ideal that tacitly stokes the ego’s lust for heroism.
Horseshoe Theory &amp; FOYO Security35 Despite #yolosec suggesting a blistering lack of attention to security and, on the other end of the spectrum, #fomosec suggesting a desperate and egoistic obsession with security, they both result in poignantly poor security outcomes. I argue that they represent the two ends of a Security Strategy Horseshoe, and, in their extreme forms, are nearly indistinguishable in their outcomes.
When you FOMOsec, you are prone to treat the security of all assets, and threats to those assets, equally – or worse, overcorrect for niche threats (like 0day or nation state actors) under the “gotta catch ‘em all” mentality. In the former case, even the largest teams with the highest budgets cannot perfectly secure all systems against all types of incidents. One result is spreading efforts far too thinly in order to maximize breadth of coverage or concentrating on what feels like the biggest gap and neglecting others.
Desperate for Data They were all in love with data
They were drinking from a fountain
That was pouring like an avalanche
Coming down the mountain36
Avalanche death race by Louise Meijer
Those who #fomosec believe that one must collect all of the data possible, as missing the one clue indicating an incident will be catastrophic, embarrassing, or result in some other ill-defined tragedy37. There is a shared, somewhat histrionic belief across the industry that attackers just need to discover one flaw to win, while defenders must cover all flaws to win. From the assumption that attackers possess an (unfair) information advantage, it can flow that gaining an advantage comes from rebalancing the pervading information asymmetry. That is, defenders can elevate their status relative to their adversaries by accumulating enough data, where the quantification of “enough” is persistently vague.
Through this lens of data accumulation, the end results of FOMOsec-driven behavior look an awful lot like those generated by YOLOsec. To quote Professor Netzer of Columbia University (invoking Andrew Lang), “A lot of people are using data like a drunk man uses a lamppost, for support rather than illumination.”38 Doing so is a decidedly YOLO vibe, even if it is fostered by FOMO.
When FOMOsec ignores the basic wisdom of the central limit theorem and the reality of diminishing returns on data set size in improving performance and reducing errors39, it wraps around closer to YOLOsec. Data is a tool for improving outcomes when faced with the unknown, but resolving uncertainty presents finite benefits40 – and thus data presents finite and diminishing returns.
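To put rough numbers on those diminishing returns: the standard error of an estimate shrinks roughly with the square root of the sample size, so hoarding 100x more data only buys about 10x more precision. A quick back-of-the-envelope sketch (illustrative arithmetic only, with a made-up base rate):
import math

p = 0.3  # hypothetical base rate of some event you are estimating from logs
for n in (1_000, 100_000, 10_000_000):
    std_err = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    print(f"{n:>12,} samples -> standard error ~{std_err:.5f}")
# Each 100x increase in data shrinks the error bar by only ~10x (1/sqrt(n)).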
Like a dragon slowly burying itself in treasure, FOMOsec growls, “We need to hoard all the data…” and YOLOsec roars, “…and who cares if it causes operational distractions and management headaches in the future?” The FOMOsec-distorted cost/benefit model not only overstates the benefits of data accumulation but also misses the costs of handling all that data going forward41 in a classically myopic YOLOsec fashion.
FOMOsec tells you that you desperately need to collect all the things (and to buy fancy tech that can help you do so) because otherwise you are not in the know, and YOLOsec tells you to collect all the things just because you can. These impulses are nearly indistinguishable in flavor, and equally as damaging. You should not measure things just because you can42 as it will lead to a form of self-sabotage via information overload, which leads to cognitive overload43, which leads to a variety of issues that can be summarized as significant human performance degradation44.
The social element of FOMO manifests in infoxication45, too. The giveaway that data accumulation is not actually about better business outcomes is found in infosec teams refusing to leverage data sources and tools deployed by operations teams, which would streamline budget and promote collaboration. Instead, security teams seemingly refuse to let go. Budget is viewed as a status signal, and security leaders in the vice grip of FOMOsec are disincentivized from taking actions that make them feel less influential, even if it is the right move for their organization and team.
Defenders who #fomosec seek out approval and praise from other defenders as well as their organization – and performing challenging engineering feats helps fulfill that impulse. As one study looking at Amazon’s big data practices unearthed, the accumulation of “big data” is mostly viewed as an engineering challenge rather than providing tangible modeling benefits46. Additionally, most of this “big data” is wasted, with potentially as little as 0.1% of the data treasure hoard being used to power decision-support systems, as in Google’s case47.
The mythical “data feedback loop” does not bear out in practice, but it can certainly help defenders burdened with FOMOsec feel like they are in the know, that they are performing prestigious work, and, besides, everyone else seems to be doing it as part of their security strategy, so mimicry feels right, too. But, just as your mother warned you once upon a time, jumping off a bridge just because everyone else is doing it is a decidedly YOLO course of action.
All Aboard the Vulnerability Hypetrain Train by Aleksandr Chernobai
The infosec industry is firmly strapped onboard the vulnerability hypetrain: the flurry of media attention and industry panic that explodes upon publication of previously unknown flaws in software, known as zero-day vulnerabilities (or 0day, as the kids say), that often come with their own branding and public relations strategy48. Each new, provocatively-named vulnerability adds a stop on the interminable journey. The engine of the vulnerability hypetrain is #fomosec and its exhaust is #yolosec.
Aboard this train, security leaders roleplay as special agents and muse through their tinted Morphean shades about “threat actors,” presenting idle speculation about how geopolitical events shape their firewall policies. The names of vulnerabilities hold special power, like an eldritch deity lurking in the forests surrounding a village, to whom blood sacrifices must be made each full moon lest it devour any newborns in their cribs. The truth that is lost among these rituals of the status quo is that vulnerabilities, and their monikers, should not be given more thought than the names of hurricanes that threaten power or data availability.
Wherefore this pestilent paradigm, then? Each vulnerability with its own PR campaign is a chance to trigger #fomosec, which leads to money or attention (so money or indirectly money)49. Constantly stimulating the FOMOsec response leads defenders to adopt a vulnerability-centric approach to security that merges into the unkempt path of YOLOsec. YOLOsec curls around you like an anaconda, obscuring your vision until you can only see the industry headlines screaming about the newest cyberweapon or threat group, the peripheral sliding away until the more relevant factors that contribute to security failure, like misconfigurations, are overlooked.
Overly permissive access controls will not receive a fancy name like RootRipper or DefaultDesecration but will make an attacker’s job much easier. Thus, when #fomosec panics about missing the presence of the latest heralded vulnerability in your organization’s environment, #yolosec high fives its partner-in-crime and springs into action to beleaguer your colleagues with the false positives and intractable UIs of vulnerability scanners while the attacker stumbles upon a publicly exposed k8s management dashboard and takes control of prod50.
The stated motivation for the vulnerability hypetrain is to protect users in the surrounding countryside. But, well, COVID-19 was not named LungTempest, and we do not see pharmaceutical companies publishing blog posts by self-proclaimed rockstars about how to improve the scalability or functionality of LungTempest so amateurs can DIY their own virus with a bit of copy pasting and tweaking51.
We would all rightfully be outraged if pharma researchers were publishing posts about leet bioweapons online for fun and profit, about how to bypass a competitor vendor’s vaccine (after an oh-so-generous 90 day window for them to fix the vaccine), or with technical details that dramatically overstated the potential severity of the virus in order to raise funding for a new miracle drug.
Alas, #yolosec relishes the joie de vivre of dropping 0day to thunderous applause and #fomosec drinks deeply of it – the shimmering waters of an oasis in the lonely desert under the blisteringly hot sun of irrelevancy. Defenders thirst for significance and acceptance. And researchers (and the vendors who employ them) are more than happy to provide a means of feeling “in the know” and phantasmic progression towards solving the frustratingly contumacious security problem. I leave it up to the reader to evaluate whether this is symbiosis or parasitism.
FOMO Security Fosters YOLO Security Hacker’s temporary hideout by Minjeong Kim
The desire to acquire the sexy, shiny security toys that seemingly signal membership in the Cool Kids Club is incited by FOMOsec52. Equifax deployed FireEye to protect against advanced threats and yet: 1) failed to patch a vuln in their database within their own mandated time frame of 48 hours (it was more like four months)53; 2) neglected to update the security certificate in their network traffic monitoring tool for 19 months, rendering it useless54.
Equifax simultaneously FOMOsec’d and YOLOsec’d, demonstrating the conceptual compatibility of the horseshoe’s ends. The same security team can both be like, “We need to stop nation states!” and also completely fail to patch their shit.
I argue the general case that #fomosec almost necessarily engenders #yolosec elsewhere, not unlike life outside of security. An obsession with the perceived inadequacy of your own life in light of the perceived excellence of others’ lives (FOMO) is likely to lead you to take extreme action to “prove” how exciting and fun your own life is (YOLO). For instance, college students who experience more FOMO also are more willing to place themselves in riskier social situations and make impulsive, embarrassing, or physically harmful decisions55.
The yearning begotten by FOMO to belong to a social group and receive praise from it leads people to pursue novel experiences, with the expectation that these experiences will arouse approval from others56. That is to say, FOMOsec is quite likely to lead defenders to make YOLOsec-flavored decisions, sprinting down a path of myopia filled with seemingly impressive feats – whether buying sexy tech, paying through the nose for “exclusive” threat exposés, being on an advisory board of a hot infosec startup, attending VIP conference parties, and so forth – that are entirely uncoupled from what is required to ensure business operations are pragmatically protected.
This may be a shock to some security readers, whose self-image might shatter at the thought that they could allow – let alone foster – #yolosec. But, when you allow #fomosec, when you want no security stones left unturned, when you demand security approvals on every last bit of new code, or when you lust for security gaining a sacred seat at the Big Kids’ Business Table, you are losing sight of your organization’s priorities and thus inherently routing limited resources in suboptimal directions. You fight your eng org to integrate the vulnerability scanning tool made by the company who let you meet Mr. Robot at RSAC into your organization’s code repo, and now you gain the glorious outcome of developers ignoring the tool’s findings and resenting you – thereby deciding to stay quiet about security issues they do find – while your precious security budget is six figures lighter. You did it!
And before you think that this could not possibly apply to you, consider this: you could be under the influence of FOMO as you read this and be unfeignedly unaware of how it is negatively impacting your work57.
Conclusion If security must shun both YOLOsec and FOMOsec, how should it look instead? To simultaneously alleviate a longing for belonging, envy, and myopia, infosec defenders must seek out and share the identity of “builder”58 with software engineers59. Aligning infosec metrics to software delivery metrics facilitates the alignment of infosec work to software delivery work. Acting upon this alignment – not just paying lip service – engenders the opportunity for security teams to more tangibly connect the work they perform with value and meaning produced.
If you can understand nuance in security problems, you will absolutely be valued by your organization. If you can support the customer experiences required to facilitate business success while ensuring ongoing operational sustainability, it is difficult to imagine your organization viewing you as a nuisance or cost center. FOMOsec poisons missions away from achieving business goals, while YOLOsec erodes the prospect of ongoing sustainability.
Perhaps what is most needed is to shed the label of “security” entirely to encourage a restructuring towards “resilience.” Organizations do not need professionals who self-identify as critics or “breakers”; they need professionals who self-identify as builders but who take pride in building robust systems that can quickly adapt when exposed to any sort of incident – whether an outage caused by an attacker or a performance bug.
That, I think, is the easiest way to kill #fomosec and #yolosec in one fell swoop: the recognition that outcomes are everything and that the differentiation between performance and security concerns in the context of resilience is an unnecessary, outdated construct. #yolosec cannot thrive if engineers are accountable for minimizing instability, regardless of its source. #fomosec cannot thrive if security concerns are treated equally to performance concerns, subject to the same pragmatic prioritization.
The infosec industry would hate it (how many billions of dollars less would vendors make?) and I would lose a multitude of industry insanities to explore… but how much time, money, user pain, and wasted fucks given would we save? I think we should keep an open mind.
Thank you shoutouts to Dr. Nicole Forsgren, Camille Fournier, Kyle Kingsbury, Ryan Petrich, Andrew Ruef, and James Turnbull.
Shortridge, K. (2017). Big Game Theory Hunting: The Peculiarities of Human Behavior in the InfoSec Game. Presented at Black Hat USA, Las Vegas, N.V. ↩︎
To be fair, I leveraged yolosec previously for my educational shitpost “Darth Jar Jar: a Model for Infosec Innovation.” ↩︎
I think this argument can be extended to public sector organizations, but it is not a hill on which I am willing to die. My hot take would be that the mission of defense and intelligence agencies is inherently one of national security, and thus enhanced investment into infosec does not constitute #fomosec, as it does not obstruct organizational goals and needs (they are actually quite aligned!). It likely goes without saying that #yolosec is incontrovertibly relevant to the public sector; if you are a crayon freebaser and disagree, you should consider the case of the OPM data breach back in 2015. ↩︎
Sure, yeah, this is a hot take, but I know of at least one report coming out with stats to support this, and you can peruse the 10-K filings of Fortune 500 companies and see how far down the Risk Factors section you must go to see something specifically concerning cyberattacks. ↩︎
As foreshadowed by Footnote 4, I chose five Fortune 500 companies across technology, agriculture, healthcare, logistics, and retail to compile the above sampling of risk factors enumerated in their 10-K filings – all of which come before any mention of data breaches. If you find my sampling lazy (which it definitely is), then I warmly welcome your forthcoming analysis across a more meaningful subset of the Fortune 500. ↩︎
Listen to my homeboy Dostoevsky, plz: “Above all, don’t lie to yourself. The man who lies to himself and listens to his own lie comes to such a pass that he cannot distinguish the truth within him, or around him, and so loses all respect for himself and for others.” (from The Brothers Karamazov). ↩︎
I was initially going to say that infosec is not the Juicero of enterprise IT, but, upon pondering that analogy, I realized that it actually is quite a bit like Juicero. Juicero required a fancy machine to squeeze juice packets which one could squeeze with one’s own hands, and I am of the belief that software engineers could perform an awful lot of what security teams perform today, with far greater efficiency and without salivating over blinky boxes or viewing vuln research rockstars as senpai, and, further, that the infosec market is incredibly inflated relative to its material importance, which is not dissimilar from Juicero’s own engorged valuation once upon a time. ↩︎
Makridis, C. (2020). Do Data Breaches Damage Reputation? Evidence from 43 Companies Between 2002 and 2018. ↩︎
This is a common, self-serving myth peddled by infosec vendors and security practitioners alike. The reality is that stock prices tend to slightly dip immediately in response to a data breach, but quickly recover. See: Kvochko, E., &amp; Pant, R. (2015). Why data breaches don’t hurt stock prices. Harvard Business Review, 31. and Hilary, G., Segal, B., &amp; Zhang, M. H. (2016). Cyber-risk disclosure: Who cares?. Georgetown McDonough School of Business Research Paper, (2852519). ↩︎
Sobol-Kwapinska, M., Jankowski, T., &amp; Przepiorka, A. (2016). What do we gain by adding time perspective to mindfulness? Carpe Diem and mindfulness in a temporal framework. Personality and Individual Differences, 93, 112-117. ↩︎
Ignorance of security issues can also be a source, but it is less plausible of an explanation when considering organizations beyond small businesses possessing an IT org of less than ten people. ↩︎
Although I will elaborate on the Equifax breach later in this post in the context of yolo- and fomo-sec, an example of this point is found in the testimony of David Webb, Equifax’s CIO, during the Congressional hearing regarding the breach: “It was not a cost concern. It was–really, if there is a–if there’s a constraint, it’s the domain expertise required to refactor the application, because you need experts who understand what the application does in order to put it in a new environment and do the same thing.” ↩︎
For some proposed solutions to this problem, I will self-servingly recommend reading the forthcoming O’Reilly report on Security Chaos Engineering, of which I am co-author. ↩︎
Wegmann, E., Oberst, U., Stodt, B., &amp; Brand, M. (2017). Online-specific fear of missing out and Internet-use expectancies contribute to symptoms of Internet-communication disorder. Addictive Behaviors Reports, 5, 33-42. ↩︎
Borrowing from one of my favorites, Slaughterhouse-Five by Vonnegut. ↩︎
See the concept of “Reference Dependence,” as first exhibited in the OG paper on Prospect Theory by Kahneman and Tversky. Kahneman, D., &amp; Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291. Additionally, see a case study on marathon runners and reference dependence in: Markle, A., Wu, G., White, R., &amp; Sackett, A. (2018). Goals as reference points in marathon running: A novel test of reference dependence. Journal of Risk and Uncertainty, 56(1), 19-50. ↩︎
Tversky, A., &amp; Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. The quarterly journal of economics, 106(4), 1039-1061. Additionally, see a case study on house sellers and loss aversion in: Genesove, D., &amp; Mayer, C. (2001). Loss aversion and seller behavior: Evidence from the housing market. The quarterly journal of economics, 116(4), 1233-1260. ↩︎
Abel, J. P., Buff, C. L., &amp; Burr, S. A. (2016). Social media and the fear of missing out: Scale development and assessment. Journal of Business &amp; Economics Research (JBER), 14(1), 33-44. ↩︎
More specifically, these two dimensions manifest as desiring connectedness and approval from others vs. wanting to avoid feeling alienated and ignored. ↩︎
As it is my term, I find it acceptable to broaden the definition beyond strictly FOMO to also include the desire for belonging, anxiety about isolation, underlying envy, and so forth. It is, perhaps, a YOLO move to do so. ↩︎
Baumeister, R. F., &amp; Leary, M. R. (1995). The need to belong: desire for interpersonal attachments as a fundamental human motivation. Psychological bulletin, 117(3), 497. ↩︎
This is true despite protestations by some members of the infosec community that they are more enlightened than the general human population because they do not make “dumb” security mistakes, and despite a non-trivial portion of infosec conference attendees residing in the bottom quintile of hygiene standards. ↩︎
Lai, C., Altavilla, D., Ronconi, A., &amp; Aceto, P. (2016). Fear of missing out (FOMO) is associated with activation of the right middle temporal gyrus during inclusion social cue. Computers in Human Behavior, 61, 516-521. ↩︎
Knowles, M. L., &amp; Gardner, W. L. (2008). Benefits of membership: The activation and amplification of group identities in response to social rejection. Personality and Social Psychology Bulletin, 34(9), 1200-1213. ↩︎
Lakin, J. L., Chartrand, T. L., &amp; Arkin, R. M. (2008). I am too just like you: Nonconscious mimicry as an automatic behavioral response to social exclusion. Psychological science, 19(8), 816-822. ↩︎
Smith, R. H., &amp; Kim, S. H. (2007). Comprehending envy. Psychological bulletin, 133(1), 46. ↩︎
Menon, T., &amp; Thompson, L. (2010). Envy at work. Harvard business review, 88(4), 74-79. ↩︎
Shortridge, K., &amp; Forsgren, N. (2019, August). Controlled Chaos: The Inevitable Marriage of DevOps &amp; Security. Presented at Black Hat USA, Las Vegas, N.V. ↩︎
Yin, L., Wang, P., Nie, J., Guo, J., Feng, J., &amp; Lei, L. (2019). Social networking sites addiction and FoMO: The mediating role of envy and the moderating role of need to belong. Current Psychology, 1-9. ↩︎
Duman, H., &amp; Ozkara, B. Y. (2019). The impact of social identity on online game addiction: the mediating role of the fear of missing out (FoMO) and the moderating role of the need to belong. Current Psychology, 1-10. ↩︎
Kang, I., Cui, H., &amp; Son, J. (2019). Conformity consumption behavior and FoMO. Sustainability, 11(17), 4734. ↩︎
Mead, N. L., Baumeister, R. F., Stillman, T. F., Rawn, C. D., &amp; Vohs, K. D. (2011). Social exclusion causes people to spend and consume strategically in the service of affiliation. Journal of consumer research, 37(5), 902-919. ↩︎
Practitioners who are more secure in their social standing and group ties may be more immune to this kind of consumption. If only people spent as much on therapy as they do on conference passes. ↩︎
Aydin, H. (2018). A Systematic Review on the Use of FoMO as a Social Marketing Trend in Marketing Area. İzmir Katip Çelebi Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 1(1), 1-9. ↩︎
I hope someone figures out a way to turn FOYO Security into “FROYO Security.” It would really enhance infosec culture (ba dum tss). ↩︎
A spoof on: Butthole Surfers (1996). Pepper. On Electriclarryland. Capitol Records. ↩︎
While I am loath to paint with so broad a brush as to call these fears histrionic, it has always struck me as strange how often security leaders seem worried that the one signal they miss will end their world, and yet are seemingly content remaining in the dark as far as establishing outcome-aligned success measurements, understanding why the humans in their organization are “failing” to adhere to security policies, or learning the persuasive communication skills necessary to better foster consensus in their organization – all of which are far more likely to guarantee their successful tenure. ↩︎
Netzer, O. (2017, May 26). More Data Isn’t Always the Answer [Blog post]. ↩︎
Lerner, A. V. (2014). The role of ‘big data’ in online platform competition. ↩︎
Veldkamp, L., &amp; Chung, C. (2019, October). Data and the aggregate economy. In Annual Meeting Plenary (No. 2019-1). Society for Economic Dynamics. ↩︎
Davenport, T. H., &amp; Beck, J. C. (2001). The attention economy. Ubiquity, 2001(May), 1-es. ↩︎
Geri, N., &amp; Geri, Y. (2011). The Information Age Measurement Paradox: Collecting Too Much Data. Informing Sci. Int. J. an Emerg. Transdiscipl., 14, 47-59. ↩︎
Woods, D. D., Patterson, E. S., &amp; Roth, E. M. (2002). Can we ever escape from data overload? A cognitive systems diagnosis. Cognition, Technology &amp; Work, 4(1), 22-36. ↩︎
Kirsh, D. (2000). A few thoughts on cognitive overload. ↩︎
A portmanteau of information and intoxication. ↩︎
Bajari, P., Chernozhukov, V., Hortaçsu, A., &amp; Suzuki, J. (2018). The impact of big data on firm performance: An empirical investigation (No. w24334). National Bureau of Economic Research. ↩︎
Varian, H. R. (2014). Big data: New tricks for econometrics. Journal of Economic Perspectives, 28(2), 3-28. ↩︎
Although a few years out of date, this presents a nice recap of some of the major vulnerability branding campaigns: Power, J. Celebrity vulnerabilities: A short history of bug branding [Blog post]. ↩︎
The gross bamboozling of the general public by the defenders aboard the vulnerability hypetrain (which involved them bamboozling themselves) was called out two decades ago by the Anti-sec Movement. Unfortunately for end users, the movement failed. At this point, I do not think there is any hope of returning the hypetrain to the station, as there are too many people who profit from its perpetuation. ↩︎
Frazelle, J. (2019, July 23). The Business Executive’s Guide to Kubernetes [Blog Post]. ↩︎
Again, as noted by the Anti-sec Movement many years ago, it seems somewhat ridiculous that we hand over exploits to skidiots* who barely know what they are doing. As someone who I have completely forgotten suggested, the first major “cybergeddon” attack against critical infrastructure will likely be at the hands of a script kiddy who stumbled upon the system via Shodan and does not even realize its importance before trying some cool shit out with Metasploit. (*credits to @r00tkillah for the term “skidiots”). ↩︎
To be clear, software engineers are not immune from this phenomenon, which could be called FOMOps or #fomodev, although it is out of scope of this blog post. Consider engineers who see a new shiny library or other trendy software thingamajigger on HackerNews and decide that their current systems are now so tragically unfashionable that a makeover is required ASAP, despite “legacy” options offering a superior fit with the organization’s operational needs. ↩︎
Federal Trade Commission. (2019, July 22). Equifax to Pay $575 Million as Part of Settlement with FTC, CFPB, and States Related to 2017 Data Breach [Press Release]. ↩︎
U.S. House of Representatives Committee on Oversight and Government Reform. (2018, December). The Equifax Data Breach: Majority Staff Report, 115th Congress [Report]. ↩︎
Riordan, B. C., Flett, J. A., Hunter, J. A., Scarf, D., &amp; Conner, T. S. (2015). Fear of missing out (FoMO): The relationship between FoMO, alcohol use, and alcohol-related consequences in college students. Annals of Neuroscience and Psychology, 2(7), 1-7. ↩︎
Przybylski, A. K., Murayama, K., DeHaan, C. R., &amp; Gladwell, V. (2013). Motivational, emotional, and behavioral correlates of fear of missing out. Computers in Human Behavior, 29(4), 1841-1848. ↩︎
Budnick, C. J., Rogers, A. P., &amp; Barber, L. K. (2020). The fear of missing out at work: examining costs and benefits to employee health and motivation. Computers in Human Behavior, 104, 106161. ↩︎
To clarify, the identity of “builder” is explicitly not about taking pride in building your own SIEM, or log ingestion pipeline, or whatever other wheel security people maintain a predilection for reinventing. ↩︎
The next most obvious alternative is to co-opt the identity of “breaker” with attackers and vulnerability researchers. This is likely seen more frequently than the co-opting of “builder,” perhaps as evidenced by attempts at building red teams at organizations who have yet to master security “basics” as well as the legions of security engineers who lament the defensive parts of their role and yearn for more offense research time. Breaking can be valuable with the appropriate feedback loops in place, but an honest appraisal of infosec professionals’ desire to break things would assuredly surface interest- and ego-based motivations rather than a motivation to improve software quality internally. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>Possibly my finest contribution to the infosec industry is introducing the concept of #yolosec, first discussed when I introduced decision trees as a threat modelling device in my Black Hat talk back in 2017<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Not one to let a solid shitpost go to waste<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, I want to expand and expound on that concept, and introduce its ideological opposite: #fomosec. Most security efforts are hilariously inefficient, but only one end of the spectrum (#yolosec) is typically called out. That changes today.</p>
<p>This post will explore why both YOLO security (YOLOsec) and FOMO security (FOMOsec) are pernicious disservices to infosec defense and how you can spot them so that you may yeet them from your organization’s security strategy.</p>
<p>The tl;dr is that #yolosec and #fomosec are disconnected from the goals and needs of the business, forsaking pragmatism and prudence in favor of fanatical flavors of recklessness. YOLOsec reflects a security strategy driven by a “you only live once” mentality – one that emboldens people to ignore future concerns around security to achieve today’s gratification. FOMOsec reflects a security strategy driven by a fear of missing out – one that frightens people into misallocating resources towards what makes them <em>feel</em> better about their security efforts.</p>
<p>If you imagine your organization as a sea-faring vessel, infosec’s goal is to ensure the boat can survive krakens or cannon-wielding pirates and successfully complete its journey. If you ignore the existence of sea terrors (#yolosec), you may not make it to your destination unless Poseidon grants you merciful passage. If you prioritize defense above your vessel’s mission (#fomosec), you will find yourself aboard a battleship that is entirely inadequate for transporting revenue-generating cargo.</p>
<figure>
    <img src="/blog/img/foyo/russell-marks-kraken-by-russellmarks-d98vq57.jpg"
         alt="Kraken by Russell Marks"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/XNdkY">Kraken by Russell Marks</a></p>
        </figcaption>
</figure>
<h2 id="first-does-security-matter">First, does security matter?</h2>
<p>Before we dig into defining #yolosec and #fomosec, I want to establish the appropriate context for these concepts. The potential peril inherent in these two “strategic” approaches rests on understanding security’s relevance to private-sector organizations<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. The not-so-dirty and not-so-secret dirty secret is that information security does not matter nearly as much as the infosec industry proselytizes. In the grand scheme of business risks, it is solidly in the bottom half, if not the bottom quartile<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<p>Your organization is far more concerned with attracting and retaining customers, successfully competing in an evolving market, macroeconomic factors relevant to its industry (especially right now, amid the COVID-19 slowdown), operational interruptions and downtime, commodity price fluctuations, failure to maintain brand image and public perception, inadequate financial forecasting, changes in product mix impacting profitability, maintaining relationships with supply chain partners, impacts of seasonality, its litigation and regulatory risk profile, changes in international trade relations, climate change impacts, exchange and interest rate fluctuations, inability to access external financing, ability to anticipate consumer preferences, and, well, you hopefully get the idea by now<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup>.</p>
<p>As far as tech stuff goes, organizations are primarily concerned about the interruption or inadequacy of IT systems, since those systems power ongoing business operations and, to varying degrees, fuel their revenue growth. To the ops readers among you, congrats, you are critical to modern business operations across most industries! To my infosec readers, I am certainly not saying you are unimportant, but, rather, that it is vital to be self-aware of one’s influence on the reality around you<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>.</p>
<p>With that said, infosec is not completely useless<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>. Attackers can absolutely cause operational interruption and downtime, most obviously through things like DDoS attacks, ransomware, or overloading cloud compute to eke out computercoins. Beyond those examples, security incidents in general necessitate recovery and response efforts which require money, time, and, frequently, system downtime (so money, money, and frequently money). There is limited evidence that security incidents lead to damaged public perception<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> or public market valuation<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>.</p>
<p>Therefore, one can consider infosec important to organizations insofar as it either: 1) minimizes negative operational impact engendered by attacker actions, or 2) enhances qualities that improve business operations, such as the speed, stability, or scale of IT systems. To be clear, there is scant evidence of infosec achieving this second axiom in any meaningful fashion. Security teams encouraging the adoption of standardized APIs or base images is perhaps the only strongly justifiable example. Nevertheless, it remains an equally important, albeit contemporarily theoretical, justification for infosec’s relevancy to businesses.</p>
<p>Now we can explore #yolosec and #fomosec and why their manifestations are so magnificently monstrous.</p>
<h2 id="what-is-yolosec">What is YOLOsec?</h2>
<figure>
    <img src="/blog/img/foyo/mile-micic-parkour-zalazak.jpg"
         alt="Parkour by Mile Mićić"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/6eRd0">Parkour by Mile Mićić</a></p>
        </figcaption>
</figure>
<p><a href="https://en.wikipedia.org/wiki/YOLO_(aphorism)">YOLO</a> is an acronym for “You Only Live Once,” the modern <em>carpe diem</em> and mostly ironic millennial catchphrase meant to express the unbridled living of life to its fullest in the present, believing the current moment to be vital and unique<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup>, with scant regard to the future. YOLO-driven actions tend to manifest as risk-seeking activities, such as skydiving or, in the case of Napoleon Bonaparte’s <a href="https://en.wikipedia.org/wiki/Hundred_Days">Hundred Days</a>, sneaking into France during exile, ripping your coat open and daring your former troops to shoot you, marching on Paris with said troops, reclaiming your title as emperor, engaging in a war against Europe’s major powers, losing at Waterloo, and returning to exile.</p>
<p>YOLOsec, and my irony-flavored hashtag #yolosec, is a term meant to describe a security strategy that embodies the “you only live once” mentality. A yolosec strategy says, “Setting our S3 bucket full of customer data to public will let us deploy our service faster, what could go wrong?” YOLOsec whispers sweet deceits in your ear, telling you that basic security countermeasures like privilege separation and access control are <em>tomorrow</em> problems – knowing full well that tomorrow will distend into months or years. And this temptation can metastasize across your systems and organization.</p>
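<p>To make the temptation concrete, that particular flavor of #yolosec takes minutes to detect. A minimal audit sketch – assuming boto3 and read-only AWS credentials are already in place, and purely illustrative rather than a substitute for real access governance – might look like:</p>
<pre><code class="language-python">import boto3
from botocore.exceptions import ClientError

# Flag buckets that lack a fully-enabled public access block.
# Read-only sketch: assumes credentials are already configured for boto3.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # nobody ever set one – hello, "tomorrow problem"
        else:
            raise
    if not fully_blocked:
        print(f"{name}: public access is not fully blocked")
</code></pre>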
<p>#yolosec is rarely instigated by a refutation of security’s importance; its wellspring is often found in an arguably myopic focus on specific business goals that are more easily or quickly achieved by ignoring or dismissing security considerations<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup>. True to <a href="https://en.wikipedia.org/wiki/Hanlon%27s_razor">Hanlon’s razor</a>, #yolosec is almost assuredly due to incompetence rather than malice.</p>
<p>For instance, developers are not specifically aiming to write code so riddled with bugs that swamps are jealous, nor are they storing API keys in plaintext as an expression of their love for hackers – although both constitute #yolosec. Or, in organizations with high turnover, fresh engineering teams may barely understand how a legacy system works, rendering the exercise of upgrading or migrating it from its current insecure conditions clearly intimidating<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>.</p>
<p>It is thus understandable, albeit undesirable, that the default state of engineering teams is to overlook or neglect infosec concerns when performing their work. This is rarely due to succumbing to temptation, but simply to the dearth of pragmatic security wisdom among engineering teams<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup>.</p>
<h2 id="what-is-fomosec">What is FOMOsec?</h2>
<figure>
    <img src="/blog/img/foyo/patrycja-wojcik-where-am-i-final.jpg"
         alt="WHERE_AM_I by Patrycja Wójcik"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/xwK1m">WHERE_AM_I by Patrycja Wójcik</a></p>
        </figcaption>
</figure>
<p><a href="https://en.wikipedia.org/wiki/Fear_of_missing_out">FOMO</a> is an acronym for “Fear of Missing Out,” the modern <em>“keeping up with the Joneses”</em> meant to express the anxiety and regret borne from not participating in experiences in which others are involved – usually examined in the context of witnessing those experiences via social media. FOMO revolves around the basic human desire to understand what is going on, especially the impulse to stay connected with other humans’ experiences<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>.</p>
<p>FOMO can represent a sensation that others are living life better than you are, that you are outside of a social loop, that you are behind in life relative to others, or that everything is beautiful and nothing hurts<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup> for everyone but you. Human brains are wired to judge outcomes relative to a perceived status quo<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup> and to feel bad when experiencing a perceived loss<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>, so FOMO quite unfortunately presents a “buy one cognitive bias, get one free” deal. In a nutshell, there are two primary dimensions driving FOMO: a desire for belonging<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup> and anxiety about isolation<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup>.</p>
<p>FOMOsec, and the corresponding hashtag #fomosec, is a term meant to describe a security strategy that is driven by a fear of missing out and its psychological underpinnings<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>. A #fomosec strategy says, “If you aren’t perfectly protecting literally all the things, what are you even doing?” FOMOsec cackles in your face, mocking your impotent control over the security of your organization’s systems and the flaccidity of your defense relative to the potency of your adversaries and the adulations showered upon your I.T. peers in engineering and operations.</p>
<p>Prioritization and pragmatism fade into the background under FOMOsec; what gains the spotlight is escaping the <em>feeling</em> of inadequacy – regaining a sense of autonomy and control irrespective of outcomes. Under #fomosec, you cry happy tears as your teeth clench and your knuckles whiten from the domspace ecstasy of gripping the wheel, euphorically ignoring that the wheel is not attached to anything and that your supposed steering is relegating you to stagnation.</p>
<p>Defenders, from security engineers to CISOs, are not deliberately sabotaging and impeding organizational operations because of a hatred for business growth or improvement. Every human longs to belong<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup>. Defenders are not immune to this basic human need nor immune to its capacity to desecrate strategic thinking<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup>.</p>
<p>The human desire for approval and acceptance from groups who share their social identity is what most foments FOMO – seeking inclusion is even more powerful than avoiding exclusion<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup>. Both urges result in largely the same outcomes, however, as humans who feel excluded aim to strengthen their connections with social groups and more tightly enmesh their group membership with their self-identity<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup>. Ultimately, FOMO drives humans to alter their own behavior to imitate others within their chosen social group<sup id="fnref:25"><a href="#fn:25" class="footnote-ref" role="doc-noteref">25</a></sup>, regardless of specific underlying motivator.</p>
<p>Even mere tourists of the infosec industry are likely aware of the shockingly borgish tendencies of its constituents, culminating in boldly defined shared identities that glut themselves on in-group signaling mechanisms. Whether the identity of the misunderstood Nostradamus who must save the feeble users from themselves or those who treat a piece of software as “completely broken” if there is a vulnerability requiring local access, special configuration settings, and dolphins jumping through ring 0, the nature of infosec culture and cliques certainly suggests the presence of imitation towards the aim of cementing group identity and gaining group approval. And this, in turn, supports the credibility of #fomosec’s existence.</p>
<h3 id="envy--fomo-security">Envy + FOMO Security</h3>
<figure>
    <img src="/blog/img/foyo/mong-cherng-lee-l5-a5-final-web.jpg"
         alt="Matte Painting by Mong Cherng Lee"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/b2gDn">Matte Painting by Mong Cherng Lee</a></p>
        </figcaption>
</figure>
<p>I believe envy waters the roots of #fomosec. Envy is best described as the painful feeling of hostility, inferiority, and resentment resting upon a foundation of admiration.<sup id="fnref:26"><a href="#fn:26" class="footnote-ref" role="doc-noteref">26</a></sup> When you admire or respect someone else’s situation and compare it against your own, FOMO and envy mix together into an especially potent poison<sup id="fnref:27"><a href="#fn:27" class="footnote-ref" role="doc-noteref">27</a></sup>.</p>
<p>The targets of infosec’s envy are attackers and software engineers – both of whom possess measurable and meaningful goals that result in tangibly meaningful work. For attackers, the obvious goal is “did you get in?” For engineers, the obvious goal is “did you deliver software customers will buy and use?” Offense attains swaggering victory and software engineers attain lucrative accolades. Infosec’s goals are nebulous or self-serving, its metrics either non-existent or inconsequential, its success abstract and bittersweet at best.</p>
<p>In response to my own work<sup id="fnref:28"><a href="#fn:28" class="footnote-ref" role="doc-noteref">28</a></sup>, I have witnessed infosec professionals bristle at the notion of adopting ops metrics like mean time to recovery (MTTR) to inform their own work. Infosec seemingly wants its own special metrics, despite the obvious logic of adopting metrics that align with operational objectives. This palpably inefficient prioritization of feeling special over pursuing more meaningful work is not only driven by FOMO-via-envy, but FOMO-via-social-identity, too.</p>
<p>Envy is made even stronger by a need to belong<sup id="fnref:29"><a href="#fn:29" class="footnote-ref" role="doc-noteref">29</a></sup>. Social identity can even be thought of as blossoming from FOMO, which is also made stronger by the longing for belonging<sup id="fnref:30"><a href="#fn:30" class="footnote-ref" role="doc-noteref">30</a></sup>. Extending this to infosec, FOMOsec is perhaps the catalyst for the stark, shared identities found across the industry. In fact, the infosec community, in many ways, is not unlike online gaming communities – featuring guilds (like CISO cliques and SecEng sects), server-wide events (like conferences), and highly active chat channels (like Twitter and Slack groups). And, much like online gaming addiction, the human need to belong perhaps fuels infosec’s obsession with adhering to the shared identity of “outsider.”</p>
<h3 id="fomo-security-budgets">FOMO Security Budgets</h3>
<p>Unfortunately, #fomosec steers practitioners away from pragmatic budget decisions and towards choices that make them feel accepted by their desired social group, whether fellow CISOs or security engineers. This desire for praise and prestige from others leads to consumption behavior based on an expectation of how others will perceive the consumption, rather than prioritizing product quality<sup id="fnref:31"><a href="#fn:31" class="footnote-ref" role="doc-noteref">31</a></sup>. Driven by the fundamental need for social inclusion, humans purchase and use products that are symbolic of the groups with which they desire connection – and they are willing to sacrifice “personal and financial well-being for the sake of social well-being.”<sup id="fnref:32"><a href="#fn:32" class="footnote-ref" role="doc-noteref">32</a></sup></p>
<p>The purchasing of tools such as threat hunting, fancy threat intel reports, or protection against niche, nation-state threats can be thought of as luxury goods that serve as costly signaling mechanisms to generate interpersonal acceptance<sup id="fnref:33"><a href="#fn:33" class="footnote-ref" role="doc-noteref">33</a></sup>. Adopting frameworks trendy among the in-group – such as MITRE ATT&amp;CK is currently – is a less expensive signaling mechanism, until you factor in opportunity cost. Security engineers building their own SIEM, rivalling children’s attempts at building majestic towers with popsicle sticks and glue sticks, is costly both in person-hours and in the opportunity cost incurred by the organization. However, it represents a feat worthy of admiration from their peers despite the substantial downsides, true to #fomosec’s essence.</p>
<p>FOMO not only drives people to spend excessively and forget their true needs, but also leads people to consult their peers when making purchasing decisions for goods or services – the combination of both leading to impulse purchases that are far from strategic.<sup id="fnref:34"><a href="#fn:34" class="footnote-ref" role="doc-noteref">34</a></sup> While there are few studies on how security leaders make purchases, anecdata suggests that peers are one of the stronger influences in decision-making, especially if you include indirect peer influence through research analysis firms.</p>
<p>As a result, #fomosec creates the consummate conditions for snakeoilism to spread. FOMOsec germinates from defenders’ fears of being the outcast sheep of the I.T. family, fears of always being one step behind attackers, fears of their work being meaningless in light of the inevitability of failure, and fears of looking foolish to peers when an incident is emblazoned in public headlines. Rather than promote mindfulness of business objectives, the industry encourages their dismissal, shaming and guilting and goading defenders into throwing away budget towards products that pursue perfection – the unattainable ideal that tacitly stokes the ego’s lust for heroism.</p>
<h2 id="horseshoe-theory--foyo-security35">Horseshoe Theory &amp; FOYO Security<sup id="fnref:35"><a href="#fn:35" class="footnote-ref" role="doc-noteref">35</a></sup></h2>
<p><img src="/blog/img/foyo/horseshoe-theory-security-strategy.png" alt="The Horseshoe Theory of Security Strategy by Kelly Shortridge"></p>
<p>Despite #yolosec suggesting a blistering lack of attention to security and, on the other end of the spectrum, #fomosec suggesting a desperate and egoistic obsession with security, they both result in poignantly poor security outcomes. I argue that they represent the two ends of a Security Strategy Horseshoe, and, in their extreme forms, are nearly indistinguishable in their outcomes.</p>
<p>When you FOMOsec, you are prone to treat the security of all assets, and threats to those assets, equally – or worse, overcorrect for niche threats (like 0day or nation state actors) under the “gotta catch ‘em all” mentality. In the former case, even the largest teams with the highest budgets cannot perfectly secure all systems against all types of incidents. One result is spreading efforts far too thinly in order to maximize breadth of coverage or concentrating on what <em>feels</em> like the biggest gap and neglecting others.</p>
<h3 id="desperate-for-data">Desperate for Data</h3>
<blockquote>
<p><em>They were all in love with data</em><br>
<em>They were drinking from a fountain</em><br>
<em>That was pouring like an avalanche</em><br>
<em>Coming down the mountain<sup id="fnref:36"><a href="#fn:36" class="footnote-ref" role="doc-noteref">36</a></sup></em></p>
</blockquote>
<figure>
    <img src="/blog/img/foyo/louise-meijer-deathrace5.jpg"
         alt="Avalanche death race by Louise Meijer"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/A9QwYN">Avalanche death race by Louise Meijer</a></p>
        </figcaption>
</figure>
<p>Those who #fomosec believe that one must collect all of the data possible, as missing the one clue indicating an incident will be catastrophic, embarrassing, or result in some other ill-defined tragedy<sup id="fnref:37"><a href="#fn:37" class="footnote-ref" role="doc-noteref">37</a></sup>. There is a shared, somewhat histrionic belief across the industry that attackers just need to discover one flaw to win, while defenders must cover all flaws to win. From the assumption that attackers possess an (unfair) information advantage, it can follow that gaining an advantage comes from rebalancing the pervading information asymmetry. That is, defenders can elevate their status relative to their adversaries by accumulating enough data, where the quantification of “enough” is persistently vague.</p>
<p>Through this lens of data accumulation, the end results of FOMOsec-driven behavior look an awful lot like those generated by YOLOsec. To quote Professor Netzer of Columbia University (invoking Andrew Lang), “A lot of people are using data like a drunk man uses a lamppost, for support rather than illumination.”<sup id="fnref:38"><a href="#fn:38" class="footnote-ref" role="doc-noteref">38</a></sup> Doing so is a decidedly YOLO vibe, even if it is fostered by FOMO.</p>
<p>When FOMOsec ignores the basic wisdom of the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem">central limit theorem</a> and the reality of <a href="https://en.wikipedia.org/wiki/Diminishing_returns">diminishing returns</a> on data set size in improving performance and reducing errors<sup id="fnref:39"><a href="#fn:39" class="footnote-ref" role="doc-noteref">39</a></sup>, it wraps around closer to YOLOsec. Data is a tool for improving outcomes when faced with the unknown, but resolving uncertainty presents finite benefits<sup id="fnref:40"><a href="#fn:40" class="footnote-ref" role="doc-noteref">40</a></sup> – and thus data presents finite and diminishing returns.</p>
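<p>To put rough numbers on the diminishing returns argument: the standard error of an estimate shrinks on the order of 1/√n, so each additional order of magnitude of data buys noticeably less precision than the last. A toy illustration (assuming numpy; the data here is synthetic):</p>
<pre><code class="language-python">import numpy as np

rng = np.random.default_rng(7)
# Pretend this is some telemetry signal whose average you care about, e.g. daily alert volume.
population = rng.exponential(scale=10.0, size=1_000_000)

for n in (10, 100, 1_000, 10_000, 100_000):
    # Standard error of the sample mean: sigma / sqrt(n) – note how the payoff flattens.
    se = population.std() / np.sqrt(n)
    print(f"n = {n:>7,} -> standard error of the mean ~ {se:.3f}")
</code></pre>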
<p>Like a dragon slowly burying itself in treasure, FOMOsec growls, “We need to hoard all the data…” and YOLOsec roars, “…and who cares if it causes operational distractions and management headaches in the future?” The FOMOsec-distorted cost/benefit model not only overstates the benefits of data accumulation but also misses the costs of handling all that data going forward<sup id="fnref:41"><a href="#fn:41" class="footnote-ref" role="doc-noteref">41</a></sup> in a classically myopic YOLOsec fashion.</p>
<p>FOMOsec tells you that you desperately need to collect all the things (and to buy fancy tech that can help you do so) because otherwise you are not in the know, and YOLOsec tells you to collect all the things just because you can. These impulses are nearly indistinguishable in flavor, and equally as damaging. You should not measure things just because you can<sup id="fnref:42"><a href="#fn:42" class="footnote-ref" role="doc-noteref">42</a></sup> as it will lead to a form of self-sabotage via information overload, which leads to cognitive overload<sup id="fnref:43"><a href="#fn:43" class="footnote-ref" role="doc-noteref">43</a></sup>, which leads to a variety of issues that can be summarized as significant human performance degradation<sup id="fnref:44"><a href="#fn:44" class="footnote-ref" role="doc-noteref">44</a></sup>.</p>
<p>The social element of FOMO manifests in infoxication<sup id="fnref:45"><a href="#fn:45" class="footnote-ref" role="doc-noteref">45</a></sup>, too. The giveaway that data accumulation is not actually about better business outcomes is found in infosec teams refusing to leverage data sources and tools deployed by operations teams, which would streamline budget and promote collaboration. Instead, security teams seemingly refuse to let go. Budget is viewed as a status signal, and security leaders in the vice grip of FOMOsec are disincentivized from taking actions that make them feel <em>less</em> influential, even if it is the right move for their organization and team.</p>
<p>Defenders who #fomosec seek out approval and praise from other defenders as well as their organization – and performing challenging engineering feats helps fulfill that impulse. As one study looking at Amazon’s big data practices unearthed, the accumulation of “big data” is mostly viewed as an engineering challenge rather than providing tangible modeling benefits<sup id="fnref:46"><a href="#fn:46" class="footnote-ref" role="doc-noteref">46</a></sup>. Additionally, most of this “big data” is wasted, with potentially as little as 0.1% of the data treasure hoard being used to power decision-support systems, as in Google’s case<sup id="fnref:47"><a href="#fn:47" class="footnote-ref" role="doc-noteref">47</a></sup>.</p>
<p>The mythical “data feedback loop” does not bear out in practice, but it can certainly help defenders burdened with FOMOsec <em>feel</em> like they are in the know, that they are performing prestigious work, and, besides, everyone else seems to be doing it as part of their security strategy, so mimicry feels right, too. But, just as your mother warned you once upon a time, jumping off a bridge just because everyone else is doing it is a decidedly YOLO course of action.</p>
<h3 id="all-aboard-the-vulnerability-hypetrain">All Aboard the Vulnerability Hypetrain</h3>
<figure>
    <img src="/blog/img/foyo/aleksandr-chernobai-kamaha-draft-32-2.jpg"
         alt="Train by Aleksandr Chernobai"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/yb1NXJ">Train by Aleksandr Chernobai</a></p>
        </figcaption>
</figure>
<p>The infosec industry is firmly strapped onboard the vulnerability hypetrain: the flurry of media attention and industry panic that explodes upon publication of previously unknown flaws in software, known as zero-day vulnerabilities (or 0day, as the kids say), that often come with their own branding and public relations strategy<sup id="fnref:48"><a href="#fn:48" class="footnote-ref" role="doc-noteref">48</a></sup>. Each new, provocatively-named vulnerability adds a stop on the interminable journey. The engine of the vulnerability hypetrain is #fomosec and its exhaust is #yolosec.</p>
<p>Aboard this train, security leaders roleplay as special agents and muse through their tinted Morphean shades about “threat actors,” presenting idle speculation about how geopolitical events shape their firewall policies. The names of vulnerabilities hold special power, like an eldritch deity lurking in the forests surrounding a village, to whom blood sacrifices must be made each full moon lest it devour any newborns in their cribs. The truth that is lost among these rituals of the status quo is that vulnerabilities, and their monikers, should not be given more thought than the names of hurricanes that threaten power or data availability.</p>
<p>Wherefore this pestilent paradigm, then? Each vulnerability with its own PR campaign is a chance to trigger #fomosec, which leads to money or attention (so money or indirectly money)<sup id="fnref:49"><a href="#fn:49" class="footnote-ref" role="doc-noteref">49</a></sup>. Constantly stimulating the FOMOsec response leads defenders to adopt a vulnerability-centric approach to security that merges into the unkempt path of YOLOsec. YOLOsec curls around you like an anaconda, obscuring your vision until you can only see the industry headlines screaming about the newest cyberweapon or threat group, the periphery sliding away until the more relevant factors that contribute to security failure, like misconfigurations, are overlooked.</p>
<p>Overly permissive access controls will not receive a fancy name like RootRipper or DefaultDesecration but will make an attacker’s job much easier. Thus, when #fomosec panics about missing the presence of the latest heralded vulnerability in your organization’s environment, #yolosec high fives its partner-in-crime and springs into action to beleaguer your colleagues with the false positives and intractable UIs of vulnerability scanners while the attacker stumbles upon a publicly exposed k8s management dashboard and takes control of prod<sup id="fnref:50"><a href="#fn:50" class="footnote-ref" role="doc-noteref">50</a></sup>.</p>
<p>The stated motivation for the vulnerability hypetrain is to protect users in the surrounding countryside. But, well, COVID-19 was not named LungTempest, and we do not see pharmaceutical companies publishing blog posts by self-proclaimed rockstars about how to improve the scalability or functionality of LungTempest so amateurs can DIY their own virus with a bit of copy pasting and tweaking<sup id="fnref:51"><a href="#fn:51" class="footnote-ref" role="doc-noteref">51</a></sup>.</p>
<p>We would all rightfully be outraged if pharma researchers were publishing posts about leet bioweapons online for fun and profit, about how to bypass a competitor vendor’s vaccine (after an oh-so-generous 90 day window for them to fix the vaccine), or with technical details that dramatically overstated the potential severity of the virus in order to raise funding for a new miracle drug.</p>
<p>Alas, #yolosec relishes the <em>joie de vivre</em> of dropping 0day to thunderous applause and #fomosec drinks deeply of it – the shimmering waters of an oasis in the lonely desert under the blisteringly hot sun of irrelevancy. Defenders thirst for significance and acceptance. And researchers (and the vendors who employ them) are more than happy to provide a means of feeling “in the know” and phantasmic progression towards solving the frustratingly contumacious security problem. I leave it up to the reader to evaluate whether this is symbiosis or parasitism.</p>
<h3 id="fomo-security-fosters-yolo-security">FOMO Security Fosters YOLO Security</h3>
<figure>
    <img src="/blog/img/foyo/minjeong-kim-171223-3.jpg"
         alt="Hacker&amp;rsquo;s temporary hideout by Minjeong Kim"/> <figcaption>
            <p><a href="https://www.artstation.com/artwork/k0Vy2">Hacker&rsquo;s temporary hideout by Minjeong Kim</a></p>
        </figcaption>
</figure>
<p>The desire to acquire the sexy, shiny security toys that seemingly signal membership in the Cool Kids Club is incited by FOMOsec<sup id="fnref:52"><a href="#fn:52" class="footnote-ref" role="doc-noteref">52</a></sup>. <a href="https://twitter.com/swagitda_/status/906113918283730944?s=20">Equifax deployed FireEye to protect against advanced threats</a> and yet: 1) failed to patch a vuln in their database within their own mandated time frame of 48 hours (it was more like four months)<sup id="fnref:53"><a href="#fn:53" class="footnote-ref" role="doc-noteref">53</a></sup>; 2) neglected to update the security certificate in their network traffic monitoring tool for <em>19 months</em>, rendering it useless<sup id="fnref:54"><a href="#fn:54" class="footnote-ref" role="doc-noteref">54</a></sup>.</p>
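<p>The fix for that second failure is about as unglamorous as security work gets: notice when a certificate is about to expire. A bare-bones sketch using only Python’s standard library – illustrative, since a real inspection appliance may not expose its certificate this conveniently, and the host name below is made up:</p>
<pre><code class="language-python">import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return the number of days until the TLS certificate served by host expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

# Hypothetical host; swap in whatever your monitoring actually depends on.
if days_until_expiry("monitoring.example.com") < 30:
    print("Renew the certificate before your traffic inspection quietly goes dark.")
</code></pre>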
<p>Equifax simultaneously FOMOsec’d and YOLOsec’d, demonstrating the conceptual compatibility of the horseshoe’s ends. The same security team can both be like, “We need to stop nation states!” and also completely fail to patch their shit.</p>
<p>I argue the general case that #fomosec almost necessarily engenders #yolosec elsewhere, not unlike life outside of security. An obsession with the perceived inadequacy of your own life in light of the perceived excellence of others’ lives (FOMO) is likely to lead you to take extreme action to “prove” how exciting and fun your own life is (YOLO). For instance, college students who experience more FOMO also are more willing to place themselves in riskier social situations and make impulsive, embarrassing, or physically harmful decisions<sup id="fnref:55"><a href="#fn:55" class="footnote-ref" role="doc-noteref">55</a></sup>.</p>
<p>The yearning begotten by FOMO to belong to a social group and receive praise from it leads people to pursue novel experiences, with the expectation that these experiences will arouse approval from others<sup id="fnref:56"><a href="#fn:56" class="footnote-ref" role="doc-noteref">56</a></sup>. That is to say, FOMOsec is quite likely to lead defenders to make YOLOsec-flavored decisions, sprinting down a path of myopia filled with seemingly impressive feats – whether buying sexy tech, paying out the nose for “exclusive” threat exposés, being on an advisory board of a hot infosec startup, attending VIP conference parties, and so forth – that are entirely uncoupled from what is required to ensure business operations are pragmatically protected.</p>
<p>This may be a shock to some security readers, whose self-image might shatter at the thought that they could allow – let alone foster – #yolosec. But, when you allow #fomosec, when you want no security stones left unturned, when you demand security approvals on every last bit of new code, or when you lust for security gaining a sacred seat at the Big Kids’ Business Table, you are losing sight of your organization’s priorities and thus inherently routing limited resources in suboptimal directions. You fight your eng org to integrate the vulnerability scanning tool made by the company who let you meet Mr. Robot at RSAC into your organization’s code repo, and now you gain the glorious outcome of developers ignoring the tool’s findings and resenting you – thereby deciding to stay quiet about security issues they do find – while your precious security budget is six figures lighter. You did it!</p>
<p>And before you think that this could not possibly apply to you, consider this: you could be under the influence of FOMO as you read this and be unfeignedly unaware of how it is negatively impacting your work<sup id="fnref:57"><a href="#fn:57" class="footnote-ref" role="doc-noteref">57</a></sup>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>If security must shun both YOLOsec and FOMOsec, how should it look instead? To simultaneously alleviate a longing for belonging, envy, and myopia, infosec defenders must seek out and share the identity of “builder”<sup id="fnref:58"><a href="#fn:58" class="footnote-ref" role="doc-noteref">58</a></sup> with software engineers<sup id="fnref:59"><a href="#fn:59" class="footnote-ref" role="doc-noteref">59</a></sup>. Aligning infosec metrics to software delivery metrics facilitates the alignment of infosec work to software delivery work. Acting upon this alignment – not just paying lip service – engenders the opportunity for security teams to more tangibly connect the work they perform with value and meaning produced.</p>
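<p>What “aligning metrics” can look like in practice is almost anticlimactically simple: put security incidents on the same recovery clock that ops already uses. A back-of-the-napkin sketch (the incidents and timestamps below are invented for illustration):</p>
<pre><code class="language-python">from datetime import datetime, timedelta

# (detected_at, recovered_at) pairs – reliability and security incidents on one clock.
incidents = [
    (datetime(2020, 5, 1, 9, 0), datetime(2020, 5, 1, 11, 30)),   # bad deploy rolled back
    (datetime(2020, 5, 7, 2, 15), datetime(2020, 5, 7, 2, 45)),   # expired cert replaced
    (datetime(2020, 5, 19, 14, 0), datetime(2020, 5, 20, 1, 0)),  # leaked API key revoked
]

durations = [recovered - detected for detected, recovered in incidents]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR across all incident types: {mttr}")
</code></pre>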
<p>If you can understand nuance in security problems, you will absolutely be valued by your organization. If you can support the customer experiences required to facilitate business success while ensuring ongoing operational sustainability, it is difficult to imagine your organization viewing you as a nuisance or cost center. FOMOsec poisons missions, pulling them away from achieving business goals, while YOLOsec erodes the prospect of ongoing sustainability.</p>
<p>Perhaps what is most needed is to shed the label of “security” entirely to encourage a restructuring towards “resilience.” Organizations do not need professionals who self-identify as critics or “breakers”; they need professionals who self-identify as builders but who take pride in building robust systems that can quickly adapt when exposed to any sort of incident – whether an outage caused by an attacker or a performance bug.</p>
<p>That, I think, is the easiest way to kill #fomosec and #yolosec in one fell swoop: the recognition that outcomes are everything and that the differentiation between performance and security concerns in the context of resilience is an unnecessary, outdated construct. #yolosec cannot thrive if engineers are accountable for minimizing instability, regardless of its source. #fomosec cannot thrive if security concerns are treated equally to performance concerns, subject to the same pragmatic prioritization.</p>
<p>The infosec industry would hate it (how many billions of dollars less would vendors make?) and I would lose a multitude of industry insanities to explore… but how much time, money, user pain, and wasted fucks given would we save? I think we should keep an open mind.</p>
<hr>
<p>Thank you shoutouts to Dr. Nicole Forsgren, Camille Fournier, Kyle Kingsbury, Ryan Petrich, Andrew Ruef, and James Turnbull.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Shortridge, K. (2017). <a href="/speaking/us-17-Shortridge-Big-Game-Theory-Hunting.pdf">Big Game Theory Hunting: The Peculiarities of Human Behavior in the InfoSec Game</a>. Presented at Black Hat USA, Las Vegas, N.V.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>To be fair, I leveraged yolosec previously for my educational shitpost <a href="/blog/posts/darth-jar-jar-model-infosec-innovation/">“Darth Jar Jar: a Model for Infosec Innovation.”</a>&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>I think this argument can be extended to public sector organizations, but it is not a hill on which I am willing to die. My hot take would be that the mission of defense and intelligence agencies is inherently one of national security, and thus enhanced investment into infosec does not constitute #fomosec, as it does not obstruct organizational goals and needs (they are actually quite aligned!). It likely goes without saying that #yolosec is incontrovertibly relevant to the public sector; if you are a crayon freebaser and disagree, you should consider <a href="https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach#Warnings">the case of the OPM data breach back in 2015</a>.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Sure, yeah, this is a hot take, but I know of at least one report coming out with stats to support this, and you can peruse the 10-K filings of Fortune 500 companies and see how far down the Risk Factors section you must go to see something specifically concerning cyberattacks.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>As foreshadowed by Footnote 4, I chose five Fortune 500 companies across technology, agriculture, healthcare, logistics, and retail to compile the above sampling of risk factors enumerated in their 10-K filings – all of which come before any mention of data breaches. If you find my sampling lazy (which it definitely is), then I warmly welcome your forthcoming analysis across a more meaningful subset of the Fortune 500.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Listen to my homeboy Dostoevsky, plz: <em>“Above all, don&rsquo;t lie to yourself. The man who lies to himself and listens to his own lie comes to such a pass that he cannot distinguish the truth within him, or around him, and so loses all respect for himself and for others.”</em> (from <em>The Brothers Karamazov</em>).&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>I was initially going to say that infosec is not <a href="https://www.theguardian.com/technology/2017/sep/01/juicero-silicon-valley-shutting-down">the Juicero</a> of enterprise IT, but, upon pondering that analogy, I realized that it actually is quite a bit like Juicero. <a href="https://www.youtube.com/watch?v=_Cp-BGQfpHQ">Juicero required a fancy machine</a> to squeeze juice packets which one could squeeze with one’s own hands, and I am of the belief that software engineers could perform an awful lot of what security teams perform today, with far greater efficiency and without salivating over blinky boxes or viewing vuln research rockstars as senpai, and, further, that the infosec market is incredibly inflated relative to its material importance, which is not dissimilar from Juicero’s own engorged valuation once upon a time.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>Makridis, C. (2020). <a href="https://ssrn.com/abstract=3596933">Do Data Breaches Damage Reputation? Evidence from 43 Companies Between 2002 and 2018</a>.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>This is a common, self-serving myth peddled by infosec vendors and security practitioners alike. The reality is that stock prices tend to slightly dip immediately in response to a data breach, but quickly recover. See: Kvochko, E., &amp; Pant, R. (2015). <a href="https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-prices">Why data breaches don’t hurt stock prices</a>. <em>Harvard Business Review, 31</em>. and Hilary, G., Segal, B., &amp; Zhang, M. H. (2016). <a href="https://www.fordham.edu/download/downloads/id/8180/cyber-risk_disclosure_who_cares.pdf">Cyber-risk disclosure: Who cares?</a>. <em>Georgetown McDonough School of Business Research Paper</em>, (2852519).&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Sobol-Kwapinska, M., Jankowski, T., &amp; Przepiorka, A. (2016). <a href="https://www.researchgate.net/profile/Tomasz_Jankowski/publication/281437827_What_do_we_gain_by_adding_time_perspective_to_mindfulness_Carpe_Diem_and_mindfulness_in_a_temporal_framework/links/5a1813404585155c26a7c3d1/What-do-we-gain-by-adding-time-perspective-to-mindfulness-Carpe-Diem-and-mindfulness-in-a-temporal-framework.pdf">What do we gain by adding time perspective to mindfulness? Carpe Diem and mindfulness in a temporal framework</a>. <em>Personality and Individual Differences, 93</em>, 112-117.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>Ignorance of security issues can also be a source, but it is a less plausible explanation once you move beyond small businesses with an IT org of fewer than ten people.&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>Although I will elaborate on the Equifax breach later in this post in the context of yolo- and fomo-sec, an example of this point is found in the testimony of David Webb, Equifax’s CIO, during the Congressional hearing regarding the breach: <em>“It was not a cost concern. It was–really, if there is a–if there’s a constraint, it’s the domain expertise required to refactor the application, because you need experts who understand what the application does in order to put it in a new environment and do the same thing.”</em>&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>For some proposed solutions to this problem, I will self-servingly recommend reading the forthcoming O’Reilly report on Security Chaos Engineering, of which I am co-author.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>Wegmann, E., Oberst, U., Stodt, B., &amp; Brand, M. (2017). Online-specific fear of missing out and Internet-use expectancies contribute to symptoms of Internet-communication disorder. <em>Addictive Behaviors Reports, 5</em>, 33-42.&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>Borrowing from one of my favorites, <em>Slaughterhouse-Five</em> by Vonnegut.&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>See the concept of “Reference Dependence,” as first exhibited in the OG paper on Prospect Theory by Kahneman and Tversky. Kahneman, D., &amp; Tversky, A. (1979). <a href="https://www.uzh.ch/cmsssl/suz/dam/jcr:00000000-64a0-5b1c-0000-00003b7ec704/10.05-kahneman-tversky-79.pdf">Prospect theory: An analysis of decision under risk</a>. <em>Econometrica, 47</em>, 263-291. Additionally, see a case study on marathon runners and reference dependence in: Markle, A., Wu, G., White, R., &amp; Sackett, A. (2018). <a href="https://ir.stthomas.edu/cgi/viewcontent.cgi?article=1055&amp;context=ocbmktgpub">Goals as reference points in marathon running: A novel test of reference dependence</a>. <em>Journal of Risk and Uncertainty, 56</em>(1), 19-50.&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>Tversky, A., &amp; Kahneman, D. (1991). <a href="https://pdfs.semanticscholar.org/86af/5b4ce3324624bbb499eb79ee0901d6375df9.pdf">Loss aversion in riskless choice: A reference-dependent model</a>. <em>The quarterly journal of economics, 106</em>(4), 1039-1061. Additionally, see a case study on house sellers and loss aversion in: Genesove, D., &amp; Mayer, C. (2001). <a href="https://www.nber.org/papers/w8143.pdf">Loss aversion and seller behavior: Evidence from the housing market</a>. <em>The quarterly journal of economics, 116</em>(4), 1233-1260.&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>Abel, J. P., Buff, C. L., &amp; Burr, S. A. (2016). <a href="https://www.clutejournals.com/index.php/JBER/article/download/9554/9632">Social media and the fear of missing out: Scale development and assessment</a>. <em>Journal of Business &amp; Economics Research (JBER), 14</em>(1), 33-44.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>More specifically, these two dimensions manifest as desiring connectedness and approval from others vs. wanting to avoid feeling alienated and ignored.&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>As it is my term, I find it acceptable to broaden the definition beyond strictly FOMO to also include the desire for belonging, anxiety about isolation, underlying envy, and so forth. It is, perhaps, a YOLO move to do so.&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>Baumeister, R. F., &amp; Leary, M. R. (1995). <a href="https://www.researchgate.net/publication/15420847_The_Need_to_Belong_Desire_for_Interpersonal_Attachments_as_a_Fundamental_Human_Motivation">The need to belong: desire for interpersonal attachments as a fundamental human motivation</a>. <em>Psychological bulletin, 117</em>(3), 497.&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>This is true despite protestations by some members of the infosec community that they are more enlightened than the general human population because they do not make “dumb” security mistakes, and despite a non-trivial portion of infosec conference attendees residing in the bottom quintile of hygiene standards.&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>Lai, C., Altavilla, D., Ronconi, A., &amp; Aceto, P. (2016). Fear of missing out (FOMO) is associated with activation of the right middle temporal gyrus during inclusion social cue. <em>Computers in Human Behavior, 61</em>, 516-521.&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>Knowles, M. L., &amp; Gardner, W. L. (2008). <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.830.5393&amp;rep=rep1&amp;type=pdf">Benefits of membership: The activation and amplification of group identities in response to social rejection</a>. <em>Personality and Social Psychology Bulletin, 34</em>(9), 1200-1213.&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:25">
<p>Lakin, J. L., Chartrand, T. L., &amp; Arkin, R. M. (2008). <a href="https://faculty.fuqua.duke.edu/~tlc10/bio/TLC_articles/2008/Lakin_Chartrand_Arkin_2008.pdf">I am too just like you: Nonconscious mimicry as an automatic behavioral response to social exclusion</a>. <em>Psychological science, 19</em>(8), 816-822.&#160;<a href="#fnref:25" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:26">
<p>Smith, R. H., &amp; Kim, S. H. (2007). Comprehending envy. <em>Psychological bulletin, 133</em>(1), 46.&#160;<a href="#fnref:26" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:27">
<p>Menon, T., &amp; Thompson, L. (2010). <a href="https://hbr.org/2010/04/envy-at-work">Envy at work</a>. <em>Harvard business review, 88</em>(4), 74-79.&#160;<a href="#fnref:27" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:28">
<p>Shortridge, K., &amp; Forsgren, N. (2019, August). <a href="/speaking/us-19-Shortridge-Forsgren-Controlled-Chaos-the-Inevitable-Marriage-of-DevOps-and-Security.pdf">Controlled Chaos: The Inevitable Marriage of DevOps &amp; Security</a>. Presented at Black Hat USA, Las Vegas, N.V.&#160;<a href="#fnref:28" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:29">
<p>Yin, L., Wang, P., Nie, J., Guo, J., Feng, J., &amp; Lei, L. (2019). <a href="https://www.researchgate.net/profile/Pengcheng_Wang22/publication/334173863_Social_networking_sites_addiction_and_FoMO_The_mediating_role_of_envy_and_the_moderating_role_of_need_to_belong/links/5d53c07092851c93b62e6914/Social-networking-sites-addiction-and-FoMO-The-mediating-role-of-envy-and-the-moderating-role-of-need-to-belong.pdf">Social networking sites addiction and FoMO: The mediating role of envy and the moderating role of need to belong</a>. <em>Current Psychology, 1-9</em>.&#160;<a href="#fnref:29" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:30">
<p>Duman, H., &amp; Ozkara, B. Y. (2019). <a href="http://acikerisim.rumeli.edu.tr:6060/xmlui/bitstream/handle/1/139/The%20impact%20of%20social%20identity%20on%20online%20game%20addiction.pdf?sequence=1&amp;isAllowed=y">The impact of social identity on online game addiction: the mediating role of the fear of missing out (FoMO) and the moderating role of the need to belong</a>. <em>Current Psychology, 1-10</em>.&#160;<a href="#fnref:30" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:31">
<p>Kang, I., Cui, H., &amp; Son, J. (2019). <a href="https://www.mdpi.com/2071-1050/11/17/4734/pdf">Conformity consumption behavior and FoMO</a>. <em>Sustainability, 11</em>(17), 4734.&#160;<a href="#fnref:31" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:32">
<p>Mead, N. L., Baumeister, R. F., Stillman, T. F., Rawn, C. D., &amp; Vohs, K. D. (2011). <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.459.4993&amp;rep=rep1&amp;type=pdf">Social exclusion causes people to spend and consume strategically in the service of affiliation</a>. <em>Journal of consumer research, 37</em>(5), 902-919.&#160;<a href="#fnref:32" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:33">
<p>Practitioners who are more secure in their social standing and group ties may be more immune to this kind of consumption. If only people spent as much on therapy as they do on conference passes.&#160;<a href="#fnref:33" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:34">
<p>Aydin, H. (2018). <a href="https://dergipark.org.tr/en/download/article-file/565502">A Systematic Review on the Use of FoMO as a Social Marketing Trend in Marketing Area</a>. <em>İzmir Katip Çelebi Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 1</em>(1), 1-9.&#160;<a href="#fnref:34" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:35">
<p>I hope someone figures out a way to turn FOYO Security into “FROYO Security.” It would really enhance infosec culture (ba dum tss).&#160;<a href="#fnref:35" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:36">
<p>A spoof on: Butthole Surfers (1996). <a href="https://www.youtube.com/watch?v=G8sGmSEehi4">Pepper</a>. On <em>Electriclarryland</em>. Capitol Records.&#160;<a href="#fnref:36" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:37">
<p>While I am loath to paint with so broad a brush as to call these fears histrionic, it has always struck me as strange how often security leaders seem worried that the one signal they miss will end their world, and yet are seemingly content remaining in the dark as far as establishing outcome-aligned success measurements, understanding why the humans in their organization are “failing” to adhere to security policies, or learning the persuasive communication skills necessary to better foster consensus in their organization – all of which are far more likely to guarantee their successful tenure.&#160;<a href="#fnref:37" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:38">
<p>Netzer, O. (2017, May 26). <a href="https://www8.gsb.columbia.edu/articles/ideas-work/more-data-isn-t-always-answer">More Data Isn’t Always the Answer</a> [Blog post].&#160;<a href="#fnref:38" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:39">
<p>Lerner, A. V. (2014). <a href="http://awa2015.concurrences.com/IMG/pdf/big.pdf">The role of &lsquo;big data&rsquo; in online platform competition</a>.&#160;<a href="#fnref:39" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:40">
<p>Veldkamp, L., &amp; Chung, C. (2019, October). <a href="https://www0.gsb.columbia.edu/faculty/lveldkamp/papers/JEL_MacroDataLV_v7.pdf">Data and the aggregate economy</a>. In <em>Annual Meeting Plenary</em> (No. 2019-1). Society for Economic Dynamics.&#160;<a href="#fnref:40" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:41">
<p>Davenport, T. H., &amp; Beck, J. C. (2001). The attention economy. <em>Ubiquity, 2001</em>(May), 1-es.&#160;<a href="#fnref:41" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:42">
<p>Geri, N., &amp; Geri, Y. (2011). <a href="http://www.inform.nu/Articles/Vol14/ISJv14p047-059Geri587.pdf">The Information Age Measurement Paradox: Collecting Too Much Data</a>. <em>Informing Sci. Int. J. an Emerg. Transdiscipl., 14</em>, 47-59.&#160;<a href="#fnref:42" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:43">
<p>Woods, D. D., Patterson, E. S., &amp; Roth, E. M. (2002). <a href="https://apps.dtic.mil/sti/pdfs/ADA371142.pdf">Can we ever escape from data overload? A cognitive systems diagnosis</a>. <em>Cognition, Technology &amp; Work, 4</em>(1), 22-36.&#160;<a href="#fnref:43" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:44">
<p>Kirsh, D. (2000). <a href="https://philarchive.org/archive/KIRAFT">A few thoughts on cognitive overload</a>.&#160;<a href="#fnref:44" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:45">
<p>A portmanteau of information and intoxication.&#160;<a href="#fnref:45" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:46">
<p>Bajari, P., Chernozhukov, V., Hortaçsu, A., &amp; Suzuki, J. (2018). <em><a href="https://www.nber.org/papers/w24334.pdf">The impact of big data on firm performance: An empirical investigation</a></em> (No. w24334). National Bureau of Economic Research.&#160;<a href="#fnref:46" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:47">
<p>Varian, H. R. (2014). <a href="https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.28.2.3?&amp;utm_content=buffera0009&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Big data: New tricks for econometrics</a>. <em>Journal of Economic Perspectives, 28</em>(2), 3-28.&#160;<a href="#fnref:47" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:48">
<p>Although a few years out of date, this presents a nice recap of some of the major vulnerability branding campaigns: Power, J. <a href="https://medium.com/threat-intel/bug-branding-heartbleed-14ef1a64047f">Celebrity vulnerabilities: A short history of bug branding</a> [Blog post].&#160;<a href="#fnref:48" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:49">
<p>The gross bamboozling of the general public (and of themselves) by defenders aboard the vulnerability hypetrain was called out two decades ago by <a href="https://en.wikipedia.org/wiki/Antisec_Movement">the Anti-sec Movement</a>. Unfortunately for end users, the movement failed. At this point, I do not think there is any hope of returning the hypetrain to the station, as there are too many people who profit from its perpetuation.&#160;<a href="#fnref:49" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:50">
<p>Frazelle, J. (2019, July 23). <a href="https://blog.jessfraz.com/post/the-business-executives-guide-to-kubernetes/">The Business Executive&rsquo;s Guide to Kubernetes</a> [Blog Post].&#160;<a href="#fnref:50" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:51">
<p>Again, as noted by the Anti-sec Movement many years ago, it seems somewhat ridiculous that we hand over exploits to skidiots* who barely know what they are doing. As someone whose name I have completely forgotten once suggested, the first major “cybergeddon” attack against critical infrastructure will likely be at the hands of a script kiddie who stumbles upon the system via Shodan and does not even realize its importance before trying some cool shit out with Metasploit. (*credits to @r00tkillah for the term “skidiots”).&#160;<a href="#fnref:51" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:52">
<p>To be clear, software engineers are not immune from this phenomenon, which could be called FOMOps or #fomodev, although it is out of scope of this blog post. Consider engineers who see a new shiny library or other trendy software thingamajigger on HackerNews and decide that their current systems are now so tragically unfashionable that a makeover is required ASAP, despite “legacy” options offering a superior fit with the organization’s operational needs.&#160;<a href="#fnref:52" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:53">
<p>Federal Trade Commission. (2019, July 22). <em><a href="https://www.ftc.gov/news-events/press-releases/2019/07/equifax-pay-575-million-part-settlement-ftc-cfpb-states-related">Equifax to Pay $575 Million as Part of Settlement with FTC, CFPB, and States Related to 2017 Data Breach</a></em> [Press Release].&#160;<a href="#fnref:53" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:54">
<p>U.S. House of Representatives Committee on Oversight and Government Reform. (2018, December). <em><a href="https://republicans-oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf">The Equifax Data Breach: Majority Staff Report, 115th Congress</a></em> [Report].&#160;<a href="#fnref:54" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:55">
<p>Riordan, B. C., Flett, J. A., Hunter, J. A., Scarf, D., &amp; Conner, T. S. (2015). <a href="https://www.researchgate.net/profile/Damian_Scarf/publication/283298745_Fear_of_Missing_Out_FoMO_The_relationship_between_FoMO_alcohol_use_and_alcohol-related_consequences_in_college_students/links/5653fb7608aefe619b1979dc/Fear-of-Missing-Out-FoMO-The-relationship-between-FoMO-alcohol-use-and-alcohol-related-consequences-in-college-students.pdf">Fear of missing out (FoMO): The relationship between FoMO, alcohol use, and alcohol-related consequences in college students</a>. <em>Annals of Neuroscience and Psychology, 2</em>(7), 1-7.&#160;<a href="#fnref:55" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:56">
<p>Przybylski, A. K., Murayama, K., DeHaan, C. R., &amp; Gladwell, V. (2013). <a href="https://d1wqtxts1xzle7.cloudfront.net/44255710/Motivational_emotional_and_behavioral_co20160331-27397-tg2tl4.pdf?1459425071=&amp;response-content-disposition=inline%3B+filename%3DMotivational_emotional_and_behavioral_co.pdf&amp;Expires=1599944939&amp;Signature=RT2B0JbUIFMyn1HNjMa73qXkeiwX7ZTudu3557AnHsF4Pq8lilkcWW9SfxfONxj3E9L2uP5d9hBzeF~aYyapP0YmEDrzH0Fh4MCITn81x~xdQq9766j1QcN76QBzcGwOSIjNskPSQSG1M63Ug3kKk8xjizwJcPbWDya2aCycEZFMaumyYNtdoLGHBq0DbDtRjo9Ma9VSa6f~stesO35vb~CEysTtVYt1YPbNT02F47tBzgVxOHJf4cqbHKdPC0K7HX-q6FKTRr~nWJ5aulOvc8j2uiYl8O6t9tOYd~d1za87FIvrdmAtQu9TPNn4O934zScXb9hntVvbl6CAArBniw__&amp;Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA">Motivational, emotional, and behavioral correlates of fear of missing out</a>. <em>Computers in Human Behavior, 29</em>(4), 1841-1848.&#160;<a href="#fnref:56" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:57">
<p>Budnick, C. J., Rogers, A. P., &amp; Barber, L. K. (2020). The fear of missing out at work: examining costs and benefits to employee health and motivation. <em>Computers in Human Behavior, 104</em>, 106161.&#160;<a href="#fnref:57" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:58">
<p>To clarify, the identity of “builder” is explicitly <em>not</em> about taking pride in building your own SIEM, or log ingestion pipeline, or whatever other wheel security people maintain a predilection for reinventing.&#160;<a href="#fnref:58" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:59">
<p>The next most obvious alternative is to co-opt the identity of “breaker” with attackers and vulnerability researchers. This is likely seen more frequently than the co-opting of “builder,” perhaps as evidenced by attempts at building red teams at organizations that have yet to master security “basics,” as well as the legions of security engineers who lament the defensive parts of their role and yearn for more offense research time. Breaking can be valuable with the appropriate feedback loops in place, but an honest appraisal of infosec professionals’ desire to break things would assuredly surface interest- and ego-based motivations rather than a motivation to improve software quality internally.&#160;<a href="#fnref:59" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Cyber Buzzword Bingo: All Editions</title>
            <link>https://kellyshortridge.com/blog/posts/buzzword-bingo-all-editions/</link>
            <pubDate>Wed, 05 Aug 2020 12:47:37 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/buzzword-bingo-all-editions/</guid>
            <description>Every year, I attempt to distill the infosec zeitgeist into a bingo card filled with all the obnoxious, flamboyant, and inane buzzwords permeating the industry. This blog post includes all editions of the bingo card for your amusement and / or existential crises, and will be updated annually.
2023 Cyber Startup Buzzword Bingo Card See my blog post for detailed analysis.
2022 Infosec Buzzword Bingo Card See my blog post for detailed analysis.
2020 Infosec Buzzword Bingo Card See my article in VICE Motherboard for detailed analysis.
2019 Infosec Buzzword Bingo Card See my blog post for detailed analysis.
2018 Infosec Buzzword Bingo Card 2017 Infosec Buzzword Bingo Card It all started with a Tweet. </description>
            <atom:content type="html"><![CDATA[<p>Every year, I attempt to distill the infosec zeitgeist into a bingo card filled with all the obnoxious, flamboyant, and inane buzzwords permeating the industry. This blog post includes all editions of the bingo card for your amusement and / or existential crises, and will be updated annually.</p>
<h2 id="2023-cyber-startup-buzzword-bingo-card">2023 Cyber Startup Buzzword Bingo Card</h2>
<p>See <a href="https://kellyshortridge.com/blog/posts/cyber-startup-buzzword-bingo-2023/">my blog post</a> for detailed analysis.</p>
<p><img src="/blog/img/cyber-startup-buzzword-bingo-2023.png" alt="The 2023 edition of the cyber buzzword bingo card. The background is terrible cyber art that is roughly a head made up of code with glowing digital bits swirling around it. The head hovers over a purple and pink checkered floor; it&amp;rsquo;s all very cyber. In order from left to right, starting on the upper left side, the buzzwords included on the card are as follows. Posture. Engine. Coverage. DevSecOps. Sensitive. Faster. Discover. Full. API. Control. Modern. Complete. Platform, which is the center word of the bingo card. Workflows. Trusted. Prioritize. Zero Trust. Exposure. CI/CD. Surface. Developers. Single. Cloud. Insights. Supply Chain."></p>
<h2 id="2022-infosec-buzzword-bingo-card">2022 Infosec Buzzword Bingo Card</h2>
<p>See <a href="https://kellyshortridge.com/blog/posts/infosec-buzzword-bingo-2022/">my blog post</a> for detailed analysis.</p>
<p><img src="/blog/img/infosec-startup-buzzword-bingo-2022.png" alt="Infosec Startup Buzzword Bingo card for 2022. The background is terrible cyber art that is roughly a translucent caution sign resting above vaporwave colored cubes that are meant to look vaguely like a circuit board. In order from left to right, starting on the upper left side, the buzzwords included on the card are as follows. Automated. Discover. Cloud-native. Engine. Visibility. API. AI / ML. Seamless. Posture. One-click. Powerful. Dynamic. Continuous, which is the center word of the bingo card. Deep. Accelerate. Simple. Zero Trust. Agentless. Native. Lifecycle. Platform. Accurate. Enforce. Real-time. Context."></p>
<h2 id="2020-infosec-buzzword-bingo-card">2020 Infosec Buzzword Bingo Card</h2>
<p>See <a href="https://www.vice.com/en_us/article/epgp9j/infosec-buzzword-bingo-2020-edition">my article in VICE Motherboard</a> for detailed analysis.</p>
<p><img src="/blog/img/infosec-startup-buzzword-bingo-2020.png" alt="Infosec Startup Buzzword Bingo card for 2020. The buzzwords, from top left to bottom right include: Real-time. Complex. Comprehensive. Flexible. Machine Learning and AI. Modern. Unique. Deep. Accurate. Leading. Seamless. Accelerate. Visibility (the center square). Next-gen. Actionable. Sophisticated. Simplify. Proactive. Gaps. Empower. Continuous. Unknown. Insights. Robust. Advanced"></p>
<h2 id="2019-infosec-buzzword-bingo-card">2019 Infosec Buzzword Bingo Card</h2>
<p>See <a href="/blog/posts/infosec-buzzword-bingo-2019-edition/">my blog post</a> for detailed analysis.</p>
<p><img src="/blog/img/infosec-startup-bingo-2019.png" alt="Infosec Startup Buzzword Bingo card for 2019. The buzzwords, from top left to bottom right, include: Threat. Context. Machine Learning and AI. Discover. Real-time. Cyber. Insights. Seamless. Comprehensive. Simplify. Intelligence. Scalable. Automation (the center square). Actionable. Platform. Deep. Orchestrated. Empower. Prioritization. Behavioral. Visibility. Workflow. Advanced. Unknown. Continuous."></p>
<h2 id="2018-infosec-buzzword-bingo-card">2018 Infosec Buzzword Bingo Card</h2>
<p><img src="/blog/img/infosec-startup-buzzword-bingo-2018.png" alt="Infosec Startup Buzzword Bingo card for 2018. The buzzwords, from top left to bottom right, include: Automated. Context. Advanced and Unknown. Actionable. AI. Insights. Continuous. Sophisticated. Comprehensive. Behavioral. Platform. Proactive. Threat (the center square). Purpose-built. Machine learning. Next-gen. Prioritize. Intelligence. Scalable. Predictive. Visibility. Seamless. Hunt. Unparalleled and Unprecedented. Real-time."></p>
<h2 id="2017-infosec-buzzword-bingo-card">2017 Infosec Buzzword Bingo Card</h2>
<p>It all started with <a href="https://twitter.com/swagitda_/status/912494974779973632">a Tweet</a>.
<img src="/blog/img/infosec-startup-buzzword-bingo-2017.png" alt="Infosec Startup Buzzword Bingo card for 2017. The buzzwords, from top left to bottom right, include: Continuous. Next-gen. Names with Cy, Threat, or Lock in them. Machine learning. Purpose-built. Visibility. Best-of-breed. Ex-intelligence community founders. AI. Real-time. Thwarts. Unknown or Advanced Threats. Cyber (the center square). Behavioral. Automation. Actionable. Military-grade. Proactive. Anomalies. Single-pane-of-glass. Intelligence. Sophisticated. Context-aware. Deep web. End-to-end."></p>
]]></atom:content>
        </item>
        
        <item>
            <title>Resilience in Security 101</title>
            <link>https://kellyshortridge.com/blog/posts/resilience-in-security-101/</link>
            <pubDate>Mon, 18 May 2020 20:00:41 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/resilience-in-security-101/</guid>
            <description>This post highlights the key things you need to know about what resilience means in an information security context. It’s intended to serve as a 101 for people involved in building, maintaining, and securing systems.
It’s also meant to serve as an antidote for the recent twisting of the term by self-serving vendors and #thotleaders who are blisteringly unaware of the decades of research outside of infosec on the topic.
tl;dr – Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.
The basics of resilience in infosec Resilience in information security (and in other domains) consists of three characteristics: robustness, adaptability, and transformability.1
Robustness = withstanding and resisting a negative event. Think of this as your systems’ ability to prevent or block pwnage.
Adaptability = managing your current systems’ response to negative events. Think of this as minimizing event impact in your systems, optimizing for system recovery, and minimizing friction involved in changing your systems.
Transformability = reorganizing your system in response to updated assumptions. Think of this as moving to a new system configuration or trajectory when existing conditions are indefensible.
Once more, but less abstractly Let’s walk through a brief practical example to show how each of these characteristics of resilience work together.
Your organization delivers a PHP application to customers. You use inline PHP code within the HTML of your web apps to perform database queries. An injection vulnerability is found in an instance of this inline PHP code.
Robustness manifests as implementation of a web application firewall to stop exploitation of the vuln, along with patching that instance of the vuln. These actions return things to “normal” – the prevailing status quo before discovery of the vuln (the negative event).
Adaptability manifests as removing inline queries from the application, then replacing them with one class that accesses the database and is responsible for all sanitization. This reduces potential incident impact and, by only requiring issues to be fixed in one place, minimizes friction involved in changing the system going forward.
Transformability manifests as re-architecting the application in a different language, like Java, to solve PHP’s endemic security problems. Evidence from application monitoring or publicly-disclosed vulnerabilities in PHP leads to updated assumptions about PHP’s suitability for the application. These assumptions motivate revision of previous choices that result in reorganization of the system to optimize for the evolving reality.
Piecing it together, these activities result in a more flexibly-designed application that can absorb an attack and be reorganized based on its evolving threat context.
What most of the industry gets wrong Robustness does not equal resilience. Robustness is insufficient on its own to foster resilience. Prevention is not resilience. Prevention is insufficient on its own to foster resilience. Being able to block or withstand an attack is not resilience. Being able to block or withstand an attack is insufficient on its own to foster resilience.
I am spelling this out as plainly as I can because I see this misconception all the time now. Resilience is being used as a substitution for robustness, prevention, and “stopping threats” by marketing teams to make their stale pitches feel fresh again. Don’t be bamboozled by these misleading claims.
If you want to understand more why this is such a dangerous conflation, read the Robustness section in my keynote on resilience.
Resilience is not a tech stack or set of tools. Given people still think DevOps is achieved by implementing a certain tech stack, I don’t hold much faith that people will let go of similar notions about resilience. Just like DevOps, resilience is a set of principles and practices. It’s an approach that aims to help you evolve your security as your environment and context evolves.
Certain tools can support your resilience practices – such as transformability via migrating systems from on-prem to the cloud – but you will not be able to “do resilience” just by purchasing tech.
In Conclusion I’ll repeat my definition again: Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.
I firmly believe resilience is the future of information security. Alas, the term is actively being bastardized by some vendors to serve their pestilent purposes. To the practitioners reading (whether you build systems or secure systems), remember that attackers are continuously evolving their methods based on changing context, and it is imperative for you to evolve your methods, too.
Adopting a resilience approach will help you survive – and hopefully even thrive – in this dynamic digital world of ours.
Thanks to my engineering and security friends for their feedback on drafts, especially Camille, James, and Vladimir!
I recommend reading the seminal paper “Resilience, adaptability and transformability in social–ecological systems” by B Walker, CS Holling, SR Carpenter, A Kinzig (2004).
For a longer discussion about resilience in infosec, check out the full text of my keynote on the topic from 2017. ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This post highlights the key things you need to know about what resilience means in an information security context. It’s intended to serve as a 101 for people involved in building, maintaining, and securing systems.</p>
<p>It&rsquo;s also meant to serve as an antidote for the recent twisting of the term by self-serving vendors and #thotleaders who are blisteringly unaware of the decades of research outside of infosec on the topic.</p>
<blockquote>
<p>tl;dr &ndash; Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.</p>
</blockquote>
<hr>
<h2 id="the-basics-of-resilience-in-infosec">The basics of resilience in infosec</h2>
<p>Resilience in information security (and in other domains) consists of three characteristics: robustness, adaptability, and transformability.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p><strong>Robustness</strong> = withstanding and resisting a negative event. Think of this as your systems’ ability to prevent or block pwnage.</p>
<p><strong>Adaptability</strong> = managing your current systems&rsquo; response to negative events. Think of this as minimizing event impact in your systems, optimizing for system recovery, and minimizing friction involved in changing your systems.</p>
<p><strong>Transformability</strong> = reorganizing your system in response to updated assumptions. Think of this as moving to a new system configuration or trajectory when existing conditions are indefensible.</p>
<hr>
<h2 id="once-more-but-less-abstractly">Once more, but less abstractly</h2>
<p>Let’s walk through a brief practical example to show how each of these characteristics of resilience work together.</p>
<p>Your organization delivers a PHP application to customers. You use inline PHP code within the HTML of your web apps to perform database queries. An injection vulnerability is found in an instance of this inline PHP code.</p>
<p><strong>Robustness</strong> manifests as implementation of a web application firewall to stop exploitation of the vuln, along with patching that instance of the vuln. These actions return things to “normal” – the prevailing status quo before discovery of the vuln (the negative event).</p>
<p><strong>Adaptability</strong> manifests as removing inline queries from the application, then replacing them with one class that accesses the database and is responsible for all sanitization. This reduces potential incident impact and, by only requiring issues to be fixed in one place, minimizes friction involved in changing the system going forward.</p>
<p><strong>Transformability</strong> manifests as re-architecting the application in a different language, like Java, to solve PHP’s endemic security problems. Evidence from application monitoring or publicly-disclosed vulnerabilities in PHP leads to updated assumptions about PHP’s suitability for the application. These assumptions motivate revision of previous choices that result in reorganization of the system to optimize for the evolving reality.</p>
<p>Piecing it together, these activities result in a more flexibly-designed application that can absorb an attack and be reorganized based on its evolving threat context.</p>
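<p>To make the adaptability step more concrete, here is a minimal sketch of what &ldquo;one class that accesses the database and is responsible for all sanitization&rdquo; might look like &ndash; written in Python rather than the example app&rsquo;s PHP, with hypothetical table and class names, so treat it as an analogy rather than a prescription:</p>
<pre><code>import sqlite3

# Minimal sketch: a single point of database access using parameterized
# queries, so callers never build SQL strings themselves and any sanitization
# fix only needs to happen in one place.
class UserStore:
    def __init__(self, path="app.db"):
        self.conn = sqlite3.connect(path)

    def find_by_email(self, email):
        # The ? placeholder lets the driver handle escaping, rather than
        # injection-prone string concatenation in application code.
        cur = self.conn.execute(
            "SELECT id, email FROM users WHERE email = ?", (email,)
        )
        return cur.fetchone()
</code></pre>
<p>The point is structural rather than about the specific library: once queries can only flow through a class like this, sanitization happens in exactly one place.</p>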
<hr>
<h2 id="what-most-of-the-industry-gets-wrong">What most of the industry gets wrong</h2>
<ol>
<li>
<p><strong>Robustness does not equal resilience.</strong> Robustness is insufficient on its own to foster resilience. Prevention is not resilience. Prevention is insufficient on its own to foster resilience. Being able to block or withstand an attack is not resilience. Being able to block or withstand an attack is insufficient on its own to foster resilience.</p>
<p>I am spelling this out as plainly as I can because I see this misconception all the time now. Resilience is being used as a substitution for robustness, prevention, and “stopping threats” by marketing teams to make their stale pitches feel fresh again. Don’t be bamboozled by these misleading claims.</p>
<p>If you want to understand more why this is such a dangerous conflation, read <a href="/blog/posts/red-pill-of-resilience-infosec/#robustness">the Robustness section</a> in my keynote on resilience.</p>
</li>
<li>
<p><strong>Resilience is not a tech stack or set of tools.</strong> Given people still think DevOps is achieved by implementing a certain tech stack, I don’t hold much faith that people will let go of similar notions about resilience. Just like DevOps, resilience is a set of principles and practices. It’s an approach that aims to help you evolve your security as your environment and context evolves.</p>
<p>Certain tools can support your resilience practices &ndash; such as transformability via migrating systems from on-prem to the cloud &ndash; but you will not be able to “do resilience” just by purchasing tech.</p>
</li>
</ol>
<hr>
<h2 id="in-conclusion">In Conclusion</h2>
<p>I’ll repeat my definition again: Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.</p>
<p>I firmly believe resilience is the future of information security. Alas, the term is actively being bastardized by some vendors to serve their pestilent purposes. To the practitioners reading (whether you build systems or secure systems), remember that attackers are continuously evolving their methods based on changing context, and it is imperative for you to evolve your methods, too.</p>
<p>Adopting a resilience approach will help you survive – and hopefully even thrive – in this dynamic digital world of ours.</p>
<hr>
<hr>
<p>Thanks to my engineering and security friends for their feedback on drafts, especially Camille, James, and Vladimir!</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>I recommend reading the seminal paper <a href="https://www.ecologyandsociety.org/vol9/iss2/art5/">&ldquo;Resilience, adaptability and transformability in social–ecological systems&rdquo;</a> by B Walker, CS Holling, SR Carpenter, A Kinzig (2004).</p>
<p>For a longer discussion about resilience in infosec, check out the <a href="/blog/posts/red-pill-of-resilience-infosec/">full text of my keynote</a> on the topic from 2017.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Kelly&#39;s Hierarchy of Security Product Needs &amp; Vendor Selection v1.0</title>
            <link>https://kellyshortridge.com/blog/posts/kellys-hierarchy-of-security-product-needs/</link>
            <pubDate>Tue, 05 May 2020 07:57:42 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/kellys-hierarchy-of-security-product-needs/</guid>
            <description>
What more is there to say, really?
Thanks to Chris Hoff in 2013 for the original inspiration.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/hierarchy-infosec-product-needs-2020.png" alt="A triangle, similar to Maslow&amp;rsquo;s hierarchy of needs, but for infosec. Starting at the bottom, the first row includes: single PAIN of glass, AI / Deep Learning / Galactic Magic, less than 100 percent false positive rate, STOPS ALL ZERO DAY, introduces 0day, founded in Silly Valley or Tel Aviv. The second row includes: celeb meet and greet at RSA, Gartner sycophant (crossed out) vassal (crossed out) Magic Quadrant Leader, NBAD / SINBAD. The third row includes: acquired by Cisco or Palo Alto Networks, PERIMETER DEFENSE BUT CLOUD, friend is on the advisory board, sent you a free Apple Watch for a demo. The fourth row includes: published a branded vulnerability, business plan or child&amp;rsquo;s fingerpaint art?, AI less accurate than nested if statements, feature as a product. The fifth row includes: security posture chiropractor, solves the OWASP top 10, less painful than a swarm of literal wasps, the Miss Cleo of Cyber. The sixth row includes: 60 percent of the time, it works every time. The seventh row includes: MITRE ATT&amp;CK namedrop, whatever tf platform means, outlining clouds in crayon as asset management. The eighth row includes: has a zero trust whitepaper, greater than 10 percent not-a-white-dudes on the leadership team. The ninth row includes: next-gen, &amp;ldquo;Shift Left,&amp;rdquo; enterprise-grade. The tenth row includes: threat model being addressed isn&amp;rsquo;t from a DMT trip, subscription model. The eleventh row includes: will be obsolete after re:Invent or Cloud Next. The twelfth row includes: FUD-free marketing, UX that isn&amp;rsquo;t just demoware. The thirteenth row includes: IAM as the new perimeter. The fourteenth row includes: doesn&amp;rsquo;t cause a kernel panic. The fifteenth row includes: helps satisfy SOC2. The sixteenth row includes: resilience. The seventeenth row includes: developers don&amp;rsquo;t hate it. The eighteenth row includes: product works. There is also an eye with a hot pink iris and a hexagram pupil at the top, for flare."></p>
<p>What more is there to say, really?</p>
<hr>
<p>Thanks to Chris Hoff in 2013 for the <a href="https://www.rationalsurvivability.com/blog/2013/11/maslows-hierarchy-of-security-product-needs-vendor-selection/">original inspiration</a>.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Shall We Play a Coordination Game?</title>
            <link>https://kellyshortridge.com/blog/posts/shall-we-play-a-coordination-game/</link>
            <pubDate>Wed, 08 Apr 2020 20:22:57 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/shall-we-play-a-coordination-game/</guid>
            <description>
“The seasons change; and both of us lose our harvests for want of mutual confidence and security.” – David Hume, A Treatise of Human Nature
As I expounded before, security should be treated as a product – as “something created through a process that provides benefits” to the organization. Every product has a purpose, something it is trying to help its users accomplish. If security is a product, what is its purpose? What is it trying to help its users – the organization – accomplish? Without this purpose, security can become aimless – falling into the wretched trap of “security for its own sake.”
If we treat security instead as a business enabler, what results? In many tech organizations, the most critical business enabler is software delivery performance. Therefore, security should cooperate with relevant stakeholders who focus on software delivery performance, especially the engineering organization (colloquially known as the DevOps function).
As is well-discussed in the industry, the relationship between security and DevOps is typically described as fraught, icy, or adversarial – a far cry from cooperative, let alone collaborative. There are cultural reasons for this, but I will not be covering them in this post. Instead, I am going to draw on behavioral economics, looking both to cooperation games within game theory and to the concept of moral hazard as a lens through which we can better understand the security and DevOps relationship.
So, shall we play a coordination game? Let’s dive in.
Barriers to Cooperation Cooperation Games Most relationships in life can be considered through the concept of “games,” behavioral relationships between decision-makers involving certain rules or conditions. It is worth exploring the different potential attributes of games as a backdrop for how to think about the game infosec plays with its DevOps peers.
Games can involve cooperation or non-cooperation. Non-cooperative games involve competition between the game’s players, without any sort of external authority to enforce cooperation between the players – resulting in no chance for alliance. The game between attackers and defenders can be considered a non-cooperative game. Cooperative games, unsurprisingly, involve cooperation rather than competition. Players can form coalitions to coordinate their strategies and share potential payoffs1. One of the most famous games for studying cooperation is the Prisoner’s Dilemma, a non-zero-sum game in which two prisoners must each decide whether to confess or stay silent.
What is a non-zero-sum game? In zero-sum games, the total payoffs for all players in the game add up to zero. That is, one player’s gain will equal the other player’s loss exactly. Few real-world scenarios involve zero-sum games, but poker is an example of one. Non-zero-sum games are thus games in which the total payoffs for all players do not add up to zero – the gain by one player does not result in an equivalent loss by the other player. Free trade is an example of a non-zero-sum game in which all players can benefit in a win-win scenario. The aforementioned Prisoner’s Dilemma is non-zero-sum, as it can result in a win-win or lose-lose scenario.
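As a rough aside – a hypothetical sketch of my own, not drawn from the cited work – the zero-sum property is mechanical to check: add up each outcome’s payoffs and see whether every total is zero.
# Hypothetical 2x2 payoffs, written as (row player, column player)
poker_like = {("bet", "call"): (5, -5), ("bet", "fold"): (2, -2)}
trade_like = {("open", "open"): (4, 4), ("open", "closed"): (-1, 2)}

def is_zero_sum(game):
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(poker_like))  # True: one side's gain is exactly the other's loss
print(is_zero_sum(trade_like))  # False: both sides can gain (or lose) together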
Information is an essential component of every game, as information is at the heart of strategic interaction – particularly information regarding other players’ decision-making in the game. In perfect information games, all players know all decisions previously made by the other players. As you might suspect, real life rarely allows such omniscience, outside of games like tic-tac-toe or chess. Imperfect information is common to our existence, wherein players cannot see all prior decision-making by the other players within the game.
Complete and incomplete information is another informational characteristic of games. In a game with complete information, players understand the potential payoffs, risk tolerances, strategies, and player “types” among other players. Again, complete information is largely unrealistic in the real-world. Instead, games with incomplete information are most common in human interaction, wherein players cannot discern other players’ preferences, motivations, and other strategic information.
Although it may scandalize true game theorists, for perspicuity’s sake, I will summarize these information-based characteristics into the concept of information asymmetry – that players possess relevant information to which the other players do not have access. Typically, information asymmetry is analyzed through the lens of transactions, though I argue it all comes back to decision-making between relevant players.
I believe one can view DevOps and infosec’s relationship as a coordination game with information asymmetry. I do not believe it is a non-cooperation game, as there is ample room for infosec and DevOps to form a coalition, and there is potential for the organization to serve as an external enforcer of cooperation. There is also information asymmetry, as neither DevOps nor infosec has perfect insight into the other’s decision-making process or access to all relevant information (much of which might be tacit and considered tribal knowledge).
If we proceed with the assumption that DevOps and infosec’s relationship is a coordination game, then our goal is to understand how to encourage cooperation to maximize the collective payoff. This results in a crucial question: what leads to coordination failures? One impetus is strategic uncertainty among players – they do not realize their objectives are aligned. Perhaps obviously, misalignment of objectives can also present friction in cooperation games. Thus, mechanisms are needed either to facilitate cooperation – in the event of aligned preferences – or enforce cooperation – in the event of less aligned preferences.
Interestingly, empirical evidence suggests that humans are far more cooperative by nature than traditional game theory predicts. In fact, the wealth of data points showing cooperation within games far exceeding predicted values has led scholars to delve into the nature of cooperation, seeking an answer to why this pattern holds. There is a split between those who seek to prove that cooperation still fits within the confines of rational behavior and those who argue that cooperation represents more of a strategic irrationality – the philosophical nuance of which I will not cover here.
Despite humans being cooperative by nature, there are potential wrinkles that can come when decision-making power and information possession are unequal – known as moral hazard, which we will explore presently.
Moral Hazard Moral hazard results when someone increases their risk exposure because they are protected from risk impact in some way. It is related to the principal-agent problem, in which someone (the agent) can make decisions on behalf of another person (the principal), leading to the potential for self-interested actions by the agent that are not in the interests of their principal. Information asymmetry exacerbates moral hazard issues, since one party possessing information the other party cannot access creates an incentive for exploitation.
A tangible example of moral hazard is in insurance. If you received an insurance policy with full coverage for all potential losses due to security incidents, theory suggests you will have less of an incentive to invest in a strong security program. The insurance provider may assume you will maintain your current level of protection, but you possess information secret to them – that you have no intention of doing so. Thus, you are willing to accept more risk because you are protected from the risk impact, potentially empowering you to pursue a #yolosec strategy2.
Looking to infosec and DevOps, there is the potential for moral hazard given the team structure at most organizations. DevOps can increase their exposure to security-related risk because they are not held accountable for it, making the infosec team bear the risk impact. DevOps can likewise possess hidden motivations or take hidden action from the vantage point of the security team, as not all efforts to secure systems will be observable by infosec. This makes it difficult for infosec to manage the organization’s risk exposure – and thus the ability to manage the risk impact they experience.
I believe moral hazard goes both ways in this relationship, however. Infosec can also increase risk exposure for the rest of the organization – but risk of a different kind. Part of infosec’s infamy is due to creating policies or implementing tools that add risk to their colleagues’ workflows, such as a salesperson wrestling with a clunky VPN when trying to close a deal with a customer. Because infosec is shielded from the risk resulting from friction to their organizational peers’ workflows, they are more willing to add such risk in the name of their own priority: security.
Moral hazard is particularly relevant when considering conflicting goals. Very recent research suggests that decision-makers resolve goal conflicts on behalf of others differently than they would for themselves, potentially resulting in undesirable outcomes for the recipients of the decision. But, first, what defines goal conflict? It is when achieving one goal prohibits or discourages the achievement of another goal. Unless there is the option to satisfy both goals (more on this later), people will only “activate” the higher priority goal.
One study examined moral hazard and goal conflict through the lens of health practitioners choosing between curative care and palliative care for their patients3. Health practitioners typically prioritize the goal of curative care over palliative care, believing the two are in conflict. What makes this context interesting is widespread evidence that these two goals are not actually in conflict. Palliative care can extend lifespan – and, at the very least, it has not been shown to interfere with curative care.
Why do health practitioners believe these goals are in conflict, and why do they opt for curative care rather than improving patient comfort and pain reduction? The problem lies in moral hazard – that people focus on potential gains and perceive fewer risks when making decisions for others rather than themselves4, and these choices tend to be more creative and idealistic. People can also justify these choices by telling themselves that there are fewer tradeoffs than there are – pretending that it is an easier decision to make.
Another reason behind this errant goal conflict resolution lies in identity. Healthcare providers take pride in being able to solve medical problems, and curative care reinforces that identity. Palliative care, however, can be threatening to this identity, perceived as a form of “giving up.”5 Infosec professionals also take pride in their ability to stop threats and to solve security problems, and accepting risk or compromising can be likewise seen as giving up.
The process by which people accomplish multiple goals can also present challenges in an organizational context. Goals can either be pursued sequentially or concurrently. Sequential goal pursuit allocates resources to one goal at a time, only switching attention to an alternative goal once sufficient progress is perceived. Concurrent goal pursuit directs attention to more than one goal at a time, determining a single choice that can satisfy multiple goals (the “killing two birds with one stone” strategy).
There are benefits of each strategy along the spectrum of sequential to concurrent goal pursuit. “Goal shielding” is the primary mechanism of sequential goal pursuit, wherein alternative goals are inhibited to avoid interfering with pursuit of the primary goal – and it has been found to increase successful goal progress6. The multifinality principle is the primary mechanism of concurrent goal pursuit, wherein a single means is used to satisfy multiple goals – and it has been found to boost efficiency of goal achievement7.
There are downsides to each end of the spectrum, as well. Sequential goal pursuit means that accomplishing multiple goals can take more time. It can also lead to a false sense of security, as seen in a weight management scenario. In one study, those who believed they had made meaningful progress towards their weight goal were more likely to dig into unhealthy food than those who felt that their progress was wanting8. Sequential pursuit also allows the excuse of “I’ll do it later,” when “later” never seems to materialize.
Most with experience in information security have assuredly seen examples of the “I’ll do it later” issue, that certain goals are never achieved and continually pushed off. The false sense of security in the weight management case also manifests in infosec in a few ways, but primarily through the seductive delusion that a series of solid security choices means that security concerns can be ignored for a time, or that enough progress was made on security while ignoring the need for continuous maintenance (let alone improvement).
Concurrent goal pursuit requires a readily-available solution capable of satisfying multiple goals at the same time, which is not always possible. What’s more, these magical solutions end up being discounted due to their versatility. For instance, in one study, when people were asked to think of three goals a computer serves, they perceived the computer as less important to accomplishing those goals than when they were asked to think of only one goal a computer serves9. While a computer is seen as essential for the sole task of checking email, it is perceived as less critical for checking email when considered alongside the goals of using a search engine or browsing an online publication. Thus, concurrent goal pursuit can save time, but can potentially lead to the feeling of less being accomplished.
This curious effect, known as the “dilution of instrumentality,” is considered decidedly irrational, entirely based on people’s mental models rather than objective evidence of how a course of action can serve multiple goals. I believe this is commonly seen in infosec when evaluating solutions that can help both infosec and DevOps – that any course of action or means to an end that can solve use cases on both sides must not be a very good or powerful one.
Of course, in the context of enterprise security, there is not just one individual pursuing goals. Let us now dive into the concept of team reasoning to better understand the nature of collaborative goal pursuit.
Team Reasoning Team reasoning is an emerging theory10 that allows for a group of players within a game to represent a single player, postulating that team members aim to achieve common interests, rather than fulfilling their individual self-interest. As Sugden, one of the parents of the theory, would say, team reasoning represents “intentional cooperation for mutual benefit.” It is commonly referred to as “we thinking” or “joint intentionality” in the literature, and is considered a conceptual kin to “collective intentionality” from the realm of philosophy.11
There is ample experimental evidence suggesting that team reasoning serves as a more accurate theoretical framework than traditional game theory. For example, one study conducted five game types and found that across all games, a majority of players eschewed the individually rational strategy and opted instead for the collectively rational strategy – the one that maximized the group’s payoff12. An earlier study demonstrated that players specifically identified “focal points” – targets that naturally stand out – that facilitated coordination13.
An intuitive reason for cooperation is that we derive value from helping those with whom we identify. On the flip side, if we view the players in a cooperative game as “the others,” we are less likely to cooperate. An imperative ingredient to foster collaboration is encouraging the shift from an individual mindset to a group mindset – to consider what the group’s goals are and what part the individual should play in achieving those goals14. Naturally, the path to this shift to a collective perspective is not so simple as telling the individual to start thinking in that way.
One method to improve this so-called group identification is by making it salient – the behavioral economics term for making a piece of data more readily accessible and more prominent. For instance, I could encourage group identification among blonde people by telling an all-blonde group that I am conducting an experiment to compare brunettes versus blondes.
Regrettably, from what I have witnessed regarding infosec and DevOps’ relationship (or lack thereof), there is almost a point of pride among each group for being distinct. I certainly do not believe that increasing salience of group identification between the two will fix tension among infosec and DevOps – but it does seem like an important step worth attempting. In any projects or important decision-making meetings, increasing salience by emphasizing that you, collectively, are the core of the organization’s technology group – party members on the imperative quest of ensuring safe software delivery performance – can perhaps remind everyone more of the collective similarities rather than differences. Even more compellingly, this effect can be magnified if you emphasize the true “other.”
Emphasizing potential negative outcomes due to a group of “others” dissimilar from your group (as a complement to espousing the potential positive outcomes due to group collaboration) offers potent results, backed by experimental evidence.15 This is known as “perceived interdependence,” achieved by framing the situation as one in which the group members need each other in the face of dissimilar “others” who might seek to harm them.
Luckily, it is not a stretch to present organizational defense in this light. Relative to attackers, infosec should lie squarely in the “in-group” for DevOps – despite their perceived antagonistic qualities otherwise. With that said, there may be a perception that security is a more harrowing threat to DevOps than attackers, particularly if DevOps was relatively shielded from incidents in the past, because security stymies their work far more acutely. Hence, I believe attempting group salience alone to fix coordination issues is likely to produce mixed results in the vexed realm of infosec and DevOps relations.
Leveraging salience in other ways may prove more fruitful, however. Experimental evidence indicates that making the joint goal salient makes team reasoning more likely to occur16. Unfortunately, in many games, payoffs are asymmetric, meaning one player or group receives a greater reward for success than their co-player or co-group. Such asymmetry can result in miscoordination, because players find the payoff factor more salient and begin to act more competitively. For instance, in one experiment, games with symmetric payoffs resulted in a 93% cooperation rate, while games with even small asymmetric payoffs dropped to a 49% coordination rate17.
At most organizations, payoffs for infosec and DevOps will be largely unknown, as most compensation information is private. However, perceived payoffs for success might result in greater miscoordination – and it will most likely take the form of infosec feeling less rewarded than DevOps, if I were to hazard a guess. These rewards are not necessarily monetary, either. The disparity in payoffs between DevOps and infosec is perhaps highest in social rewards, namely social recognition.
One all too common scenario is that infosec may insist on a fix for a critical vulnerability shortly before an anticipated release, requiring last-minute scrambling by DevOps to unblock the release. DevOps may be rewarded for these efforts, which might have been avoided by fixing the vulnerability when it was first discovered by the infosec team, rather than ignoring or denying it. Infosec, however, is more likely to be chided than credited for their contributions.
A potential counter is to publicly state equal rewards for certain joint goals, such as mean time to remediation or, more contentiously, fewer vulnerabilities introduced into production, so that this data point regarding payoffs becomes most salient. Again, these rewards do not have to be monetary, but should involve joint recognition at an organizational level. With that said, emphasizing “label salience” – accentuating group salience, as discussed before – may also detract from payoff salience, so experimentation is recommended.
When emphasizing joint payoffs, the payoff matrix for a game changes, as shown below. The new, “we-thinking” payoff matrix illustrates why the theory of team reasoning can still easily support the notion of rationality, as it makes perfect sense for a player to choose a payoff-maximizing strategy:
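(A rough textual stand-in for the matrix referenced above, using hypothetical numbers of my own – under team reasoning, each outcome is re-scored by the group’s total payoff, so the jointly best cell becomes the obvious, and individually rational, choice.)
# Hypothetical payoffs, written as (infosec payoff, DevOps payoff)
game = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}

# "We-thinking" re-scores each outcome by the team's total payoff,
# then the team simply picks the jointly best cell.
we_payoffs = {moves: sum(payoff) for moves, payoff in game.items()}
best = max(we_payoffs, key=we_payoffs.get)
print(best, we_payoffs[best])  # ('cooperate', 'cooperate') 8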
By visualizing team reasoning through this payoff matrix, its value in promoting coordination becomes palpable. This serves as the perfect starting point for contemplating how we can win the coordination game.
Facilitating Cooperation In the course of my research, I did not encounter a single organization where DevOps and infosec codified guidelines for how they should interact – or even documented their collective objectives. As I will discuss in this section, choosing the right goals and publicizing them in the right way is essential to facilitate cooperation.
Team Reasoning One notable and highly relevant experiment involves the centipede game, from Lambsdorff, et al.18 The centipede game is a multi-stage trust game in which players take turns deciding whether to take the current payoffs or pass. For instance, in the first round, I could receive $10 while you receive $2, or I could pass the choice over to you to double each amount – giving you the choice of receiving $20 and me receiving $4, or passing it back to me to double the amounts again. This game structure is useful for exploring the reciprocal nature of cooperation, as the subgame perfect Nash equilibrium (i.e. what the rational player “should” play) is for the players to never pass.
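(For readers who have not encountered the centipede game before, here is a small hypothetical sketch of the stage payoffs just described – the doubling structure makes it easier to see why never passing is the textbook-rational strategy, and why all the observed passing is so interesting.)
# Stage payoffs for the centipede game described above: the player whose turn
# it is may take the larger amount (the other player gets the smaller one),
# or pass, which doubles both amounts and hands the choice to the other player.
def stage_payoffs(stages=4, large=10, small=2):
    turn = "A"
    for k in range(stages):
        print(f"stage {k + 1}: {turn} may take {large * 2**k} (other gets {small * 2**k}) or pass")
        turn = "B" if turn == "A" else "A"

stage_payoffs()
# stage 1: A may take 10 (other gets 2) or pass
# stage 2: B may take 20 (other gets 4) or pass
# stage 3: A may take 40 (other gets 8) or pass
# stage 4: B may take 80 (other gets 16) or pass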
The empirical evidence starkly differs from the theoretically rational strategy, with a majority of players passing in the first stage, and some games even ending without anyone ever taking. The results are even more elucidating when compared across the treatments the experimenters created. In one treatment, the game’s payoffs were presented in probabilistic terms, showing the probability of achieving a joint goal rather than individual payoffs. In the other treatment, the game was presented through a soccer analogy, where passing was framed as passing the ball to score a goal.
In games where the number of passes could lie between 0 and 2 for each player, the average number of passes in the probabilistic treatment was approximately 1.5, and for the soccer treatment approximately 1.9 passes. Both averages are substantially higher than the average for the vanilla version of the centipede game – approximately 1.2 passes – and, strikingly, much higher than the subgame perfect Nash equilibrium of 0 passes. Why did these alternative game treatments succeed? The probabilistic treatment was successful in promoting cooperation due to emphasis on the shared goal, removing the salience of the payoff asymmetry (as discussed in the prior section).
What led the soccer treatment to be most effective? The researchers believe it is because the soccer analogy made the joint goal obvious, thereby making the team identity salient. This not only supports the theory of team reasoning, but also perhaps vindicates the practice, common among stereotypical former-jock sales leaders, of using sportsball analogies to foster team spirit and motivation. While I remain somewhat skeptical of the efficacy of sports analogies on engineers, I believe similar levels of salience for group identity can be accomplished and provide similar results in improving cooperation.
What else can these results suggest for the relationship between DevOps and infosec? One is that controls targeting individuals for infractions – for instance, punishing individual developers for security bugs – may be less effective than sharing security-enhancing goals at a group level. Further, emphasizing differences between groups is less effective in improving conflict resolution than emphasizing joint goals. This perhaps is where infosec culture fails the most – presenting a draconian notion of infosec vs. normies and declaring that infosec is a misunderstood, maligned clique that is the gatekeeper of secure practices.
Outcome &amp; Process Goals Another set of tools for our conceptual arsenal is found in outcome and process goals. Outcome accountability prioritizes people getting things right. Process accountability prioritizes people thinking in the right way. I am a huge proponent for focusing on outcomes rather than outputs, particularly when it comes to measuring success – because lots of aimless creation is not useful, but impactful creation is. Nevertheless, there are caveats to outcome accountability, regarding what it incentivizes, that are necessary to explore before people jump on the outcome bandwagon and fully abandon process accountability.
Outcome accountability is beneficial in incentivizing realization of expected outcomes when making decisions. There is experimental evidence showing that outcome accountability boosts performance relative to process accountability alone.19 However, this can ignore analysis of whether the decision itself was appropriate or not. The potential hazards of outcome accountability include worsening overall decision quality, increasing commitment to sunk costs, diminishing complex thinking, impairing attentiveness, and reducing situational awareness20.
In short-term decision-making problems, outcome accountability can lead to analysis paralysis due to uncertainty, or succumbing to cognitive bias, thereby worsening decision quality. Process accountability, in contrast, can help temper the temptation of cognitive bias, helping decision-makers navigate choppy cognitive waters short-term. Further, in the aforementioned experiment, process accountability was shown to promote more effective knowledge sharing, boosting performance of those observing the decision-makers’ actions.
Process accountability, although already not as popular in Silicon Valley, has its deficiencies worth mentioning, too. Process accountability can incentivize adherence to policies and best practices, even when they are not the best options in dynamic environments long-term. It is inherently less flexible than outcome accountability, making it difficult to improvise and incorporate feedback loops to inform better strategy in changing environments. For example, process accountability was shown to be far weaker than outcome accountability in augmenting pilots’ skill in navigating changing environments21.
Thus, a hybrid of outcome accountability and process accountability is optimal, balancing out weaknesses while not decreasing the respective benefits of each method. A hybrid of outcome goals and process goals should give teams the ability to remain flexible while ensuring knowledge transfer and inhibiting potential abuses of power. Importantly, this hybrid accountability can ensure that standard practices are met, but also encourage experimentation with novel strategies. This ensures collective knowledge is not discarded and adaptive prowess is not discouraged – both of which are essential for designing secure systems and responding to incidents.
Creating stretch outcome goals can encourage innovation, allowing teams to stretch their metaphorical wings and attempt ambitious plans – tempered with the need to still be able to justify process. Stretch goals also tend to be more fun, satiating people’s curiosity and desire to learn new things. For example, a security-related stretch goal might be to create a tool to automate asset inventory management, whereas the basic goal may be to perform manual asset inventory management.
Importantly, process goals should apply equally to infosec and to DevOps. Infosec should justify why they made a less business-friendly decision – helping curb the FUD-driven decision-making to which infosec is prone. DevOps should justify why they made a less secure decision – helping curb the tendency to treat security as an afterthought. Forcing each team to articulate its justifications creates the opportunity to, as the kids say, call out bullshit.
Goal Framing
Finally, how these goals are communicated is consequential to winning the coordinative game. Drawing from the aforementioned study involving goal conflict between curative and palliative care, there are a few lessons to glean about how to set goals. First, the study found that participants who received messaging emphasizing the conflicting nature of the goals reported increased perception of goal conflict and decreased importance of palliative care, as expected22.
Second, the researchers anticipated that self-affirmation and self-reflection on values – with the goal of ameliorating cognitive dissonance associated with maintaining conflicting beliefs – would improve providers’ abilities to empathize with patients, thereby increasing the importance of palliative care. Instead, the opposite was shown to be true. Unfortunately, self-affirmation can also deactivate less valued or more difficult goals, becoming a reinforcement mechanism for moral hazard-driven beliefs.
These results certainly seem discouraging! So, what can be done? The key takeaway is that goals must be presented as complementary rather than conflicting, particularly when there is evidence to support it (as is true with the complementary nature of palliative and curative care). When considering infosec and DevOps goals, I have long argued that their respective goals bear far more similarities than differences23. Empirical analysis from the most recent “2019 Accelerate: State of DevOps” report also shows that elite DevOps teams fix security issues earlier and faster than their less performant peers24.
We must also include the ordering of goals in our calculus here. Sequential goal pursuit is inevitable given the relative scarcity of solutions that can accomplish multiple goals, and in light of the tendency for people to prioritize one goal over another. However, concurrent goal pursuit can still be encouraged. DevOps and infosec teams can make a list of their respective goals and perform an exercise to brainstorm where goals from each side might be achieved through the same means. As an example, tools designed to collect data for performance use cases are collecting data valuable for security use cases as well.
To avoid the “dilution of instrumentality” – the perception that the stone is less important in killing the two birds, because it can kill both concurrently – highlight objective information about a choice, tying its importance to each goal directly. For instance, a database monitoring tool can help accomplish both performance and security goals, thus running the risk of invoking the dilution of instrumentality. To re-anchor perception to reality, you can express how the tool relates to each goal specifically: collecting data on resource utilization for performance, and detecting abnormal query behavior for security.
Conclusion
The relationship between DevOps and information security must be healthy for the business to thrive. This relationship, like all relationships, requires work, and understanding it as a cooperative game involving information asymmetry can inform how we can work smarter to nurture it. By leveraging team reasoning, hybrid goals (outcome and process), and framing goals as complementary and concurrent, we become a strong contender for winning this coordinative game.
Lui, J. (2008). CSC6480: Introduction to Game Theory: Cooperative Games [PowerPoint slides]. Retrieved from http://www.cse.cuhk.edu.hk/~cslui/CSC6480/cooperative_game.pdf ↩︎
The invention of the term #yolosec is, perhaps, one of my crowning achievements within the infosec industry. See my 2017 Black Hat talk for the use of it in the context of attack trees (slide 77). ↩︎
Ferrer, R. A., Orehek, E., &amp; Padgett, L. S. (2018). Goal conflict when making decisions for others. Journal of Experimental Social Psychology, 78, 93-103. ↩︎
Polman, E. (2012). Self–other decision making and loss aversion. Organizational Behavior and Human Decision Processes, 119(2), 141-150. ↩︎
CAPC (2011). 2011 public opinion research on palliative care: A report based on research by public opinion strategies. Retrieved from https://media.capc.org/filer_public/18/ab/18ab708c-f835-4380-921d-fbf729702e36/2011-public-opinion-research-on-palliative-care.pdf ↩︎
Shah, J. Y., Friedman, R., &amp; Kruglanski, A. W. (2002). Forgetting all else: on the antecedents and consequences of goal shielding. Journal of personality and social psychology, 83(6), 1261. ↩︎
Orehek, E., &amp; Vazeou-Nieuwenhuis, A. (2013). Sequential and concurrent strategies of multiple goal pursuit. Review of General Psychology, 17(3), 339-349. ↩︎
Fishbach, A., &amp; Dhar, R. (2005). Goals as excuses or guides: The liberating effect of perceived goal progress on choice. Journal of Consumer Research, 32(3), 370-377. ↩︎
Zhang, Y., Fishbach, A., &amp; Kruglanski, A. W. (2007). The dilution model: How additional goals undermine the perceived instrumentality of a shared path. Journal of personality and social psychology, 92(3), 389. ↩︎
Team reasoning’s roots are with Sugden and Bacharach, both of whom first released papers in the 1990s on the topic – so it is quite a recent theory on a relative basis. ↩︎
Schweikard, D. P., &amp; Schmid, H. B. (2013). Collective intentionality. ↩︎
Colman, A. M., Pulford, B. D., &amp; Rose, J. (2008). Collective rationality in interactive decisions: Evidence for team reasoning. Acta psychologica, 128(2), 387-397. ↩︎
Mehta, J., Starmer, C., &amp; Sugden, R. (1994). The nature of salience: An experimental investigation of pure coordination games. The American Economic Review, 84(3), 658-673. ↩︎
Colman, A. M., &amp; Gold, N. (2018). Team reasoning: Solving the puzzle of coordination. Psychonomic bulletin &amp; review, 25(5), 1770-1783. ↩︎
Flippen, A. R., Hornstein, H. A., Siegal, W. E., &amp; Weitzman, E. A. (1996). A comparison of similarity and interdependence as triggers for in-group formation. Personality and Social Psychology Bulletin, 22(9), 882-893. ↩︎
One such example is in the study by Lambsdorff, et al. involving the centipede game, mentioned later in this piece. ↩︎
van Elten, J., &amp; Penczynski, S. P. (2018). Coordination games with asymmetric payoffs: An experimental study with intra-group communication. Unpublished manuscript available at http://www.penczynski.de/attach/APC.pdf. ↩︎
Lambsdorff, J. G., Giamattei, M., Werner, K., &amp; Schubert, M. (2018). Team reasoning—Experimental evidence on cooperation from centipede games. PloS one, 13(11), e0206666. ↩︎
Chang, W., Atanasov, P., Patil, S., Mellers, B. A., &amp; Tetlock, P. E. (2017). Accountability and adaptive performance under uncertainty: A long-term view. Judgment &amp; Decision Making, 12(6). ↩︎
Scholars would call it reduction of “epistemic motivation.” ↩︎
Skitka, L. J., Mosier, K. L., &amp; Burdick, M. (1999). Does automation bias decision-making?. International Journal of Human-Computer Studies, 51(5), 991-1006. ↩︎
Ferrer, R. A., Orehek, E., &amp; Padgett, L. S. (2018). Goal conflict when making decisions for others. Journal of Experimental Social Psychology, 78, 93-103. ↩︎
Shortridge, K. (2018). Paint by Numbers: Measuring Resilience in Security. Retrieved from /speaking/Paint-by-Numbers-Resilience-in-Infosec-Kelly-Shortridge-AusCERT-2018.pdf ↩︎
Forsgren, N., et al. (2019). 2019 Accelerate State of DevOps Report. Retrieved from https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/coordination-game.png" alt="An interface in the style of War Games asking if you want to play chess, fighter combat, theaterwide tactical warfare, or cooperative game theory"></p>
<blockquote>
<p>&ldquo;The seasons change; and both of us lose our harvests for want of mutual confidence and security.” – David Hume, A Treatise on Human Nature</p>
</blockquote>
<p><a href="/blog/posts/security-as-a-product/">As I expounded before</a>, security should be treated as a product – as “something created through a process that provides benefits” to the organization. Every product has a purpose, something it is trying to help its users accomplish. If security is a product, what is its purpose? What is it trying to help its users – the organization – accomplish? Without this purpose, security can become aimless – falling into the wretched trap of “security for its own sake.”</p>
<p>If we treat security instead as a business enabler, what results? In many tech organizations, the most critical business enabler is software delivery performance. Therefore, security should cooperate with relevant stakeholders who focus on software delivery performance, especially the engineering organization (colloquially known as the DevOps function).</p>
<p>As is well-discussed in the industry, the relationship between security and DevOps is typically described as fraught, icy, or adversarial – a far cry from cooperative, let alone collaborative. There are cultural reasons for this, but I will not be covering them in this post. Instead, I am going to draw on behavioral economics, looking both to cooperation games within game theory and the concept of moral hazard as a lens through which we can better understand the security and DevOps relationship.</p>
<p>So, shall we play a coordination game? Let&rsquo;s dive in.</p>
<hr>
<h1 id="barriers-to-cooperation">Barriers to Cooperation</h1>
<h2 id="cooperation-games">Cooperation Games</h2>
<p>Most relationships in life can be considered through the concept of “games,” behavioral relationships between decision-makers involving certain rules or conditions. It is worth exploring the different potential attributes of games as a backdrop for how to think about the game infosec plays with its DevOps peers.</p>
<p>Games can involve cooperation or non-cooperation. Non-cooperative games involve competition between the game’s players, without any sort of external authority to enforce cooperation between the players – resulting in no chance for alliance. The game between attackers and defenders can be considered a non-cooperative game. Cooperative games, unsurprisingly, involve cooperation rather than competition. Players can form coalitions to coordinate their strategies and share potential payoffs<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. One of the more famous cooperation games is the Prisoner’s Dilemma, a non-zero-sum cooperation game in which two prisoners must make the decision to confess or stay silent.</p>
<p>What is a non-zero-sum game? In zero-sum games, the total payoffs for all players in the game add up to zero. That is, one player’s gain will equal the other player’s loss exactly. Few real-world scenarios involve zero-sum games, but the game of poker is an example of one. Non-zero-sum games are thus games in which the total payoffs for all players do not add up to zero – meaning the gain by one player does not result in an equivalent loss by the other player. Free trade is an example of a non-zero-sum game in which all players can benefit in a win-win scenario. The aforementioned Prisoner’s Dilemma is non-zero-sum, as it can result in a win-win or lose-lose scenario.</p>
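<p>For the programmatically inclined, here is a minimal sketch of that distinction (mine, with purely illustrative payoff values): a game is zero-sum when every cell’s payoffs sum to zero.</p>
<pre><code class="language-python"># A minimal sketch of the zero-sum test described above; payoff values are
# purely illustrative (matching pennies vs. a cooperative outcome).
zero_sum = {("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1)}
non_zero_sum = {("cooperate", "cooperate"): (3, 3), ("defect", "defect"): (1, 1)}

def is_zero_sum(game):
    """A game is zero-sum if one player's gain exactly equals the other's loss."""
    return all(sum(payoffs) == 0 for payoffs in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False
</code></pre>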
<p>Information is an essential component of every game, as information is at the heart of strategic interaction – particularly information regarding other players’ decision-making in the game. In perfect information games, all players know all decisions previously made by the other players. As you might suspect, real life rarely allows such omniscience, outside of games like tic-tac-toe or chess. Imperfect information is common to our existence, wherein players cannot see all prior decision-making by the other players within the game.</p>
<p>Complete and incomplete information is another informational characteristic of games. In a game with complete information, players understand the potential payoffs, risk tolerances, strategies, and player “types” among other players. Again, complete information is largely unrealistic in the real world. Instead, games with incomplete information are most common in human interaction, wherein players cannot discern other players’ preferences, motivations, and other strategic information.</p>
<p>Although it may scandalize true game theorists, for perspicuity&rsquo;s sake, I will summarize these information-based characteristics into the concept of information asymmetry – that players possess relevant information to which the other players do not have access. Typically, information asymmetry is analyzed through the lens of transactions, though I argue it all comes back to decision-making between relevant players.</p>
<p>I believe one can view DevOps and infosec’s relationship as a coordination game with information asymmetry. I do not believe it is a non-cooperation game, as there is ample room for infosec and DevOps to form a coalition, and there is potential for the organization to serve as an external enforcer of cooperation. There is also information asymmetry, as neither DevOps nor infosec has perfect insight into the other’s decision-making process nor access to all relevant information (much of which might be tacit and considered tribal knowledge).</p>
<p>If we proceed with the assumption that DevOps and infosec’s relationship is a coordination game, then our goal is to understand how to encourage cooperation to maximize the collective payoff. This results in a crucial question: what leads to coordination failures? One impetus is strategic uncertainty among players, that they do not realize their objectives are aligned. Perhaps obviously, misalignment of objectives can also present friction in cooperation games. Thus, mechanisms are needed either to facilitate cooperation – in the event of aligned preferences – or enforce cooperation – in the event of less aligned preferences.</p>
<p>Interestingly, empirical evidence suggests that humans are far more cooperative by nature than traditional game theory predicts. In fact, the wealth of data points showing cooperation within games far exceeding predicted values has led scholars to delve into the nature of cooperation, seeking an answer to why this pattern holds. There is a split among those who seek to prove that cooperation still fits within the confines of rational behavior, and those who argue that cooperation represents more of a strategic irrationality – the philosophical nuance of which I will not cover here.</p>
<p>Despite humans being cooperative by nature, there are potential wrinkles that can arise when decision-making power and information possession are unequal – known as moral hazard, which we will explore presently.</p>
<h2 id="moral-hazard">Moral Hazard</h2>
<p>Moral hazard results when someone increases their risk exposure because they are protected from risk impact in some way. It is related to the principal-agent problem, in which someone (the agent) can make decisions on behalf of another person (the principal), leading to the potential for self-interested actions by the agent that are not in the interests of their principal. Information asymmetry exacerbates moral hazard issues, since one party possessing information the other party cannot access creates an incentive for exploitation.</p>
<p>A tangible example of moral hazard is in insurance. If you received an insurance policy with full coverage for all potential losses due to security incidents, theory suggests you will have less of an incentive to invest in a strong security program. The insurance provider may assume you will maintain your current level of protection, but you possess information secret to them – that you have no intention of doing so. Thus, you are willing to accept more risk because you are protected from the risk impact, potentially empowering you to pursue a #yolosec strategy<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>.</p>
<p>Looking to infosec and DevOps, there is the potential for moral hazard given the team structure at most organizations. DevOps can increase their exposure to security-related risk because they are not held accountable for it, making the infosec team bear the risk impact. DevOps can likewise possess hidden motivations or take hidden action from the vantage point of the security team, as not all efforts to secure systems will be observable by infosec. This makes it difficult for infosec to manage the organization’s risk exposure – and thus the ability to manage the risk impact they experience.</p>
<p>I believe moral hazard goes both ways in this relationship, however. Infosec can also increase risk exposure for the rest of the organization – but risk of a different kind. Part of infosec’s infamy is due to creating policies or implementing tools that add risk to their colleagues’ workflows, such as a salesperson wrestling with a clunky VPN when trying to close a deal with a customer. Because infosec is shielded from the risk resulting from friction to their organizational peers’ workflows, they are more willing to add such risk in the name of their own priority: security.</p>
<p>Moral hazard is particularly relevant when considering conflicting goals. Very recent research suggests that decision-makers resolve goal conflicts on behalf of others differently than they would for themselves, potentially resulting in undesirable outcomes for the recipients of the decision. But, first, what defines goal conflict? It is when achieving one goal prohibits or discourages the achievement of another goal. Unless there is the option to satisfy both goals (more on this later), people will only “activate” the higher priority goal.</p>
<p>One study examined moral hazard and goal conflict through the lens of health practitioners choosing between curative care or <a href="https://en.wikipedia.org/wiki/Palliative_care">palliative care</a> for their patients<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>. Health practitioners typically prioritize the goal of curative care over palliative care, believing the two are in conflict. What makes this context interesting is widespread evidence that these two goals are not actually in conflict. Palliative care can extend lifespan; but at the very least, it is not shown to interfere with curative care.</p>
<p>Why do health practitioners believe these goals are in conflict, and why do they opt for curative care rather than improving patient comfort and pain reduction? The problem lies in moral hazard – that people focus on potential gains and perceive fewer risks when making decisions for others rather than themselves<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>, and these choices tend to be more creative and idealistic. People can also justify these choices by telling themselves that there are fewer tradeoffs than there are – pretending that it is an easier decision to make.</p>
<p>Another reason behind this errant goal conflict resolution lies in identity. Healthcare providers take pride in being able to solve medical problems, and curative care reinforces that identity. Palliative care, however, can be threatening to this identity, perceived as a form of “giving up.”<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> Infosec professionals also take pride in their ability to stop threats and to solve security problems, and accepting risk or compromising can be likewise seen as giving up.</p>
<p>The process by which people accomplish multiple goals can also present challenges in an organizational context. Goals can either be pursued sequentially or concurrently. Sequential goal pursuit allocates resources to one goal at a time, only switching attention to an alternative goal once sufficient progress is perceived. Concurrent goal pursuit directs attention to more than one goal at a time, determining a single choice that can satisfy multiple goals (the “killing two birds with one stone” strategy).</p>
<p>There are benefits of each strategy along the spectrum of sequential to concurrent goal pursuit. “Goal shielding” is the primary mechanism of sequential goal pursuit, wherein alternative goals are inhibited to avoid interfering with pursuit of the primary goal – and it has been found to increase successful goal progress<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup>. The multifinality principle is the primary mechanism of concurrent goal pursuit, wherein a single means is used to satisfy multiple goals – and it has been found to boost efficiency of goal achievement<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup>.</p>
<p>There are downsides to each end of the spectrum, as well. Sequential goal pursuit means that accomplishing multiple goals can take more time. It can also lead to a false sense of security, as seen in a weight management scenario. In one study, those who believed they had made meaningful progress towards their weight goal were more likely to dig into unhealthy food than those who felt that their progress was wanting<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup>. Sequential pursuit also allows the excuse of “I’ll do it later,” when “later” never seems to materialize.</p>
<p>Most with experience in information security have assuredly seen examples of the “I’ll do it later” issue, that certain goals are never achieved and continually pushed off. The false sense of security in the weight management case also manifests in infosec in a few ways, but primarily through the seductive delusion that a series of solid security choices means that security concerns can be ignored for a time, or that enough progress was made on security while ignoring the need for continuous maintenance (let alone improvement).</p>
<p>Concurrent goal pursuit requires a readily-available solution capable of satisfying multiple goals at the same time, which is not always possible. What’s more, these magical solutions end up being discounted due to their versatility. For instance, in one study, when people were asked to think of three goals a computer serves, they perceived that the computer was less important to accomplishing those goals than when they were asked to think of only one goal a computer serves<sup id="fnref:9"><a href="#fn:9" class="footnote-ref" role="doc-noteref">9</a></sup>. While a computer is seen as essential for the sole task of checking email, it is perceived as less critical for checking email when considered alongside the goals of using a search engine or browsing an online publication. Thus, concurrent goal pursuit can save time, but can potentially lead to the feeling of less being accomplished.</p>
<p>This curious effect, known as the “dilution of instrumentality,” is considered decidedly irrational, entirely based on people’s mental models rather than objective evidence of how a course of action can serve multiple goals. I believe this is commonly seen in infosec when evaluating solutions that can help both infosec and DevOps – that any course of action or means to an end that can solve use cases on both sides must not be a very good or powerful one.</p>
<p>Of course, in the context of enterprise security, there is not just one individual pursuing goals. Let us now dive into the concept of team reasoning to better understand the nature of collaborative goal pursuit.</p>
<h2 id="team-reasoning">Team Reasoning</h2>
<p>Team reasoning is an emerging theory<sup id="fnref:10"><a href="#fn:10" class="footnote-ref" role="doc-noteref">10</a></sup> that allows for a group of players within a game to represent a single player, postulating that team members aim to achieve common interests, rather than fulfilling their individual self-interest. As Sugden, one of the parents of the theory, would say, team reasoning represents “intentional cooperation for mutual benefit.” It is commonly referred to as “we thinking” or “joint intentionality” in the literature, and is considered a conceptual kin to “collective intentionality” from the realm of philosophy.<sup id="fnref:11"><a href="#fn:11" class="footnote-ref" role="doc-noteref">11</a></sup></p>
<p>There is ample experimental evidence suggesting that team reasoning serves as a more accurate theoretical framework than traditional game theory. For example, one study ran five different game types and found that, across all of them, a majority of players eschewed the individually rational strategy and opted instead for the collectively rational strategy – the one that maximized the group’s payoff<sup id="fnref:12"><a href="#fn:12" class="footnote-ref" role="doc-noteref">12</a></sup>. An earlier study demonstrated that players specifically identified “focal points” – targets that naturally stand out – that facilitated coordination<sup id="fnref:13"><a href="#fn:13" class="footnote-ref" role="doc-noteref">13</a></sup>.</p>
<p>An intuitive reason for cooperation is that we derive value from helping those with whom we identify. On the flip side, if we view the players in a cooperative game as “the others,” we are less likely to cooperate. An imperative ingredient to foster collaboration is encouraging the shift from an individual mindset to a group mindset – to consider what the group’s goals are and what part the individual should play in achieving those goals<sup id="fnref:14"><a href="#fn:14" class="footnote-ref" role="doc-noteref">14</a></sup>. Naturally, the path to this shift to a collective perspective is not so simple as telling the individual to start thinking in that way.</p>
<p>One method to improve this so-called group identification is by making it <a href="https://en.wikipedia.org/wiki/Salience_%28neuroscience%29">salient</a> – the behavioral economics term for making a piece of data more readily accessible and more prominent. For instance, I could encourage group identification among blonde people by telling an all-blonde group that I am conducting an experiment to compare brunettes versus blondes.</p>
<p>Regrettably, from what I have witnessed regarding infosec and DevOps’ relationship (or lack thereof), there is almost a point of pride among each group for being distinct. I certainly do not believe that increasing salience of group identification between the two will fix tension between infosec and DevOps – but it does seem like an important step worth attempting. In any project or important decision-making meeting, increasing salience by emphasizing that you, collectively, are the core of the organization’s technology group – party members on the imperative quest of ensuring safe software delivery performance – can perhaps remind everyone more of the collective similarities than the differences. Even more compellingly, this effect can be magnified if you emphasize the true “other.”</p>
<p>Emphasizing potential negative outcomes due to a group of “others” dissimilar from your group (as a complement to espousing the potential positive outcomes due to group collaboration) offers potent results, backed by experimental evidence.<sup id="fnref:15"><a href="#fn:15" class="footnote-ref" role="doc-noteref">15</a></sup> This is known as “perceived interdependence,” achieved by framing the situation as one in which the group members need each other in the face of dissimilar “others” who might seek to harm them.</p>
<p>Luckily, it is not a stretch to present organizational defense in this light. Relative to attackers, infosec should lie squarely in the “in-group” for DevOps – despite their perceived antagonistic qualities otherwise. With that said, there may be a perception that security is a more harrowing threat to DevOps than attackers, particularly if DevOps was relatively shielded from incidents in the past, because security stymies their work far more acutely. Hence, I believe attempting group salience alone to fix coordination issues is likely to produce mixed results in the vexed realm of infosec and DevOps relations.</p>
<p>Leveraging salience in other ways may prove more fruitful, however. Experimental evidence indicates that making the joint goal salient makes team reasoning more likely to occur<sup id="fnref:16"><a href="#fn:16" class="footnote-ref" role="doc-noteref">16</a></sup>. Unfortunately, in many games, payoffs are asymmetric, meaning one player or group receives a greater reward for success than their co-player or co-group. Such asymmetry can result in miscoordination, because players find the payoff factor more salient and begin to act more competitively. For instance, in one experiment, games with symmetric payoffs resulted in a 93% cooperation rate, while games with even small asymmetric payoffs dropped to a 49% coordination rate<sup id="fnref:17"><a href="#fn:17" class="footnote-ref" role="doc-noteref">17</a></sup>.</p>
<p>At most organizations, payoffs for infosec and DevOps will be largely unknown, as most compensation information is private. However, perceived payoffs for success might result in greater miscoordination – and it will most likely take the form of infosec feeling less rewarded than DevOps, if I were to hazard a guess. These rewards are not necessarily monetary, either. The disparity in payoffs between DevOps and infosec is perhaps highest in social rewards, namely social recognition.</p>
<p>One all too common scenario is that infosec may insist on a fix for a critical vulnerability shortly before an anticipated release, requiring last-minute scrambling by DevOps to unblock the release. DevOps may be rewarded for these efforts – efforts which might have been avoided by fixing the vulnerability when it was first discovered by the infosec team, rather than ignoring or denying it. Infosec, however, is more likely to be chided than credited for their contributions.</p>
<p>A potential counter is to publicly state equal rewards for certain joint goals, such as mean time to remediation or, more contentiously, fewer vulnerabilities introduced into production, so that this data point regarding payoffs becomes most salient. Again, these rewards do not have to be monetary, but should involve joint recognition at an organizational level. With that said, emphasizing “label salience” – accentuating group salience, as discussed before – may also serve as a deterrence from payoff salience, so experimentation is recommended.</p>
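<p>As an illustration of what a shared, visible number might look like, here is a minimal, hypothetical sketch of computing mean time to remediation from vulnerability timestamps – the record format and values are invented purely for illustration:</p>
<pre><code class="language-python"># A hypothetical sketch of tracking one candidate joint goal: mean time to
# remediation (MTTR), measured identically for DevOps and infosec.
from datetime import datetime
from statistics import mean

# Hypothetical (discovered, remediated) timestamps per vulnerability.
remediations = [
    (datetime(2020, 1, 2), datetime(2020, 1, 9)),
    (datetime(2020, 1, 5), datetime(2020, 1, 6)),
    (datetime(2020, 1, 10), datetime(2020, 1, 24)),
]

mttr_days = mean((fixed - found).days for found, fixed in remediations)
print(f"Mean time to remediation: {mttr_days:.1f} days")  # 7.3 days
</code></pre>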
<p>When emphasizing joint payoffs, the payoff matrix for a game changes, as shown below. The new, “we-thinking” payoff matrix illustrates why the theory of team reasoning can still easily support the notion of rationality, as it makes perfect sense for a player to choose a payoff-maximizing strategy:</p>
<p><img src="/blog/img/payoff-matrix.png" alt="A payoff matrix showing how combining individual payoffs via team reasoning promotes cooperation. The original individual payoffs were (3,3) for cooperating, (0,5) for one player defecting, and (1,1) for both defecting. In the team reasoning payoff matrix, the payoff for cooperation is (6), one defection (5), and joint defection (2)."></p>
<p>By visualizing team reasoning through this payoff matrix, its value in promoting coordination becomes palpable. This serves as the perfect starting point for contemplating how we can win the coordination game.</p>
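<p>For readers who think better in code, here is a minimal sketch (mine, not from the cited studies; the payoff values come from the matrix above) of the transformation team reasoning performs – collapsing each cell’s individual payoffs into a single collective payoff, under which mutual cooperation becomes the payoff-maximizing choice:</p>
<pre><code class="language-python"># A minimal sketch of the "we-thinking" transformation depicted above.
# Individual payoffs are (row player, column player) for Cooperate/Defect.
individual = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Team reasoning evaluates the group's total payoff in each cell instead.
team = {moves: sum(payoffs) for moves, payoffs in individual.items()}

print(team)
# {('C', 'C'): 6, ('C', 'D'): 5, ('D', 'C'): 5, ('D', 'D'): 2}

# The payoff-maximizing joint strategy is now mutual cooperation.
print(max(team, key=team.get))  # ('C', 'C')
</code></pre>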
<hr>
<h1 id="facilitating-cooperation">Facilitating Cooperation</h1>
<p>In the course of my research, I did not encounter a single organization where DevOps and infosec codified guidelines for how they should interact – or even documented their collective objectives. As I will discuss in this section, choosing the right goals and publicizing them in the right way is essential to facilitate cooperation.</p>
<h2 id="team-reasoning-1">Team Reasoning</h2>
<p>One notable and highly relevant experiment involves the centipede game, from Lambsdorff, et al.<sup id="fnref:18"><a href="#fn:18" class="footnote-ref" role="doc-noteref">18</a></sup> The centipede game is a multi-stage trust game in which players alternate offers. For instance, in the first round, I could receive $10 while you receive $2, or I could pass the choice over to you to double each amount – giving you the choice of receiving $20 and me receiving $4, or passing it back to me to double the amounts again. This game structure is useful for exploring the reciprocal nature of cooperation, as the subgame perfect Nash equilibrium (i.e. what the rational player “should” play) is for the players to never pass.</p>
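<p>To make that payoff ladder concrete, here is a minimal sketch (mine; the number of stages beyond the $10/$2 and $20/$4 figures in the text is a hypothetical choice) of what each player would receive by “taking” at each stage versus passing:</p>
<pre><code class="language-python"># A minimal sketch of the doubling payoff ladder described above: taking
# ends the game, while passing doubles both amounts and hands the choice
# to the other player.
def centipede_payoffs(stages=4, large=10, small=2):
    """Return (taker's payoff, other player's payoff) for each stage."""
    return [(large * 2 ** stage, small * 2 ** stage) for stage in range(stages)]

print(centipede_payoffs())
# [(10, 2), (20, 4), (40, 8), (80, 16)]
# The subgame perfect Nash equilibrium says to take immediately (stage 0),
# yet most experimental subjects pass at least once.
</code></pre>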
<p>The empirical evidence starkly differs from the theoretically rational strategy, with a majority of players passing in the first stage, and some games even ending without any player ever choosing to take rather than pass. The results are even more elucidating when compared across the treatments the experimenters created. In one treatment, the game’s payoffs were presented in probabilistic terms, showing the probability of achieving a joint goal rather than individual payoffs. In the other treatment, the game was presented through a soccer analogy, where passing offers was seen through the lens of passing the ball to score a goal.</p>
<p>In games where the number of passes could lie between 0 and 2 for each player, the average number of passes in the probabilistic treatment was approximately 1.5, and for the soccer treatment approximately 1.9 passes. Both averages are substantially higher than the average for the vanilla version of the centipede game – approximately 1.2 passes – and, strikingly, much higher than the subgame perfect Nash equilibrium of 0 passes. Why did these alternative game treatments succeed? The probabilistic treatment was successful in promoting cooperation due to emphasis on the shared goal, removing the salience of the payoff asymmetry (as discussed in the prior section).</p>
<p>What led the soccer treatment to be most effective? The researchers believe it is because the soccer analogy made the joint goal obvious, thereby making the team identity salient. This not only supports the theory of team reasoning, but also perhaps vindicates the practice, common among stereotypical former-jock sales leaders, of using sportsball analogies to foster team spirit and motivation. While I remain somewhat skeptical of the efficacy of sports analogies on engineers, I believe similar levels of salience for group identity can be accomplished, providing similar improvements in cooperation.</p>
<p>What else can these results suggest for the relationship between DevOps and infosec? One is that controls targeting individuals for infractions – for instance, punishing individual developers for security bugs – may be less effective than sharing security-enhancing goals at a group level. Further, emphasizing differences between groups is less effective in improving conflict resolution than emphasizing joint goals. This perhaps is where infosec culture fails the most – presenting a draconian notion of infosec vs. normies and declaring that infosec is a misunderstood, maligned clique that is the gatekeeper of secure practices.</p>
<h2 id="outcome--process-goals">Outcome &amp; Process Goals</h2>
<p>Another set of tools for our conceptual arsenal is found in outcome and process goals. Outcome accountability prioritizes people getting things right. Process accountability prioritizes people thinking in the right way. I am a huge proponent of focusing on outcomes rather than outputs, particularly when it comes to measuring success – because lots of aimless creation is not useful, but <em>impactful</em> creation is. Nevertheless, there are caveats to outcome accountability – regarding what it incentivizes – that are worth exploring before people jump on the outcome bandwagon and fully abandon process accountability.</p>
<p>Outcome accountability is beneficial in incentivizing the realization of expected outcomes when making decisions. There is experimental evidence showing that outcome accountability boosts performance relative to process accountability alone.<sup id="fnref:19"><a href="#fn:19" class="footnote-ref" role="doc-noteref">19</a></sup> However, it can ignore whether the decision itself was appropriate. The potential hazards of outcome accountability include worsened overall decision quality, increased commitment to sunk costs, diminished complex thinking, impaired attentiveness, and reduced situational awareness<sup id="fnref:20"><a href="#fn:20" class="footnote-ref" role="doc-noteref">20</a></sup>.</p>
<p>In short-term decision-making problems, outcome accountability can lead to analysis paralysis due to uncertainty, or to succumbing to cognitive bias, thereby worsening decision quality. Process accountability, in contrast, can temper the pull of cognitive bias, helping decision-makers navigate choppy cognitive waters in the short term. Further, in the aforementioned experiment, process accountability was shown to promote more effective knowledge sharing, boosting the performance of those observing the decision-makers’ actions.</p>
<p>Process accountability, although already less popular in Silicon Valley, has deficiencies worth mentioning, too. Process accountability can incentivize adherence to policies and best practices even when they are not the best options in dynamic environments over the long term. It is inherently less flexible than outcome accountability, making it difficult to improvise and to incorporate feedback loops that inform better strategy in changing environments. For example, process accountability was shown to be far weaker than outcome accountability in augmenting pilots’ skill in navigating changing environments<sup id="fnref:21"><a href="#fn:21" class="footnote-ref" role="doc-noteref">21</a></sup>.</p>
<p>Thus, a hybrid of outcome accountability and process accountability is optimal, balancing out weaknesses while not decreasing the respective benefits of each method. A hybrid of outcome goals and process goals should give teams the ability to remain flexible while ensuring knowledge transfer and inhibiting potential abuses of power. Importantly, this hybrid accountability can ensure that standard practices are met, but also encourage experimentation with novel strategies. This ensures collective knowledge is not discarded and adaptive prowess is not discouraged – both of which are essential components for designing secure systems and responding to incidents.</p>
<p>Creating stretch outcome goals can encourage innovation, allowing teams to stretch their metaphorical wings and attempt ambitious plans – tempered with the need to still be able to justify process. Stretch goals also tend to be more fun, satiating people’s curiosity and desire to learn new things. For example, a security-related stretch goal might be to create a tool to automate asset inventory management, whereas the basic goal may be to perform manual asset inventory management.</p>
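<p>As a rough illustration of the stretch version of that goal, here is a minimal, hypothetical sketch of automating asset inventory collection – it assumes an AWS environment and boto3 purely for illustration; any environment with an inventory API would do:</p>
<pre><code class="language-python"># A hypothetical sketch of automated asset inventory collection (AWS/boto3
# assumed only for illustration).
import boto3

def ec2_inventory(region="us-east-1"):
    """Return basic asset records for EC2 instances in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    assets = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                assets.append({
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                })
    return assets

if __name__ == "__main__":
    for asset in ec2_inventory():
        print(asset)
</code></pre>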
<p>Importantly, process goals should apply equally to infosec and to DevOps. Infosec should justify why they made a less business-friendly decision – helping curb the FUD-driven decision-making to which infosec is prone. DevOps should justify why they made a less secure decision – helping curb the tendency to treat security as an afterthought. Forcing each team to articulate its justifications creates the opportunity to, as the kids say, call out bullshit.</p>
<h2 id="goal-framing">Goal Framing</h2>
<p>Finally, <em>how</em> these goals are communicated is consequential to winning the coordinative game. Drawing from the aforementioned study involving goal conflict between curative and palliative care, there are a few lessons to glean about how to set goals. First, the study found that participants who received messaging emphasizing the conflicting nature of the goals reported increased perception of goal conflict and decreased importance of palliative care, as expected<sup id="fnref:22"><a href="#fn:22" class="footnote-ref" role="doc-noteref">22</a></sup>.</p>
<p>Second, the researchers anticipated that self-affirmation and self-reflection on values – with the goal of ameliorating cognitive dissonance associated with maintaining conflicting beliefs – would improve providers’ abilities to empathize with patients, thereby increasing the importance of palliative care. Instead, the opposite was shown to be true. Unfortunately, self-affirmation can also deactivate less valued or more difficult goals, becoming a reinforcement mechanism for moral hazard-driven beliefs.</p>
<p>These results certainly seem discouraging! So, what can be done? The key takeaway is that goals must be presented as complementary rather than conflicting, particularly when there is evidence to support it (as is true with the complementary nature of palliative and curative care). When considering infosec and DevOps goals, I have long argued that their respective goals bear far more similarities than differences<sup id="fnref:23"><a href="#fn:23" class="footnote-ref" role="doc-noteref">23</a></sup>. Empirical analysis from the most recent “2019 Accelerate: State of DevOps” report also shows that elite DevOps teams fix security issues earlier and faster than their less performant peers<sup id="fnref:24"><a href="#fn:24" class="footnote-ref" role="doc-noteref">24</a></sup>.</p>
<p>We must also include the ordering of goals in our calculus here. Sequential goal pursuit is inevitable given the relative scarcity of solutions that can accomplish multiple goals, and in light of the tendency for people to prioritize one goal over another. However, concurrent goal pursuit can still be encouraged. DevOps and infosec teams can make a list of their respective goals and perform an exercise to brainstorm where goals from each side might be achieved through the same means. As an example, tools designed to collect data for performance use cases are collecting data valuable for security use cases as well.</p>
<p>To avoid the “dilution of instrumentality” – the perception that the stone is less important in killing the two birds, because it can kill both concurrently – highlight objective information about a choice, tying its importance to each goal directly. For instance, a database monitoring tool can help accomplish both performance and security goals, thus running the risk of invoking the dilution of instrumentality. To re-anchor perception to reality, you can express how the tool relates to each goal specifically: collecting data on resource utilization for performance, and detecting abnormal query behavior for security.</p>
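<p>A minimal, hypothetical sketch of that re-anchoring: one query log feeding both goals, with field names and thresholds invented purely for illustration:</p>
<pre><code class="language-python"># A hypothetical sketch of one data source serving two goals: query latency
# for performance, abnormal query behavior for security.
from statistics import mean

query_log = [
    {"user": "app",    "duration_ms": 12,   "rows": 40},
    {"user": "app",    "duration_ms": 18,   "rows": 55},
    {"user": "app",    "duration_ms": 2400, "rows": 60},      # slow query
    {"user": "intern", "duration_ms": 35,   "rows": 250000},  # bulk read
]

# Performance goal: resource utilization (here, mean query latency).
print("mean latency (ms):", mean(q["duration_ms"] for q in query_log))

# Security goal: flag abnormal query behavior (here, unusually large reads).
suspicious = [q for q in query_log if q["rows"] > 100000]
print("suspicious queries:", suspicious)
</code></pre>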
<hr>
<h1 id="conclusion">Conclusion</h1>
<p>The relationship between DevOps and information security must be healthy for the business to thrive. This relationship, like all relationships, requires work, and understanding it as a cooperative game involving information asymmetry can inform how we can work smarter to nurture it. By leveraging team reasoning, hybrid goals (outcome and process), and framing goals as complementary and concurrent, we become a strong contender for winning this coordinative game.</p>
<p><img src="/blog/img/winning-cooperative-games.png" alt="An interface in the style of War Games declaring that &amp;ldquo;you&amp;rdquo; won by adopting the strategies of team reasoning, hybrid goals, and goal framing."></p>
<hr>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>Lui, J. (2008). <em>CSC6480: Introduction to Game Theory: Cooperative Games</em> [PowerPoint slides]. Retrieved from <a href="http://www.cse.cuhk.edu.hk/~cslui/CSC6480/cooperative_game.pdf">http://www.cse.cuhk.edu.hk/~cslui/CSC6480/cooperative_game.pdf</a>&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>The invention of the term #yolosec is, perhaps, one of my crowning achievements within the infosec industry. <a href="/speaking/us-17-Shortridge-Big-Game-Theory-Hunting.pdf">See my 2017 Black Hat talk</a> for the use of it in the context of attack trees (slide 77).&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Ferrer, R. A., Orehek, E., &amp; Padgett, L. S. (2018). Goal conflict when making decisions for others. <em>Journal of Experimental Social Psychology, 78</em>, 93-103.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Polman, E. (2012). Self–other decision making and loss aversion. <em>Organizational Behavior and Human Decision Processes, 119</em>(2), 141-150.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>CAPC (2011). 2011 public opinion research on palliative care: A report based on research by public opinion strategies. Retrieved from <a href="https://media.capc.org/filer_public/18/ab/18ab708c-f835-4380-921d-fbf729702e36/2011-public-opinion-research-on-palliative-care.pdf">https://media.capc.org/filer_public/18/ab/18ab708c-f835-4380-921d-fbf729702e36/2011-public-opinion-research-on-palliative-care.pdf</a>&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>Shah, J. Y., Friedman, R., &amp; Kruglanski, A. W. (2002). Forgetting all else: on the antecedents and consequences of goal shielding. <em>Journal of personality and social psychology, 83</em>(6), 1261.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Orehek, E., &amp; Vazeou-Nieuwenhuis, A. (2013). Sequential and concurrent strategies of multiple goal pursuit. <em>Review of General Psychology, 17</em>(3), 339-349.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>Fishbach, A., &amp; Dhar, R. (2005). Goals as excuses or guides: The liberating effect of perceived goal progress on choice. <em>Journal of Consumer Research, 32</em>(3), 370-377.&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:9">
<p>Zhang, Y., Fishbach, A., &amp; Kruglanski, A. W. (2007). The dilution model: How additional goals undermine the perceived instrumentality of a shared path. <em>Journal of personality and social psychology, 92</em>(3), 389.&#160;<a href="#fnref:9" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:10">
<p>Team reasoning’s roots are with Sugden and Bacharach, both of whom first released papers in the 1990s on the topic – so it is quite a recent theory on a relative basis.&#160;<a href="#fnref:10" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:11">
<p>Schweikard, D. P., &amp; Schmid, H. B. (2013). Collective intentionality.&#160;<a href="#fnref:11" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:12">
<p>Colman, A. M., Pulford, B. D., &amp; Rose, J. (2008). Collective rationality in interactive decisions: Evidence for team reasoning. <em>Acta psychologica, 128</em>(2), 387-397.&#160;<a href="#fnref:12" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:13">
<p>Mehta, J., Starmer, C., &amp; Sugden, R. (1994). The nature of salience: An experimental investigation of pure coordination games. <em>The American Economic Review, 84</em>(3), 658-673.&#160;<a href="#fnref:13" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:14">
<p>Colman, A. M., &amp; Gold, N. (2018). Team reasoning: Solving the puzzle of coordination. <em>Psychonomic bulletin &amp; review, 25</em>(5), 1770-1783.&#160;<a href="#fnref:14" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:15">
<p>Flippen, A. R., Hornstein, H. A., Siegal, W. E., &amp; Weitzman, E. A. (1996). A comparison of similarity and interdependence as triggers for in-group formation. <em>Personality and Social Psychology Bulletin, 22</em>(9), 882-893.&#160;<a href="#fnref:15" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:16">
<p>One such example is in the study by Lambsdorff, et al. involving the centipede game, mentioned later in this piece.&#160;<a href="#fnref:16" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:17">
<p>van Elten, J., &amp; Penczynski, S. P. (2018). Coordination games with asymmetric payoffs: An experimental study with intra-group communication. <em>Unpublished manuscript available at <a href="http://www.penczynski.de/attach/APC.pdf">http://www.penczynski.de/attach/APC.pdf</a></em>.&#160;<a href="#fnref:17" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:18">
<p>Lambsdorff, J. G., Giamattei, M., Werner, K., &amp; Schubert, M. (2018). Team reasoning—Experimental evidence on cooperation from centipede games. <em>PloS one, 13</em>(11), e0206666.&#160;<a href="#fnref:18" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:19">
<p>Chang, W., Atanasov, P., Patil, S., Mellers, B. A., &amp; Tetlock, P. E. (2017). Accountability and adaptive performance under uncertainty: A long-term view. <em>Judgment &amp; Decision Making, 12</em>(6).&#160;<a href="#fnref:19" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:20">
<p>Scholars would call it reduction of “epistemic motivation.”&#160;<a href="#fnref:20" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:21">
<p>Skitka, L. J., Mosier, K. L., &amp; Burdick, M. (1999). Does automation bias decision-making?. <em>International Journal of Human-Computer Studies, 51</em>(5), 991-1006.&#160;<a href="#fnref:21" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:22">
<p>Ferrer, R. A., Orehek, E., &amp; Padgett, L. S. (2018). Goal conflict when making decisions for others. <em>Journal of Experimental Social Psychology, 78</em>, 93-103.&#160;<a href="#fnref:22" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:23">
<p>Shortridge, K. (2018). Paint by Numbers: Measuring Resilience in Security. Retrieved from <a href="/speaking/Paint-by-Numbers-Resilience-in-Infosec-Kelly-Shortridge-AusCERT-2018.pdf">/speaking/Paint-by-Numbers-Resilience-in-Infosec-Kelly-Shortridge-AusCERT-2018.pdf</a>&#160;<a href="#fnref:23" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:24">
<p>Forsgren, N., et al. (2019). 2019 Accelerate State of DevOps Report. Retrieved from <a href="https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling">https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling</a>&#160;<a href="#fnref:24" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Analyzing the 2020 RSA Innovation Sandbox Finalists</title>
            <link>https://kellyshortridge.com/blog/posts/analyzing-2020-rsa-innovation-sandbox/</link>
            <pubDate>Thu, 06 Feb 2020 21:33:26 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/analyzing-2020-rsa-innovation-sandbox/</guid>
            <description>How do the nominees for the 2020 RSA Conference’s Innovation Sandbox look compared to prior years? Like last year, I decided to explore the funding profile of the ten selected startups – but with the benefit of comparison this year.
Total Funding Raised &amp; the Latest Stage for each 2020 RSA Innovation Sandbox Finalist ($USD millions)
The median funding raised by all ten startups this year is $14.3 million (mean of $21.3 million), which still reflects a reasonably early stage – but it reflects a 36% jump from 2019’s median of $10.5 million. Last year’s winner, Axonius, was tied for the second smallest startup funding-wise; they had only raised a $4 million seed round until a week before the competition.
Distribution of 2020 RSA Innovation Sandbox Finalists, by Funding Stage
There are no startups with that sort of profile in 2020, as they are all past the seed phase and into at least Series A. The distribution of startups at the Series A phase is even more concentrated this year (80% of all finalists vs. 60% in 2019). We have an additional nominee at the Series B phase this year, too.
However, if startup maturity is measured through temporal age[1], we see a dramatic increase in variability. The median number of months is roughly the same (29.5 months this year vs. 27.5 last year), but the range is quite different: 10 to 138 months (yes, over 11 years old) in 2020 compared to 20 to 46 months in 2019.
Category-wise, there is a decent level of variability of the problems the nominees are tackling. By far the greatest concentration is in AppSec tools, although they consist of different approaches (including RASP, WAF, AST, and risk assessment). There are two finalists tackling the security of third-party SaaS applications. The rest include PII discovery, vulnerability remediation, anti-phishing, and security awareness training.
For those of you familiar with my work, you’re aware that I dissect and analyze buzzwords in infosec with some regularity. This year’s word cloud certainly doesn’t disappoint on the buzzword front:
There were 24 investors across the most recent funding rounds for the ten nominees, with more overlap than last year. ClearSky is a lead investor in both AppOmni and INKY Technology. Costanoa Ventures is a participating investor in AppOmni and Elevate Security. Greylock Partners is a participating investor in Obsidian Security and a lead investor in Sqreen. This debunks my theory from last year that investors put forth a chosen startup from their portfolios, like movie studios do with the Oscars.
The full list of lead investors in these ten startups is:
Blossom Capital, ClearSky x2, Costanoa Ventures x2, DARPA, Defy.vc, FireBolt Ventures, General Catalyst, Greylock Partners x2, Gula Tech Adventures, GV, Inner Loop Capital, Jackson Square Ventures, Mayfield Fund, NEA, Point Nine Capital, Point72 Ventures, SignalFire, Silicon Valley Data Capital, StoneMill Ventures, TechOperators, TenEleven Ventures, Unusual Ventures, Wing Venture Capital, Y Combinator, and YL Ventures.
How do these investors overlap with the judges? Well, judge Asheem Chandna is a Partner at Greylock and a board member of Obsidian (his colleague, Sarah Guo, is on Sqreen’s board). Judge Patrick Heim is an Operating Partner at ClearSky, a board member of AppOmni, and on Elevate’s advisory board (ClearSky colleagues Peter Kuper and John Cordo are on Inky’s board). Greylock has coinvested with ClearSky (Demisto) and GV (Censys, Obsidian). ClearSky has coinvested with Costanoa Ventures (AppOmni), Greylock (Demisto), GV (CyberGRX), and TenEleven Ventures (CyberGRX).
As President of Dell Technologies Capital, Scott Darling has coinvested with some of the notable VCs on the list above, including ClearSky (CloudKnox), General Catalyst (RiskRecon), Greylock Partners (Agari), and TenEleven Ventures (JASK). Neither judges Dr. Dorit Dor nor Paul Kocher have immediately obvious ties to any of the VCs backing the 2020 nominees.
All but one 2020 Innovation Sandbox finalist is based in the U.S. (the outlier is based in Israel). 70% are based in California, a huge jump from 40% in 2019. Maryland and Pennsylvania possess one nominee each.
As a final statistic, only one company’s founding team includes any female founders, a 0% increase from 2019.
[1] Not every startup gives the precise day they were founded, so I used a combination of Crunchbase and LinkedIn data (e.g., the start date the founder lists) to get as close as possible.
</description>
            <atom:content type="html"><![CDATA[<p>How do the nominees for the <a href="https://www.rsaconference.com/usa/the-experience/innovation-programs/innovation-sandbox">2020 RSA Conference’s Innovation Sandbox</a> look compared to prior years? <a href="/blog/posts/analyzing-2019-rsa-innovation-sandbox-finalists/">Like last year</a>, I decided to explore the funding profile of the ten selected startups – but with the benefit of comparison this year.</p>
<p><img src="/blog/img/rsa-2020-sandbox-01.png" alt="Chart of funding raised by RSA Innovation Sandbox finalists"><em>Total Funding Raised &amp; the Latest Stage for each 2019 RSA Innovation Sandbox Finalist ($USD millions)</em></p>
<p>The median funding raised by all ten startups this year is $14.3 million (mean of $21.3 million), which still reflects a reasonably early stage – but it marks a 36% jump from 2019’s median of $10.5 million. Last year’s winner, Axonius, was tied for the second smallest startup funding-wise; they had only raised a $4 million seed round until a week before the competition.</p>
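<p>For the curious, here is a minimal sketch of the arithmetic behind those summary figures. The per-startup amounts below are placeholders I invented purely for illustration (only the cited medians, mean, and the 2019 baseline come from the actual data):</p>
<pre><code># Illustrative only: per-startup amounts are hypothetical placeholders, NOT the
# finalists' actual raises; they are chosen so the summary statistics land near
# the figures cited above.
from statistics import median, mean

funding_2020 = [4.9, 8.0, 10.0, 12.5, 13.6, 15.0, 20.0, 25.0, 36.0, 68.0]  # $USD millions (hypothetical)
median_2019 = 10.5  # last year's median, from the 2019 analysis

med_2020 = median(funding_2020)
print(f"2020 median: ${med_2020:.1f}M, mean: ${mean(funding_2020):.1f}M")
print(f"jump vs. 2019 median: {(med_2020 - median_2019) / median_2019:.0%}")
</code></pre>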
<figure>
    <img src="/blog/img/rsa-2020-sandbox-02.png"
         alt="Chart of the Distribution of 2020 RSA Innovation Sandbox Finalists, by Funding Stage"/> <figcaption>
            <p>Distribution of 2020 RSA Innovation Sandbox Finalists, by Funding Stage</p>
        </figcaption>
</figure>
<p>There are no startups with that sort of profile in 2020, as they are all past the seed phase and into at least Series A. The distribution of startups at the Series A phase is even more concentrated this year (80% of all finalists vs. 60% in 2019). We have an additional nominee at the Series B phase this year, too.</p>
<p>However, if startup maturity is measured through temporal age<a name="back-1"></a><a href="#cite-1">[1]</a>, we see a dramatic increase in variability. The median number of months is roughly the same (29.5 months this year vs. 27.5 last year), but the range is quite different: 10 to 138 months (yes, over 11 years old) in 2020 compared to 20 to 46 months in 2019.</p>
<p>Category-wise, there is a decent level of variability in the problems the nominees are tackling. By far the greatest concentration is in AppSec tools, although those span different approaches (including RASP, WAF, AST, and risk assessment). There are two finalists tackling the security of third-party SaaS applications. The rest include PII discovery, vulnerability remediation, anti-phishing, and security awareness training.</p>
<p>For those of you familiar with <a href="/speaking/index.html">my work</a>, you’re aware that I dissect and analyze buzzwords in infosec with some regularity. This year’s word-cloud certainly doesn’t disappoint on the buzzword front:</p>
<p><img src="/blog/img/wordcloud-2020-rsa-innovation-sandbox.png" alt="Wordcloud of buzzwords from RSA Innovation Sandbox finalists in 2020"></p>
<p>There were 24 investors across the most recent funding rounds for the ten nominees, with more overlap than last year. ClearSky is a lead investor in both AppOmni and INKY Technology. Costanoa Ventures is a participating investor in AppOmni and Elevate Security. Greylock Partners is a participating investor in Obsidian Security and a lead investor in Sqreen. This debunks my theory from last year that investors put forth a chosen startup from their portfolios, like movie studios do with the Oscars.</p>
<p>The full list of lead investors in these ten startups is:</p>
<ul>
<li>Blossom Capital</li>
<li>ClearSky x2</li>
<li>Costanoa Ventures x2</li>
<li>DARPA</li>
<li>Defy.vc</li>
<li>FireBolt Ventures</li>
<li>General Catalyst</li>
<li>Greylock Partners x2</li>
<li>Gula Tech Adventures</li>
<li>GV</li>
<li>Inner Loop Capital</li>
<li>Jackson Square Ventures</li>
<li>Mayfield Fund</li>
<li>NEA</li>
<li>Point Nine Capital</li>
<li>Point72 Ventures</li>
<li>SignalFire</li>
<li>Silicon Valley Data Capital</li>
<li>StoneMill Ventures</li>
<li>TechOperators</li>
<li>TenEleven Ventures</li>
<li>Unusual Ventures</li>
<li>Wing Venture Capital</li>
<li>Y Combinator</li>
<li>YL Ventures</li>
</ul>
<p>How do these investors overlap with the judges? Well, judge Asheem Chandna is a Partner at Greylock and a board member of Obsidian (his colleague, Sarah Guo, is on Sqreen’s board). Judge Patrick Heim is an Operating Partner at ClearSky, a board member of AppOmni, and on Elevate’s advisory board (ClearSky colleagues Peter Kuper and John Cordo are on Inky’s board). Greylock has coinvested with ClearSky (Demisto) and GV (Censys, Obsidian). ClearSky has coinvested with Costanoa Ventures (AppOmni), Greylock (Demisto), GV (CyberGRX), and TenEleven Ventures (CyberGRX).</p>
<p>As President of Dell Technologies Capital, Scott Darling has coinvested with some of the notable VCs on the list above, including ClearSky (CloudKnox), General Catalyst (RiskRecon), Greylock Partners (Agari), and TenEleven Ventures (JASK). Neither judge Dr. Dorit Dor nor judge Paul Kocher has immediately obvious ties to any of the VCs backing the 2020 nominees.</p>
<p>All but one 2020 Innovation Sandbox finalist is based in the U.S. (the outlier is based in Israel). 70% are based in California, a huge jump from 40% in 2019. Maryland and Pennsylvania possess one nominee each.</p>
<p>As a final statistic, only one company’s founding team includes any female founders, a 0% increase from 2019.</p>
<hr>
<p><a name="cite-1"></a><a href="#back-1">[1]</a> Not every startup gives the precise day they were founded, so I used a combination of Crunchbase and LinkedIn data (e.g., the start date the founder lists) to get as close as possible.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2019 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2019-reading-list/</link>
            <pubDate>Tue, 17 Dec 2019 20:00:00 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2019-reading-list/</guid>
            <description>This year, it’s blatantly obvious that I clung to escapism, as evidenced by a fiction list that dwarfs the non-fiction list. What’s more, I averaged around 2.7 books per month in 2019 – roughly double what I achieved the previous three years.
In contrast to prior years, I would say this year’s fiction list is weighted towards fantasy more than sci-fi. And, because I began a new vocational journey in a leadership position, my non-fiction books included business-y picks (unlike years prior). As always, I don’t provide ratings or reviews of each book here – I leave it to you to investigate them on your own.
If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out my 2018, my 2017, and my 2016 reading lists.
Fiction
2666 by Roberto Bolaño
An Unkindness of Ghosts by Rivers Solomon
The Brothers Karamazov by Fyodor Dostoevsky (re-read)1
The Bird King by G. Willow Wilson
Certain Dark Things by Silvia Moreno-Garcia
Circe by Madeline Miller
The City of Brass by S.A. Chakraborty
The Devil in America by Kai Ashante Wilson
The Devourers by Indra Das
Empire of Sand by Tasha Suri
Everfair by Nisi Shawl
Ficciones by Jorge Luis Borges (re-read)2
Gideon the Ninth by Tamsyn Muir
Gods, Monsters, and the Lucky Peach by Kelly Robson
Gods of Jade and Shadow by Silvia Moreno-Garcia
Kingdom of Copper by S.A. Chakraborty
The Haunting of Tram Car 015 by P. Djèlí Clark
Mistborn trilogy by Brandon Sanderson
Nexus by Ramez Naam
No Longer Human by Osamu Dazai
Rosewater by Tade Thompson
Spinning Silver by Naomi Novik
Storm of Locusts by Rebecca Roanhorse
To the Lighthouse by Virginia Woolf
Trail of Lightning by Rebecca Roanhorse
Uprooted by Naomi Novik
Waste Tide by Chen Qiufan
Non-Fiction
Cheating and Deception by J. Bowyer Bell and Barton Whaley
Good Strategy/Bad Strategy: The Difference and Why It Matters by Richard Rumelt
Her Majesty’s Spymaster: Elizabeth I, Sir Francis Walsingham, and the Birth of Modern Espionage by Stephen Budiansky
The Inconvenient Indian: A Curious Account of Native People in North America by Thomas King
Lost Enlightenment: Central Asia’s Golden Age from the Arab Conquest to Tamerlane by S. Frederick Starr
The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change by Camille Fournier
One of my favorite lines from “Slaughterhouse Five” is: “He said that everything there was to know about life is in ‘The Brothers Karamazov,’ by Fyodor Dostoevsky. ‘But that isn’t enough anymore,’ said Rosewater.” Given current times, this line kept rattling around in my head and compelled me to re-read it. ↩︎
“No hay ejercicio intelectual que no sea finalmente inútil.” ↩︎
</description>
            <atom:content type="html"><![CDATA[<p>This year, it&rsquo;s blatantly obvious that I clung to escapism, as evidenced by a fiction list that dwarfs the non-fiction list. What&rsquo;s more, I averaged around 2.7 books per month in 2019 &ndash; roughly double what I achieved the previous three years.</p>
<p>In contrast to prior years, I would say this year&rsquo;s fiction list is weighted towards fantasy more than sci-fi. And, because I began a new vocational journey in a leadership position, my non-fiction books included business-y picks (unlike years prior). As always, I don&rsquo;t provide ratings or reviews of each book here &ndash; I leave it to you to investigate them on your own.</p>
<p>If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out <a href="/blog/posts/2018-reading-list">my 2018</a>, <a href="/blog/posts/2017-reading-list">my 2017</a>, and <a href="/blog/posts/2016-reading-list">my 2016</a> reading lists.</p>
<h2 id="fiction">Fiction</h2>
<p><a href="https://www.amazon.com/2666-Novel-Roberto-Bola%C3%B1o/dp/0312429215/">2666</a> by Roberto Bolaño</p>
<p><a href="https://www.amazon.com/Unkindness-Ghosts-Rivers-Solomon/dp/1617755885/">An Unkindness of Ghosts</a> by Rivers Solomon</p>
<p><a href="https://www.amazon.com/Brothers-Karamazov-Fyodor-Dostoevsky/dp/0374528373/">The Brothers Karamazov</a> by Fyodor Dostoevsky (re-read)<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p><a href="https://www.amazon.com/Bird-King-G-Willow-Wilson/dp/080212903X/">The Bird King</a> by G. Willow Wilson</p>
<p><a href="https://www.amazon.com/Certain-Dark-Things-Silvia-Moreno-Garcia/dp/1250099080/">Certain Dark Things</a> by Silvia Moreno-Garcia</p>
<p><a href="https://www.amazon.com/CIRCE-New-York-Times-bestseller/dp/0316556343/">Circe</a> by Madeline Miller</p>
<p><a href="https://www.amazon.com/City-Brass-Novel-Daevabad-Trilogy/dp/0062678108/">The City of Brass</a> by S.A. Chakraborty</p>
<p><a href="https://www.amazon.com/Devil-America-Tor-Com-Original-ebook/dp/B00IW3DCRK/">The Devil in America</a> by Kai Ashante Wilson</p>
<p><a href="https://www.amazon.com/Devourers-Novel-Indra-Das/dp/1101967536/">The Devourers</a> by Indra Das</p>
<p><a href="https://www.amazon.com/Empire-Sand-Books-Ambha-Tasha/dp/0316449717/">Empire of Sand</a> by Tasha Suri</p>
<p><a href="https://www.amazon.com/Everfair-Novel-Nisi-Shawl/dp/076533805X/">Everfair</a> by Nisi Shawl</p>
<p><a href="https://www.amazon.com/Ficciones-Jorge-Luis-Borges/dp/0802130305/">Ficciones</a> by Jorge Luis Borges (re-read)<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></p>
<p><a href="https://www.amazon.com/Gideon-Ninth-Tamsyn-Muir/dp/1250313198/">Gideon the Ninth</a> by Tamsyn Muir</p>
<p><a href="https://www.amazon.com/Monsters-Lucky-Peach-Kelly-Robson/dp/1250163854/">Gods, Monsters, and the Lucky Peach</a> by Kelly Robson</p>
<p><a href="https://www.amazon.com/Gods-Jade-Shadow-Silvia-Moreno-Garcia/dp/0525620753/">Gods of Jade and Shadow</a> by Silvia Moreno-Garcia</p>
<p><a href="https://www.amazon.com/Kingdom-Copper-Novel-Daevabad-Trilogy/dp/0062678132/">Kingdom of Copper</a> by S.A. Chakraborty</p>
<p><a href="https://www.amazon.com/Haunting-Tram-Car-DJ%C3%88L%C3%8D-CLARK/dp/1250294800/">The Haunting of Tram Car 015</a> by P. Djèlí Clark</p>
<p><a href="https://www.amazon.com/Mistborn-Trilogy-Boxed-Hero-Ascension/dp/076536543X/">Mistborn trilogy</a> by Brandon Sanderson</p>
<p><a href="https://www.amazon.com/Nexus-Ramez-Naam/dp/0857662929/">Nexus</a> by Ramez Naam</p>
<p><a href="https://www.amazon.com/No-Longer-Human-Osamu-Dazai/dp/0811204812/">No Longer Human</a> by Osamu Dazai</p>
<p><a href="https://www.amazon.com/Rosewater-Wormwood-Trilogy-Tade-Thompson/dp/0316449059/">Rosewater</a> by Tade Thompson</p>
<p><a href="https://www.amazon.com/Spinning-Silver-Novel-Naomi-Novik/dp/0399180990/">Spinning Silver</a> by Naomi Novik</p>
<p><a href="https://www.amazon.com/Storm-Locusts-Sixth-Rebecca-Roanhorse/dp/1534413529/">Storm of Locusts</a> by Rebecca Roanhorse</p>
<p><a href="https://www.amazon.com/Lighthouse-Virginia-Woolf/dp/0156907399/">To the Lighthouse</a> by Virginia Woolf</p>
<p><a href="https://www.amazon.com/Trail-Lightning-Sixth-Rebecca-Roanhorse/dp/1534413499/">Trial of Lightning</a> by Rebecca Roanhorse</p>
<p><a href="https://www.amazon.com/Uprooted-Novel-Naomi-Novik/dp/0804179050/">Uprooted</a> by Naomi Novik</p>
<p><a href="https://www.amazon.com/Waste-Tide-Chen-Qiufan/dp/0765389312/">Waste Tide</a> by Chen Qiufan</p>
<hr>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://www.amazon.com/Cheating-Deception-J-Bowyer-Bell/dp/088738868X/">Cheating and Deception</a> by J. Bowyer Bell and Barton Whaley</p>
<p><a href="https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239/">Good Strategy/Bad Strategy: The Difference and Why It Matters</a> by Richard Rumelt</p>
<p><a href="https://www.amazon.com/Her-Majestys-Spymaster-Elizabeth-Walsingham/dp/0452287472/">Her Majesty&rsquo;s Spymaster: Elizabeth I, Sir Francis Walsingham, and the Birth of Modern Espionage</a> by Stephen Budiansky</p>
<p><a href="https://www.amazon.com/Inconvenient-Indian-Curious-Account-America/dp/0816689768">The Inconvenient Indian: A Curious Account of Native People in North America</a> by Thomas King</p>
<p><a href="https://www.amazon.com/Lost-Enlightenment-Central-Conquest-Tamerlane/dp/0691165858/">Lost Enlightenment: Central Asia&rsquo;s Golden Age from the Arab Conquest to Tamerlane</a> by S. Frederick Starr</p>
<p><a href="https://www.amazon.com/Managers-Path-Leaders-Navigating-Growth/dp/1491973897/">The Manager&rsquo;s Path: A Guide for Tech Leaders Navigating Growth and Change</a> by Camille Fournier</p>
<hr>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>One of my favorite lines from &ldquo;Slaughterhouse Five&rdquo; is: <em>&ldquo;He said that everything there was to know about life is in &lsquo;The Brothers Karamazov,&rsquo; by Fyodor Dostoevsky. &lsquo;But that isn&rsquo;t enough anymore,&rsquo; said Rosewater.&rdquo;</em> Given current times, this line kept rattling around in my head and compelled me to re-read it.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p><em>&ldquo;No hay ejercicio intelectual que no sea finalmente inútil.&rdquo;</em>&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>Ransomware: Towards an Economic Equilibrium</title>
            <link>https://kellyshortridge.com/blog/posts/ransomware-towards-an-economic-equilibrium/</link>
            <pubDate>Sun, 15 Dec 2019 20:58:46 -0500</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/ransomware-towards-an-economic-equilibrium/</guid>
            <description>
On my vacation to Namibia earlier this year, I caught up on podcasts during the long stretches of driving (albeit in breathtaking scenery). One episode in particular jolted my mental juices – the EconTalk episode featuring Dr. Anja Shortland discussing her book Kidnap. The fascinating chat between Dr. Shortland and host Russ Roberts delved into the incentive mechanisms between kidnappers who demand ransom and insurers, and the economic paradigm that results.
Naturally, there are implications for ransomware, the digital counterpart to physical ransom. In particular, I believe the role cyber insurance should play deserves closer examination, but there are other spicy takes that fomented as I listened to and digested the podcast episode. This post will explore my thoughts on how the economics of physical ransom translate to digital ransom, and how we as an industry might want to reconceive our current approaches to considering and dealing with ransomware – and the criminals who run ransomware campaigns.
The Shadow of the Future
First, let us set the stage with a discussion of the “shadow of the future” and how it applies to the ransom market. The “shadow of the future” is a game theoretic concept that explains how people will behave differently if they believe they will need to interact with their current counterparty again (“repeated games” in game theory lingo). This belief encourages cooperation between counterparties, as they will play much nicer in the hope of receiving better treatment in future interactions.1 Thus, the future lingers like a “shadow” over current interactions.
In the kidnapping for ransom world, criminals cast a shadow of the future. If they desire to kidnap again, they want to “behave” in the immediate interaction to help them in their future business. If kidnappers demonstrate they will honor their promises upon receipt of a ransom, then future victims will be more likely to pay ransoms, believing that promises are more likely to be kept. This, Dr. Shortland surmises, is why 97.5% of kidnapping cases “go right” – which is better than the success rate of many legitimate transactions.
An example Dr. Shortland gives is that Somali pirates, upon receiving ransom for multimillion-dollar sea vessels, are repeatedly seen to leave the vessels and not re-hijack them again. The exchange of hijacked sea vessels for cash consistently goes right for all parties involved – even in a situation that is rife with mistrust and conflict. The Somali pirates are now trustable in a twisted way, as owners of sea vessels know that they will indeed have their ships returned reliably if they pay the ransom. As Dr. Shortland points out, this shadow of the future applies to kidnapping gangs within cities as well, through the power of local gossip.
How does the shadow of the future apply to ransomware? It certainly should apply, as few attackers run ransomware campaigns against only one target, thus creating the opportunity for future interactions with victims. Victims, at least in theory, would be more likely to pay ransoms to criminal groups who were known to reliably decrypt data, especially those who do so without any data loss.
Unfortunately, the data on ransomware and success rates (on both sides) is limited, particularly when getting as granular as specific ransomware types. At first blush, the statistic from the first quarter of 2019 that 96%2 of ransomware payments were successful seems to match the high success rate of kidnapping cases. However, that just represents the number of victims who received a decryption tool. The recovery rate in the same survey ranges from 80% to 100%, depending on the ransomware type. Other surveys suggest the data recovery rate is as low as 66%3 to 75%4.
These statistics suggest a far more inefficient market for ransomware than exists for physical ransom. Additional supporting evidence of this inefficiency arises from the mismatch between the ransomware campaigns with the most reliable data recovery and the campaigns with the highest ransoms demanded. Ryuk requests $286,557 on average5 (due to targeting larger enterprises), despite an approximately 80% data recovery rate. GandCrab, in contrast, demands just under $8,000 but offers an approximately 100% data recovery rate. This seems like a far cry from the “nice equilibrium” that exists in the physical sphere.
Encouraging Better Attackers
This kind of illegal activity is an inevitability – neither physical nor digital ransom can be fully prevented. Therefore, we must optimize given the constraints of this reality. Just like you want to encourage kidnappers that do not kill their victims, we want to encourage attackers that do not lose ransomed data.
With this grounding aspiration in mind, let me offer you my ghost pepper-level take. There is some level of attacker activity that represents a healthy equilibrium, and defining that equilibrium (perhaps “only teams that can discover and weaponize 0day”) is healthier than trying to stop attacker activity. That is, if we can eliminate the script kiddies and #basicbitch6 attackers who lack the operational resources to ensure data fidelity when conducting digital ransom, organizations will be better off – even if they are still hit by sophisticated attackers who will receive payments but reliably facilitate data recovery.
As Dr. Shortland cited, the presence of armed guards in the Niger delta creates a “nice equilibrium,” because it, in essence, raises the cost of attack. Disorganized, resource-constrained street gangs cannot pursue physical ransom as one of their normal activities when they must contend with armed guards. Only more sophisticated criminal organizations will possess the means to pursue physical ransom – and they are vastly more likely to adhere to the aforementioned shadow of the future, leading them to behave professionally and responsibly.
The Role of Cyber Insurance
How can we encourage this kind of equilibrium? If we look to the physical ransom domain, insurance companies play a critical role. Kidnap insurance creates stability in the market, reducing the friction between “buyers” (the victim’s representatives) and “sellers” (the kidnappers). Of course, it would be remiss of me not to acknowledge the dark side of insurance – the creation of macro-level moral hazard. Kidnap insurance’s very existence allows kidnapping to be an especially profitable ongoing activity. However, the safer ends – based on current evidence – seem to suitably justify the means.
The insurers’ goal, in the kidnapping market, is to eliminate stupid or irrational kidnappers from the market – those who make errors in process or judgment, such as cutting off fingers to hasten payment of ransom. Likewise, cyber insurance’s goal should be the same, to eliminate the skidiots7 who could accidentally brick systems or delete data and to encourage more sophisticated attackers who are true to their promises and conduct their operations reliably and professionally.
One can envision a world in which a cyber insurance company negotiates a flat rate to receive the decryptor, then sends a bonus if all data is recovered. Such a scheme would incentivize attackers to maximize victim data recovery upon receipt of payment. This, of course, relies on the “shadow of the future” – but it is in the best interests of the victims, cyber insurance firms, and attackers to develop the trust that full payments will be received and data will be fully recovered.
You may be thinking to yourself, “This feels really icky – aren’t the attackers winning here?” Attackers will continue to be a reality, as, likely, will ransomware. By accepting reality, we can depart from the unrealistic goal of “eliminate all ransomware attacks” to “maximize reliability of data recovery in ransomware attacks.” Part of maximizing reliability is encouraging better attackers, which is done by raising the cost of attack.
Naturally, there are caveats. Both physical and digital ransom are examples of markets in which imperfect information is actually advantageous to the “good” side, as insurance companies benefit from information being withheld from criminals. Accordingly, insurance companies should withhold the extent to which individuals are insured to prevent victims from divulging how much money the attackers could receive. The attackers will logically ask for the maximum ransom payment based on their expectations – so it is better to minimize their expectations as much as possible.
As Dr. Shortland advises, the goal is to “manage the size of the towel the [attackers] think they’re squeezing.” We – ideally, through insurance companies – must convince the attacker that they have reaped everything they could from their attack. Therein lies the beautiful pragmatism of this strategy: we focus on the elements we can control (perceptions) rather than the elements we cannot (motivations, resources, etc.). Yes, implementing proper backup and recovery solutions is within defenders’ control, too, but as we can see from the seemingly weekly headlines about ransomware incidents, we need a sane Plan B.
Conclusion
In general, there is much information security can learn from domains which have a rich history of experimenting with solutions to similar problems. Physical ransom, in which a rather efficient market between attackers, victims, and insurance companies exists, is a pertinent exemplar for how infosec can more efficiently deal with ransomware.
While it may feel uncomfortable to accept a healthy level of malicious activity, at a certain point, we must become pragmatic rather than wallowing in sententious idealism. We can never fully prevent attacks, and that goes for ransomware as well. But we do have a chance to encourage more intelligent attackers – who operate professionally, incentivized by ongoing business interests – so that the ultimate impact of ransomware is less deleterious than what transpires under the claws of disorganized, incompetent attackers.
This may not be the safer world we imagined, but, as Machiavelli advised centuries ago, “Prudence consists in knowing how to distinguish the character of troubles, and for choice to take the lesser evil.”8 We owe it to those who depend on our expertise to think more boldly in how we can build a sustainable equilibrium in a world – digital and otherwise – that will always be troubled by aggressors.
This, as with many things in economics and game theory, does not always hold true, depending on the context. In the realm of international relations, where the shadow of the future is commonly studied, some researchers suggest the shadow can actually harm cooperation. ↩︎
Statistic sourced via Coveware ↩︎
Trend Micro, 2016 ↩︎
Fortinet, 2016 ↩︎
Both Ryuk and GandCrab’s monetary amounts are taken from the Coveware report in footnote 2. ↩︎
I still have yet to find a word or term that so immediately evokes the blend of insipidness, mediocrity, and formulaicity that “basic bitch” conveys. ↩︎
Credit to @r00tkillah for this delightful term for script kiddies. ↩︎
This quote is, of course, from “The Prince.” ↩︎
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/ransomware.JPG" alt="Image of someone giving money in exchange for a key"></p>
<p>On my vacation to Namibia earlier this year, I caught up on podcasts during the long stretches of driving (albeit in breathtaking scenery). One episode in particular jolted my mental juices – the <a href="http://www.econtalk.org/anja-shortland-on-kidnap/">EconTalk episode featuring Dr. Anja Shortland</a> discussing her book <em><a href="https://www.amazon.com/Kidnap-Inside-Business-Anja-Shortland/dp/0198815476">Kidnap</a></em>. The fascinating chat between Dr. Shortland and host Russ Roberts delved into the incentive mechanisms between kidnappers who demand ransom and insurers, and the economic paradigm that results.</p>
<p>Naturally, there are implications for ransomware, the digital counterpart to physical ransom. In particular, I believe the role cyber insurance should play deserves closer examination, but there are other spicy takes that fomented as I listened to and digested the podcast episode. This post will explore my thoughts on how the economics of physical ransom translate to digital ransom, and how we as an industry might want to reconceive our current approaches to considering and dealing with ransomware – and the criminals who run ransomware campaigns.</p>
<hr>
<h1 id="the-shadow-of-the-future">The Shadow of the Future</h1>
<p>First, let us set the stage with a discussion of the “shadow of the future” and how it applies to the ransom market. The “shadow of the future” is a game theoretic concept that explains how people will behave differently if they believe they will need to interact with their current counterparty again (“repeated games” in game theory lingo). This belief encourages cooperation between counterparties, as they will play much nicer in the hope of receiving better treatment in future interactions.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> Thus, the future lingers like a “shadow” over current interactions.</p>
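<p>A toy calculation makes that incentive concrete. Every number below is hypothetical – invented purely to illustrate the repeated-games logic – but it shows why a counterparty who keeps honoring the deal earns a discounted stream of future payments, while one who burns their reputation collects once and then largely stops getting paid:</p>
<pre><code># Hypothetical illustration of the "shadow of the future" -- none of these
# numbers come from real data; they only show why reputation pays off.
ransom = 8_000            # payoff per successful exchange (hypothetical)
p_pay_if_reliable = 0.9   # victims' willingness to pay a known-reliable group (hypothetical)
p_pay_if_burned = 0.1     # willingness to pay once trust is destroyed (hypothetical)
discount = 0.9            # how much future interactions are valued
rounds = 20               # expected number of future interactions

def discounted(p_pay, first_round):
    """Discounted expected ransom income from first_round onward."""
    return sum(ransom * p_pay * discount**t for t in range(first_round, rounds))

honor_the_deal = discounted(p_pay_if_reliable, 0)          # reputation keeps victims paying
burn_reputation = ransom + discounted(p_pay_if_burned, 1)  # paid once, then trust evaporates

print(f"honor the deal:  ${honor_the_deal:,.0f}")
print(f"burn reputation: ${burn_reputation:,.0f}")
</code></pre>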
<p>In the kidnapping for ransom world, criminals cast a shadow of the future. If they desire to kidnap again, they want to “behave” in the immediate interaction to help them in their future business. If kidnappers demonstrate they will honor their promises upon receipt of a ransom, then future victims will be more likely to pay ransoms, believing that promises are more likely to be kept. This, Dr. Shortland surmises, is why 97.5% of kidnapping cases “go right” – which is better than the success rate of many <em>legitimate</em> transactions.</p>
<p>An example Dr. Shortland gives is that Somali pirates, upon receiving ransom for multimillion-dollar sea vessels, are repeatedly seen to leave the vessels and not re-hijack them again. The exchange of hijacked sea vessels for cash consistently goes right for all parties involved – even in a situation that is rife with mistrust and conflict. The Somali pirates are now trustable in a twisted way, as owners of sea vessels know that they will indeed have their ships returned reliably if they pay the ransom. As Dr. Shortland points out, this shadow of the future applies to kidnapping gangs within cities as well, through the power of local gossip.</p>
<p>How does the shadow of the future apply to ransomware? It certainly <em>should</em> apply, as few attackers run ransomware campaigns against only one target, thus creating the opportunity for future interactions with victims. Victims, at least in theory, would be more likely to pay ransoms to criminal groups who were known to reliably decrypt data, especially those who do so without any data loss.</p>
<p>Unfortunately, the data on ransomware and success rates (on both sides) is limited, particularly when getting as granular as specific ransomware types. At first blush, the statistic from the first quarter of 2019 that 96%<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> of ransomware payments were successful seems to match the high success rate of kidnapping cases. However, that just represents the number of victims who received a decryption tool. The recovery rate in the same survey ranges from 80% to 100%, depending on the ransomware type. Other surveys suggest the data recovery rate is as low as 66%<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup> to 75%<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>.</p>
<p>These statistics suggest a far more inefficient market for ransomware than exists for physical ransom. Additional supporting evidence of this inefficiency arises from the mismatch between the ransomware campaigns with the most reliable data recovery and the campaigns with the highest ransoms demanded. Ryuk requests $286,557 on average<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> (due to targeting larger enterprises), despite an approximately 80% data recovery rate. GandCrab, in contrast, demands just under $8,000 but offers an approximately 100% data recovery rate. This seems like a far cry from the “nice equilibrium” that exists in the physical sphere.</p>
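<p>One crude way to see that mismatch – my own back-of-the-envelope framing, not a statistic from any report – is expected data recovered per dollar of ransom paid, using the approximate figures above:</p>
<pre><code># Back-of-the-envelope using the approximate figures cited above; the per-dollar
# ratio is my own framing, not a number from the Coveware report.
campaigns = {
    "Ryuk":     {"avg_ransom_usd": 286_557, "recovery_rate": 0.80},
    "GandCrab": {"avg_ransom_usd": 8_000,   "recovery_rate": 1.00},  # "just under $8,000"
}

for name, c in campaigns.items():
    per_dollar = c["recovery_rate"] / c["avg_ransom_usd"]
    print(f"{name}: {per_dollar:.2e} expected recovery per ransom dollar")
</code></pre>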
<hr>
<h1 id="encouraging-better-attackers">Encouraging Better Attackers</h1>
<p>This kind of illegal activity is an inevitability – neither physical nor digital ransom can be fully prevented. Therefore, we must optimize given the constraints of this reality. Just like you want to encourage kidnappers that do not kill their victims, we want to encourage attackers that do not lose ransomed data.</p>
<p>With this grounding aspiration in mind, let me offer you my ghost pepper-level take. There is some level of attacker activity that represents a healthy equilibrium, and defining that equilibrium (perhaps “only teams that can discover and weaponize 0day”) is healthier than trying to stop attacker activity. That is, if we can eliminate the <a href="https://en.wikipedia.org/wiki/Script_kiddie">script kiddies</a> and #basicbitch<sup id="fnref:6"><a href="#fn:6" class="footnote-ref" role="doc-noteref">6</a></sup> attackers who lack the operational resources to ensure data fidelity when conducting digital ransom, organizations will be better off – even if they are still hit by sophisticated attackers who will receive payments but reliably facilitate data recovery.</p>
<p>As Dr. Shortland cited, the presence of armed guards in the Niger delta creates a “nice equilibrium,” because it, in essence, raises the cost of attack. Disorganized, resource-constrained street gangs cannot pursue physical ransom as one of their normal activities when they must contend with armed guards. Only more sophisticated criminal organizations will possess the means to pursue physical ransom – and they are vastly more likely to adhere to the aforementioned shadow of the future, leading them to behave professionally and responsibly.</p>
<hr>
<h1 id="the-role-of-cyber-insurance">The Role of Cyber Insurance</h1>
<p>How can we encourage this kind of equilibrium? If we look to the physical ransom domain, insurance companies play a critical role. Kidnap insurance creates stability in the market, reducing the friction between “buyers” (the victim’s representatives) and “sellers” (the kidnappers). Of course, it would be remiss of me not to acknowledge the dark side of insurance – the creation of macro-level moral hazard. Kidnap insurance’s very existence allows kidnapping to be an especially profitable ongoing activity. However, the safer ends – based on current evidence – seem to suitably justify the means.</p>
<p>The insurers’ goal, in the kidnapping market, is to eliminate stupid or irrational kidnappers from the market – those who make errors in process or judgment, such as cutting off fingers to hasten payment of ransom. Likewise, cyber insurance’s goal should be the same, to eliminate the skidiots<sup id="fnref:7"><a href="#fn:7" class="footnote-ref" role="doc-noteref">7</a></sup> who could accidentally brick systems or delete data and to encourage more sophisticated attackers who are true to their promises and conduct their operations reliably and professionally.</p>
<p>One can envision a world in which a cyber insurance company negotiates a flat rate to receive the decryptor, then sends a bonus if all data is recovered. Such a scheme would incentivize attackers to maximize victim data recovery upon receipt of payment. This, of course, relies on the “shadow of the future” – but it is in the best interests of the victims, cyber insurance firms, and attackers to develop the trust that full payments will be received and data will be fully recovered.</p>
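<p>A stylized sketch of that payoff structure (every amount below is invented for illustration): as long as the bonus exceeds the cost of doing the job properly, full recovery becomes the attacker’s profit-maximizing move.</p>
<pre><code># Hypothetical flat-rate-plus-bonus payoffs -- invented numbers, only to show
# why the bonus tilts the incentive toward reliable, full data recovery.
flat_rate = 50_000      # negotiated payment for the decryptor (hypothetical)
bonus = 20_000          # paid only if all data is recovered (hypothetical)
effort_cost = 5_000     # attacker's cost of doing the job properly (hypothetical)

careless = flat_rate                        # sloppy decryptor, partial recovery, no bonus
careful = flat_rate + bonus - effort_cost   # reliable recovery earns the bonus

print(f"careless payoff: ${careless:,}")
print(f"careful payoff:  ${careful:,}")
</code></pre>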
<p>You may be thinking to yourself, “This feels really icky – aren’t the attackers winning here?” Attackers will continue to be a reality, as, likely, will ransomware. By accepting reality, we can depart from the unrealistic goal of “eliminate all ransomware attacks” to “maximize reliability of data recovery in ransomware attacks.” Part of maximizing reliability is encouraging better attackers, which is done by raising the cost of attack.</p>
<p>Naturally, there are caveats. Both physical and digital ransom are examples of markets in which imperfect information is actually advantageous to the “good” side, as insurance companies benefit from information being withheld from criminals. Accordingly, insurance companies should withhold the extent to which individuals are insured to prevent victims from divulging how much money the attackers could receive. The attackers will logically ask for the maximum ransom payment based on their expectations – so it is better to minimize their expectations as much as possible.</p>
<p>As Dr. Shortland advises, the goal is to “manage the size of the towel the [attackers] think they’re squeezing.” We – ideally, through insurance companies – must convince the attacker that they have reaped everything they could from their attack. Therein lies the beautiful pragmatism of this strategy: we focus on the elements we can control (perceptions) rather than the elements we cannot (motivations, resources, etc.). Yes, implementing proper backup and recovery solutions is within defenders’ control, too, but as we can see from the seemingly weekly headlines about ransomware incidents, we need a sane Plan B.</p>
<hr>
<h1 id="conclusion">Conclusion</h1>
<p>In general, there is much information security can learn from domains which have a rich history of experimenting with solutions to similar problems. Physical ransom, in which a rather efficient market between attackers, victims, and insurance companies exists, is a pertinent exemplar for how infosec can more efficiently deal with ransomware.</p>
<p>While it may feel uncomfortable to accept a healthy level of malicious activity, at a certain point, we must become pragmatic rather than wallowing in sententious idealism. We can never fully prevent attacks, and that goes for ransomware as well. But we do have a chance to encourage more intelligent attackers – who operate professionally, incentivized by ongoing business interests – so that the ultimate impact of ransomware is less deleterious than what transpires under the claws of disorganized, incompetent attackers.</p>
<p>This may not be the safer world we imagined, but, as Machiavelli advised centuries ago, “Prudence consists in knowing how to distinguish the character of troubles, and for choice to take the lesser evil.”<sup id="fnref:8"><a href="#fn:8" class="footnote-ref" role="doc-noteref">8</a></sup> We owe it to those who depend on our expertise to think more boldly in how we can build a sustainable equilibrium in a world – digital and otherwise – that will always be troubled by aggressors.</p>
<hr>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>This, as with many things in economics and game theory, does not always hold true, depending on the context. In the realm of international relations, where the shadow of the future is commonly studied, <a href="https://www.sciencedirect.com/science/article/pii/0167268195000771">some researchers</a> <a href="https://www.sciencedirect.com/science/article/pii/0167268195000771">suggest</a> the shadow can actually harm cooperation.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>Statistic sourced <a href="https://www.coveware.com/blog/2019/4/15/ransom-amounts-rise-90-in-q1-as-ryuk-ransomware-increases">via Coveware</a>&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p><a href="https://blog.trendmicro.com/paying-for-ransomware-could-cost-you-more-than-just-the-ransom/">Trend Micro, 2016</a>&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p><a href="https://www.fortinet.com/content/dam/fortinet/assets/white-papers/WP-Mapping-The-Ransomware-Landscape.pdf">Fortinet, 2016</a>&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>Both Ryuk and GandCrab&rsquo;s monetary amounts are taken from the Coveware report in footnote 2.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:6">
<p>I still have yet to find a word or term that so immediately evokes the blend of insipidness, mediocrity, and formulaicity that “basic bitch” conveys.&#160;<a href="#fnref:6" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:7">
<p>Credit to <a href="https://twitter.com/r00tkillah">@r00tkillah</a> for this delightful term for script kiddies.&#160;<a href="#fnref:7" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:8">
<p>This quote is, of course, from <a href="https://www.gutenberg.org/files/1232/1232-h/1232-h.htm">&ldquo;The Prince.&rdquo;</a>&#160;<a href="#fnref:8" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></atom:content>
        </item>
        
        <item>
            <title>When Prospect Theory Meets Chaos Engineering</title>
            <link>https://kellyshortridge.com/blog/posts/when-prospect-theory-meets-chaos-engineering/</link>
            <pubDate>Mon, 12 Aug 2019 08:00:00 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/when-prospect-theory-meets-chaos-engineering/</guid>
            <description>There is a connection between my research into behavioral models of information security and my Black Hat USA talk with Dr. Forsgren from last week regarding DevOps and the future of information security that might not be immediately obvious. However, it is a connection I believe is worth illuminating.
Adopting the “chaotic infosec” philosophy (the application of chaos engineering to information security) can influence those who build and secure systems to operate more like attackers – that is, to be more risk averse and to more quickly update their mental models given new inputs. This post will explore what exactly this means, and why it represents a valuable mindshift for organizational security.
How does Prospect Theory apply to security? While I encourage you to read my prior post on applying Prospect Theory to information security, I recognize it is a bit long. I will thus summarize its relevant points here.
Humans make decisions by setting a reference point (their view of the status quo) against which they measure the decisions’ potential outcomes. Empirical evidence shows that people generally prefer options that provide a small but certain gain rather than options that provide a larger but uncertain gain. They also generally prefer options that provide a minuscule chance of losing nothing (with a larger chance of losing a lot) rather than options that provide a certain chance of losing less.
These results, which are modelled by Prospect Theory, suggest that people are risk averse when making decisions with positive potential outcomes (when they are in the “Gain Domain”), and risk seeking when making decisions with negative potential outcomes (when they are in the “Loss Domain”). The further people get from their reference point (again, their perceived status quo), the more these trends are exaggerated. People who experience heavy losses become even more risk seeking in an attempt to jump out of the figurative hole, and people who experience substantial gains become even more risk averse to preserve their winnings.
My hypothesis was that defenders set their reference point based on a status quo where their organization is perceived to be secure and uncompromised (to simplify a bit). The nature of enterprise defense means that defenders will operate in the Loss Domain as a result – compromises are inevitable, as are failures to maintain a level of security posture that defenders find acceptable. Attackers, in contrast, operate in the Gain Domain – because their reference point is their current level of compromise, and, generally, they are likely to positively further their position of pwnage.
As a result, defenders are more risk seeking, leading them to adopt more speculative solutions to attempt to reach their reference point of perfect prevention instead of adopting more basic solutions with larger probabilities of success but a perception of being less likely to stop a big, sexy attack. Attackers, on the other hand, are more risk averse, moving more cautiously to achieve their goals, opting first for inexpensive methods before moving to expensive methods.
How does chaos transform reference points? Applying the principles of chaos engineering to information security, as outlined in my Black Hat USA 2019 talk, changes the reference point for defenders. If you extend the chaos engineering philosophy of “things will fail” to “things will be pwned,” then the status quo for security teams becomes the assumption of compromise.
For traditional enterprise security programs, mapping Prospect Theory to security shows a state of being that is fragile to reality. This state anchors those involved in enterprise defense to the Loss Domain, as shown in the graph below, encouraging risk seeking and reactive behaviors.
In contrast, applying chaos engineering to security transforms security’s Prospect Theory graph into being resilient to reality, encouraging risk averse and strategic behaviors. A dearth of significant compromises at your organization now feels like a gain, as shown in the graph below, incentivizing the preservation of that gain – which can be achieved in part through some of the strategies and testing I enumerated in the Black Hat talk (see the “A Phoenix Rises” section).
As demonstrated in these graphs, by assuming “things will be pwned,” there is really only upside – so those who architect and secure their organizations’ systems can begin feeling less defeated and instead feel proud of their efforts. Implementing security “basics” can feel unrewarding, but successfully increasing the cost of attack deserves a sense of accomplishment. While I use the term “stopping” on the Gain Domain side of the graphs for brevity, I do not literally mean preventing attacks by skiddies or APTs from happening. Rather, I mean stopping these attacks from negatively impacting your organization, which is ultimately what matters.
Just as failure-based metrics, such as Time Between Failure, inhibit innovation and make systems more brittle, allowing an organization’s security contributors to languish in the Loss Domain similarly hinders resilience and, ultimately, organizational performance. Instead, a success-oriented metric like Time to Recovery promotes innovation and resilience – and operating in the Gain Domain incentivizes similar behaviors. Therefore, switching your reference point from “maintaining secure systems” to “things will be pwned” – despite seeming like trivial semantics – has the potential to subtly transform how your teams design systems.
Conclusion
The most effective way to engender cultural change is by changing what people do, not what they think. By altering the security mental model through this path of chaos, you change behavioral vectors – harnessing people’s embedded wetware – to promote the type of strategic, innovative, and perceptive decision-making that we so desperately need to build and maintain secure systems. Adopting the mantle of chaos through the philosophy of “things will be pwned” can reorient team perception to operate in a more strategic, less frenetic fashion by re-architecting their relationship between risk and reward.
</description>
            <atom:content type="html"><![CDATA[<p>There is a connection between my research into <a href="/blog/tags/behavioral-infosec/">behavioral models of information security</a> and <a href="/speaking/us-19-Shortridge-Forsgren-Controlled-Chaos-the-Inevitable-Marriage-of-DevOps-and-Security.pdf">my Black Hat USA talk</a> with Dr. Forsgren from last week regarding DevOps and the future of information security that might not be immediately obvious. However, it is a connection I believe is worth illuminating.</p>
<p>Adopting the “chaotic infosec” philosophy (the application of chaos engineering to information security) can influence those who build and secure systems to operate more like attackers – that is, to be more risk averse and to more quickly update their mental models given new inputs. This post will explore what exactly this means, and why it represents a valuable mindshift for organizational security.</p>
<h2 id="how-does-prospect-theory-apply-to-security">How does Prospect Theory apply to security?</h2>
<p>While I encourage you to read my prior post on applying <a href="/blog/posts/behavioral-models-infosec-prospect-theory">Prospect Theory to information security</a>, I recognize it is a bit long. I will thus summarize its relevant points here.</p>
<p>Humans make decisions by setting a reference point (their view of the status quo) against which they measure the decisions’ potential outcomes. Empirical evidence shows that people generally prefer options that provide a small but certain gain rather than options that provide a larger but uncertain gain. They also generally prefer options that provide a minuscule chance of losing nothing (with a larger chance of losing a lot) rather than options that provide a certain chance of losing less.</p>
<p>These results, which are modelled by <a href="https://en.wikipedia.org/wiki/Prospect_theory">Prospect Theory</a>, suggest that people are risk averse when making decisions with positive potential outcomes (when they are in the “Gain Domain”), and risk seeking when making decisions with negative potential outcomes (when they are in the “Loss Domain”). The further people get from their reference point (again, their perceived status quo), the more these trends are exaggerated. People who experience heavy losses become even more risk seeking in an attempt to jump out of the figurative hole, and people who experience substantial gains become even more risk averse to preserve their winnings.</p>
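<p>For readers who want the functional form behind this description, the canonical Tversky–Kahneman value function is a useful reference. The parameters below are their 1992 estimates – nothing in this post hinges on those exact numbers:</p>
<pre><code># The standard Tversky-Kahneman (1992) value function: concave over gains,
# convex and steeper over losses (loss aversion).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # Gain Domain: diminishing sensitivity
    return -lam * ((-x) ** beta)     # Loss Domain: losses loom larger than gains

for outcome in (100, 50, -50, -100):
    print(f"v({outcome:+d}) = {value(outcome):+.1f}")
</code></pre>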
<p><a href="/blog/posts/behavioral-models-infosec-prospect-theory/#infosec-ref-points">My hypothesis was</a> that defenders set their reference point based on a status quo where their organization is perceived to be secure and uncompromised (to simplify a bit). The nature of enterprise defense means that defenders will operate in the Loss Domain as a result – compromises are inevitable, as are failures to maintain a level of security posture that defenders find acceptable. Attackers, in contrast, operate in the Gain Domain – because their reference point is their current level of compromise, and, generally, they are likely to positively further their position of pwnage.</p>
<p><a href="/blog/posts/behavioral-models-infosec-prospect-theory/#infosec-examples">As a result</a>, defenders are more risk seeking, leading them to adopt more speculative solutions to attempt to reach their reference point of perfect prevention instead of adopting more basic solutions with larger probabilities of success but a perception of being less likely to stop a big, sexy attack. Attackers, on the other hand, are more risk averse, moving more cautiously to achieve their goals, opting first for inexpensive methods before moving to expensive methods.</p>
<h2 id="how-does-chaos-transform-reference-points">How does chaos transform reference points?</h2>
<p>Applying the principles of chaos engineering to information security, as outlined in <a href="/speaking/us-19-Shortridge-Forsgren-Controlled-Chaos-the-Inevitable-Marriage-of-DevOps-and-Security.pdf">my Black Hat USA 2019 talk</a>, changes the reference point for defenders. If you extend the chaos engineering philosophy of “things will fail” to “things will be pwned,” then the status quo for security teams becomes the assumption of compromise.</p>
<p>For traditional enterprise security programs, mapping Prospect Theory to security shows a state of being that is fragile to reality. This state anchors those involved in enterprise defense to the Loss Domain, as shown in the graph below, encouraging risk seeking and reactive behaviors.</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:75%;" src="/blog/img/prospect-theory-infosec-today.png" alt="A Prospect Theory graph representing how infosec lives in the Loss Domain today">
<p>In contrast, applying chaos engineering to security transforms security&rsquo;s Prospect Theory graph into being resilient to reality, encouraging risk averse and strategic behaviors. A dearth of significant compromises at your organization now feels like a gain, as shown in the graph below, incentivizing the preservation of that gain – which can be achieved in part through some of the <a href="/speaking/us-19-Shortridge-Forsgren-Controlled-Chaos-the-Inevitable-Marriage-of-DevOps-and-Security.pdf">strategies and testing</a> I enumerated in the Black Hat talk (see the “A Phoenix Rises” section).</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:75%;" src="/blog/img/prospect-theory-infosec-chaos.png" alt="A Prospect Theory graph for infosec when influenced by the chaos philosophy of things will be pwned">
<p>As demonstrated in these graphs, by assuming “things will be pwned,” there is really only upside – so those who architect and secure their organizations’ systems can begin feeling less defeated and instead feel proud of their efforts. Implementing security “basics” can feel unrewarding, but successfully increasing the cost of attack deserves a sense of accomplishment. While I use the term “stopping” on the Gain Domain side of the graphs for brevity, I do not literally mean preventing attacks by skiddies or APTs from happening. Rather, I mean stopping these attacks from negatively impacting your organization, which is ultimately what matters.</p>
<p>Just as failure-based metrics, such as Time Between Failure, inhibit innovation and make systems more brittle, allowing an organization&rsquo;s security contributors to languish in the Loss Domain similarly hinders resilience and, ultimately, organizational performance. Instead, a success-oriented metric like Time to Recovery promotes innovation and resilience – and operating in the Gain Domain incentivizes similar behaviors. Therefore, switching your reference point from “maintaining secure systems” to “things will be pwned” – despite seeming like trivial semantics – has the potential to subtly transform how your teams design systems.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The most effective way to engender cultural change is by changing what people do, not what they think. By altering the security mental model through this path of chaos, you change behavioral vectors – harnessing people’s embedded wetware – to promote the type of strategic, innovative, and perceptive decision-making that we so desperately need to build and maintain secure systems. Adopting the mantle of chaos through the philosophy of “things will be pwned” can reorient team perception to operate in a more strategic, less frenetic fashion by re-architecting their relationship between risk and reward.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Analyzing the Black Hat USA 2019 Business Hall</title>
            <link>https://kellyshortridge.com/blog/posts/analyzing-blackhatusa-business-hall-2019/</link>
            <pubDate>Fri, 02 Aug 2019 08:35:54 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/analyzing-blackhatusa-business-hall-2019/</guid>
            <description>Prerequisite plug that you should come see my talk with Dr. Nicole Forsgren at Black Hat next week (16:00 in South Pacific)!
What type of vendors are showing themselves off in the Business Hall? Are they mostly startups? Exactly like last year, 46% of the vendors in the Business Hall are startups backed by venture capital (VC) firms. Private companies represent only 13% of total vendors this year (vs. 17% last year), and there are far more acquired companies (&#34;M&amp;A&#34; within the chart) this year (8% vs. 5% in 2018). Some of them were acquired within the past year, which means they had already booked the booth in their name. However, others are companies acquired a while ago that still retain their brand name as distinct from their new owners. Over half of VC-backed vendors received fresh funding within the past year, up quite a bit from 44% last year. 80% were funded within the past two years, and 92% within the last three. Only nine companies have not been funded in the last three years, and those companies are very likely itching for exit (i.e. M&amp;A) opportunities now.
How many vendors from 2018 are exhibiting again in 2019? 67% of vendors who exhibited in 2018 are exhibiting in 2019, while 33% are not. Of the 31% of VC-backed security vendors who exhibited in 2018 but are not in 2019, some are vendors who were acquired (e.g. Twistlock, ProtectWise), while many of the others are well-known to be “long in the tooth” (e.g. Bromium, CounterTack). Others were still funded recently (e.g. Digital Guardian), so it’s possible they calculated a lack of ROI from the event.
An interesting statistic is that the VC-backed vendors who are exhibiting again received their latest VC infusion more recently than those who are not still exhibiting. For VC-backed vendors exhibiting again in 2019, the average delta between their last funding round and Black Hat USA 2018 (which was August 2018) was 482 days (a mean of April 2017). For VC-backed vendors not exhibiting again in 2019, the average was 622 days (a mean of November 2016). This might support the notion that vendors who have not received VC funding for a while are less financially healthy, and therefore unable to pony up the cash for another booth at Black Hat.
Outside of VC-backed vendors, around half of the publicly-traded non-security vendors from last year decided not to exhibit this year. 8 companies that were acquired decided not to exhibit this year (for instance, Phantom presumably now will be present at the Splunk booth). Around half of the privately-owned vendors from last year decided not to exhibit. All of the publicly-traded security vendors are exhibiting again this year, save for Gemalto (who knew they were being acquired by Thales). Perhaps not surprisingly, DarkMatter decided not to show their face at Black Hat this year. Can’t imagine why.
How many VCs are dedicated to investing in infosec? Out of the 168 investors who led a funding round in a security vendor exhibiting at the Black Hat USA 2019 Business Hall, 69% led only one round. This suggests that the number of investors &#34;dabbling&#34; in infosec is not decreasing, which is a shame. Ideally, investors educated on the space would make most of the funding decisions as, at least in theory, they can better sift through the FUD and buzzwords to determine which vendors actually fill a need in the market. There were 16 investors who led 4 deals or more, which is down from 21 last year. 25 venture capital firms led 3 deals or more, which is a convenient number to remember as a proxy for the number of VCs at least somewhat dedicated to infosec investing. Below is the Top Ten list, which includes all VC firms who led at least 5 funding rounds in this year’s exhibitors:
What venture stage are vendors &amp; how much capital are they raising? The distribution of exhibitors by funding stage looks a little different than last year. While the number of late-stage companies (Series C or later) stayed roughly constant, there was a notable spike in early-stage companies (Seed through Series B) -- from 53% last year to 61% of all VC-backed startups in 2019. This may reflect the increasing dollar-size of early-stage rounds over the past year, as younger companies can now more easily afford to purchase booths. Supporting evidence for my theory is that the median dollar amount raised in Series A deals increased 45% for exhibitors who raised after August 1, 2018 vs. those who raised before then. A counterpoint is that Seed and Series B median deal sizes were down from 2018 (-15% and -4%, respectively). Late-stage deals boomed in median dollar amount raised, increasing 32% for Series C, 40% for Series D, and 119% for Series E rounds.
The median amount raised across all stages increased 38%, from a median of $17.0 million in 2018 to $23.5 million in 2019. I use the median here since the mean is skewed by some absolute unit late stage deals since August 2018. Nevertheless, the mean amount raised across all stages increased 89%, from $21.6 million to $40.8 million.
How many Private Equity firms are backing companies on the floor? There are 29 companies backed by 20 total Private Equity firms in the Business Hall this year. Last year, there was roughly the same number of companies (30), but significantly more Private Equity firms (27). It is still true that the vast majority (85%, up from 81%) of Private Equity investors are one-off investors, and only 3 Private Equity firms invested in / acquired 2 or more security companies. Dominating that figure is Thoma Bravo, who has 5 portfolio companies exhibiting in 2019.
How fresh is the Innovation City? Innovation City now has 44 &#34;residents&#34; (up from 41 in 2018), and has a significantly higher concentration of VC-backed companies than the total population (73% vs. 46%). Last year, Innovation City was only 59% VC-backed, and part of the increase this year was at the expense of the proportion of privately-held but unfunded vendors.True to its promise as a bastion of “startups and emerging companies,” 91% of the VC-backed residents of Innovation City are Seed through Series B stage – way more than the 61% present in the total population. The median amount of the latest round of funding was $10.0 million among Innovation City residents, in contrast to the total population’s median of $23.5 million.
Fascinatingly, 12 of the 44 Innovation City residents were also residents last year – over a quarter of the population. I personally assumed they rotated vendors to keep things fresh for attendees attempting to scout the most cutting-edge vendors of the flock, but I was clearly pretty wrong.
How U.S.-centric are the vendors at Black Hat? The Business Hall is marginally less U.S.-centric in 2019 than in 2018, decreasing to 79% from 83% last year. Even the proportion of VC-backed companies that are U.S.-based decreased, from 86% in 2018 to 82% in 2019. The EU grew its portion of VC-backed companies, which now represent over half of all EU-based vendors. Perhaps this is the GDPR effect in action? It surprises me somewhat that the number of Israeli startups is still quite low proportionally, despite the substantial number of them funded each year. I suspect the reason is that many Israeli startups end up switching their headquarters to the U.S. (particularly California) once they&#39;ve reached the Series A stage or later. Thus, many of the startups founded in Israel may not be reflected in these numbers. Within the U.S., California makes up 43% of all U.S.-based vendors presenting, and over half (53%) of VC-backed startups in the Business Hall. After California, Massachusetts, New York, and the D.C.-region dominate, though Texas is ascendant.
Final note: I have no idea what is happening with Brexit (does anyone?), but given the UK has more representative companies than the definitely-EU-countries combined, I kept the UK separate.
A few notes on the data Vendors were retrieved from the Black Hat 2019 Business Hall Floorplan, and exclude any federal agencies, educational organizations, or nonprofits. I also excluded any companies in the Career Zone, as they are aiming to recruit security talent rather than sell products or services.
Location and funding information was collected through Crunchbase. There may be errors if a company’s Crunchbase page is not up to date.
</description>
            <atom:content type="html"><![CDATA[<p><em>Prerequisite plug that you should come see <a href="https://www.blackhat.com/us-19/briefings/schedule/index.html#controlled-chaos-the-inevitable-marriage-of-devops--security-15273">my talk with Dr. Nicole Forsgren</a> at Black Hat next week (16:00 in South Pacific)!</em></p>
<h2 id="what-type-of-vendors-are-showing-themselves-off-in-the-business-hall-are-they-mostly-startups">What type of vendors are showing themselves off in the Business Hall? Are they mostly startups?</h2>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%;" src="/blog/img/bhusa2019/vendors-type.png" alt="Chart of number of vendors by funding type for the Black Hat USA 2019 Vendor Hall">
Exactly like last year, 46% of the vendors in the Business Hall are startups backed by venture capital (VC) firms. Private companies represent only 13% of total vendors this year (vs. 17% last year), and there are far more acquired companies ("M&A" within the chart) this year (8% vs. 5% in 2018). Some of them were acquired within the past year, which means they already booked the booth in their name. However, others are companies acquired a while ago that still retain their brand name as distinct from their new owners. 
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2019/vc-by-age.png" alt="Chart of number of VC-backed companies by age of last raise">
Over half of VC-backed vendors received fresh funding within the past year, up quite a bit from 44% last year. We saw that 80% were funded within the past two years, and 92% were funded within the last three. Only nine companies have not been funded in the last three years, and those companies are very likely itching for exit (i.e. M&A) opportunities now.
<hr>
<h2 id="how-many-vendors-from-2018-are-exhibiting-again-in-2019">How many vendors from 2018 are exhibiting again in 2019?</h2>
<p>67% of vendors who exhibited in 2018 are exhibiting in 2019, while 33% are not. Of the 31% of VC-backed security vendors who exhibited in 2018 but are not in 2019, some are vendors who were acquired (e.g. Twistlock, ProtectWise), while many of the others are well-known to be &ldquo;long in the tooth&rdquo; (e.g. Bromium, CounterTack). Others were still funded recently (e.g. Digital Guardian), so it&rsquo;s possible they calculated a lack of ROI from the event.</p>
<p>An interesting statistic is that the VC-backed vendors who are exhibiting again received their latest VC infusion more recently than those who are not still exhibiting. For VC-backed vendors exhibiting again in 2019, the average delta between their last funding round and Black Hat USA 2018 (which was August 2018) was 482 days (a mean of April 2017). For VC-backed vendors not exhibiting again in 2019, the average was 622 days (a mean of November 2016). This might support the notion that vendors who have not received VC funding for a while are less financially healthy, and therefore unable to pony up the cash for another booth at Black Hat.</p>
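<p>As a quick back-of-the-envelope check (a minimal sketch, assuming Black Hat USA 2018 fell in early August 2018; the exact date here is approximated), subtracting each mean delta from the conference date reproduces those mean last-raise dates:</p>
<pre><code># Back-of-the-envelope: map mean "days since last raise" onto a calendar month.
# The conference date is an approximation (early August 2018).
from datetime import date, timedelta

bh_usa_2018 = date(2018, 8, 8)  # assumed conference date

for label, mean_delta_days in [("exhibiting again in 2019", 482),
                               ("not exhibiting in 2019", 622)]:
    mean_last_raise = bh_usa_2018 - timedelta(days=mean_delta_days)
    print(f"{label}: mean last raise around {mean_last_raise:%B %Y}")

# exhibiting again in 2019: mean last raise around April 2017
# not exhibiting in 2019: mean last raise around November 2016
</code></pre>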
<p>Outside of VC-backed vendors, around half of the publicly-traded non-security vendors from last year decided not to exhibit this year. 8 companies that were acquired decided not to exhibit this year (for instance, Phantom presumably now will be present at the Splunk booth). Around half of the privately-owned vendors from last year decided not to exhibit. All of the publicly-traded security vendors are exhibiting again this year, save for Gemalto (who knew they were being acquired by Thales). Perhaps not surprisingly, DarkMatter decided not to show their face at Black Hat this year. <a href="https://www.reuters.com/article/us-usa-cyber-alphabet-google/google-blocks-websites-certified-by-darkmatter-after-reuters-reports-idUSKCN1UR5JD">Can&rsquo;t imagine why</a>.</p>
<hr>
<h2 id="how-many-vcs-are-dedicated-to-investing-in-infosec">How many VCs are dedicated to investing in infosec?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2019/fund-numbers.png" alt="Chart of the number of VC funds by companies invested">
Out of the 168 investors who led a funding round in a security vendor exhibiting at the Black Hat USA 2019 Business Hall, 69% led only one round. This suggests that the number of investors "dabbling" in infosec is not decreasing, which is a shame. Ideally, investors educated on the space would make most of the funding decisions as, at least in theory, they can better sift through the FUD and buzzwords to determine which vendors actually fill a need in the market. 
<p>There were 16 investors who led 4 deals or more, which is down from 21 last year. 25 venture capital firms led 3 deals or more, which is a convenient number to remember as a proxy for the number of VCs at least somewhat dedicated to infosec investing. Below is the Top Ten list, which includes all VC firms who led at least 5 funding rounds in this year&rsquo;s exhibitors:</p>
<script src="https://gist.github.com/swagitda/e640aa3f4d1c740094292b956b61f68a.js"></script>
<hr>
<h2 id="what-venture-stage-are-vendors--how-much-capital-are-they-raising">What venture stage are vendors &amp; how much capital are they raising?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2019/vc-by-round.png" alt="Chart of the number of VC-backed vendors, by latest round">
The distribution of exhibitors by funding stage looks a little different than last year. While the number of late-stage companies (Series C or later) stayed roughly constant, there was a notable spike in early-stage companies (Seed through Series B) -- from 53% last year to 61% of all VC-backed startups in 2019. This may reflect the increasing dollar-size of early-stage rounds over the past year, as younger companies can now more easily afford to purchase booths. 
<p>Supporting evidence for my theory is that the median dollar amount raised in Series A deals increased 45% for exhibitors who raised after August 1, 2018 vs. those who raised before then. A counterpoint is that Seed and Series B median deal sizes were down from 2018 (-15% and -4%, respectively). Late-stage deals boomed in median dollar amount raised, increasing 32% for Series C, 40% for Series D, and 119% for Series E rounds.</p>
<p>The median amount raised across all stages increased 38%, from a median of $17.0 million in 2018 to $23.5 million in 2019. I use the median here since the mean is skewed by some absolute unit late stage deals since August 2018. Nevertheless, the mean amount raised across all stages increased 89%, from $21.6 million to $40.8 million.</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%" src="/blog/img/bhusa2019/vc-dollars-by-round.png" alt="Chart of the average size of funding rounds, by stage">
<hr>
<h2 id="how-many-private-equity-firms-are-backing-companies-on-the-floor">How many Private Equity firms are backing companies on the floor?</h2>
<p>There are 29 companies backed by 20 total Private Equity firms in the Business Hall this year. Last year, there was roughly the same number of companies (30), but significantly more Private Equity firms (27). It is still true that the vast majority (85%, up from 81%) of Private Equity investors are one-off investors, and only 3 Private Equity firms invested in / acquired 2 or more security companies. Dominating that figure is Thoma Bravo, who has 5 portfolio companies exhibiting in 2019.</p>
<script src="https://gist.github.com/swagitda/1c894e489c750649f025cc6eda277b33.js"></script>
<hr>
<h2 id="how-fresh-is-the-innovation-city">How fresh is the Innovation City?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2019/innovation-city.png" alt="Chart of Innovation City 'residents,' by type">
Innovation City now has 44 "residents" (up from 41 in 2018), and has a significantly higher concentration of VC-backed companies than the total population (73% vs. 46%). Last year, Innovation City was only 59% VC-backed, and part of the increase this year was at the expense of the proportion of privately-held but unfunded vendors.
<p>True to its promise as a bastion of &ldquo;startups and emerging companies,&rdquo; 91% of the VC-backed residents of Innovation City are Seed through Series B stage &ndash; way more than the 61% present in the total population. The median amount of the latest round of funding was $10.0 million among Innovation City residents, in contrast to the total population&rsquo;s median of $23.5 million.</p>
<p>Fascinatingly, 12 of the 44 Innovation City residents were also residents last year &ndash; over a quarter of the population. I personally assumed they rotated vendors to keep things fresh for attendees attempting to scout the most cutting-edge vendors of the flock, but I was clearly pretty wrong.</p>
<hr>
<h2 id="how-us-centric-are-the-vendors-at-black-hat">How U.S.-centric are the vendors at Black Hat?</h2>
<img style="float:right; max-width:60%; padding-left: 10px" src="/blog/img/bhusa2019/geo-all.png" alt="Chart of all companies, by geography">
The Business Hall is marginally less U.S.-centric in 2019 than in 2018, decreasing to 79% from 83% last year. Even the proportion of VC-backed companies that are U.S.-based decreased, from 86% in 2018 to 82% in 2019. The EU grew its portion of VC-backed companies, which now represent over half of all EU-based vendors. Perhaps this is the GDPR effect in action?
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bhusa2019/geo-vc.png" alt="Chart of VC-backed companies, by geography">
It surprises me somewhat that the number of Israeli startups is still quite low proportionally, despite the substantial number of them funded each year. I suspect the reason is that many Israeli startups end up switching their headquarters to the U.S. (particularly California) once they've reached the Series A stage or later. Thus, many of the startups founded in Israel may not be reflected in these numbers. 
<p>Within the U.S., California makes up 43% of all U.S.-based vendors presenting, and over half (53%) of VC-backed startups in the Business Hall. After California, Massachusetts, New York, and the D.C.-region dominate, though Texas is ascendant.</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%" src="/blog/img/bhusa2019/state-all.png" alt="Chart of all U.S. companies, by state">
<p>Final note: I have no idea what is happening with Brexit (does anyone?), but given the UK has more representative companies than the definitely-EU-countries combined, I kept the UK separate.</p>
<hr>
<h2 id="a-few-notes-on-the-data">A few notes on the data</h2>
<p>Vendors were retrieved from the Black Hat 2019 Business Hall Floorplan, and exclude any federal agencies, educational organizations, or nonprofits. I also excluded any companies in the Career Zone, as they are aiming to recruit security talent rather than sell products or services.</p>
<p>Location and funding information was collected through Crunchbase. There may be errors if a company&rsquo;s Crunchbase page is not up to date.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Darth Jar Jar: a Model for Infosec Innovation</title>
            <link>https://kellyshortridge.com/blog/posts/darth-jar-jar-model-infosec-innovation/</link>
            <pubDate>Sun, 05 May 2019 08:30:09 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/darth-jar-jar-model-infosec-innovation/</guid>
            <description>
Despite its seeming absurdity and superficiality, the theory of Darth Jar Jar can serve as a poignant parable for security innovation. For those unfamiliar, the notion of a Darth Jar Jar springs from a meticulously researched fan theory from the Star Wars subreddit. While I do not wish to spoil the breathtaking beauty of the fleshed-out theory, the underlying legend is one of an ostensibly bumbling fool who, in reality, is an insidious puppet master full of cunning and strength.
What lessons from Darth Jar Jar can we glean and apply to information security? I argue there are a few innovative strategies one can postulate on both the offense and defense sides, which I will explore within this post.
Darth Jar Jar for Offense For an attacker to fully become one with Darth Jar Jar, they must be content playing a fool. Just as Darth Jar Jar appeared to be clumsy to lead to his underestimation, attackers should likewise appear to be sloppy to lead to their underestimation by defenders. By conducting purposefully noisy and messy operations in one part of a system, attackers can direct defenders’ attention away from the attack that is truly transpiring right under their cocksure noses.
There already exist public examples of attackers embracing the spirit of Darth Jar Jar. One of my favorites is the misdirection employed by attackers back in 2013, in which they conducted a DDoS to cover their tracks when exfiltrating funds out of corporate accounts. A further three banks were hit by the same approach later that year – a low-powered DDoS attack that captured defenders’ attention, while fraudulent money transfers happened concurrently.
Additionally, the Nigerian prince scam is arguably a long-term ploy very much in the vein of Darth Jar Jar. Scammers specifically use poor spelling and grammatical errors to weed out victims who would be less trusting. Those who are not deterred by lacking language are more likely to believe in the tale the scammer is spinning – and thus more willingly part with their money. The clumsiness is intentional to lure precisely the right victims into the attacker’s maw.
I collected and brainstormed a few additional offense strategies for attackers who seek to replicate the brilliance of Darth Jar Jar:
Meesa so noisy! One option is to flood EDR systems with noise so that alert fatigue sets in on the part of the defender, letting subtler attacks slip through. The goal is to create noise for events that will receive a higher priority flag within the tool, such as purposefully clumsy kernel exploits – no team with a hundred critical-priority alerts (often called “P0s,” for priority-zero) would clear them in time to catch your sneakier attack.
Even cleverer would be to throw sloppy attacks over a period of time, while inserting the exact same quieter traffic that you plan to leverage in your real attack, so that the machine learning systems begin baselining it in. After all, Darth Jar Jar maintained his foolish façade for more than just one day!
EICAR-CAR Binks A common strategy in security detection is the notion of “alert on the first bad thing, then stop processing” so that there is not a flood of alerts. Thusly, an attacker could send EICAR in the same data stream or file as a real attack so that defenders see the EICAR alert and dismiss it. Then, the darker, sneakier underlying attack will go unnoticed – just as Darth Jar Jar used rambling babbling during the entirety of The Phantom Menace to obscure his devious speech that directly led to the dissolution of democracy.
Ex-squeeze the data outta the network During the rescue of Queen Amidala, Darth Jar Jar proved to be a master of diverting the attention of his enemies to take them by surprise. In the realm of critical infrastructure, an attacker could send a large volume of fake location update requests from SIMs, which would certainly attract the attention of defenders at any telecom provider.
While defenders remain distracted and scrambling, the attacker could then infiltrate the internal telecom network by plugging into an eNodeB and moving horizontally to compromise elements such as the MME or even the HSS (think: the master user database). Darth Jar Jar would absolutely approve of deflecting defender attention to outside of their own network to conceal the true threat from within.
Darth Jar Jar for Defense Just as attackers must play the fool to follow the wisdom of Darth Jar Jar, so must defenders. Luckily, many defenders are accidentally foolish, making purposeful foolishness all the more likely to lead attackers to underestimate defenders. While I usually strongly advise against #yolosec strategies, pretending to deploy a #yolosec strategy can perfectly replicate the sinister subterfuge of Darth Jar Jar.
There are fewer public examples in the realm of defense that highlight the gloriousness of a Darth Jar Jar approach. The closest might be the use of honeypots or canarytokens, such as a webserver that appears to be configured in a thoroughly #yolosec manner, creating irresistible temptation for attackers to explore them – and thus alert defenders to the fact that someone is very intrigued by their data.
In this vein, there are a few Darth Jar Jar-esque defense strategies I collected and brainstormed:
Oh, no! Yousa connection is slow Once you detect attackers within your network, begin throttling their connections to satellite-link speeds at random. The goal is to make it seem like an unstable system to test how the attacker reacts and alters their strategy – just as Darth Jar Jar “clumsily” handled a booma during the Battle of Grassy Plains to take down an armored assault tank.
Where wesa executing? Darth Jar Jar never revealed to his Jedi companions that they were succumbing to his duplicity. Likewise, defenders can avoid revealing that they have not only caught attackers, but are bamboozling them.
For instance, when defenders catch an attacker attempting to pwn a production instance, they can migrate that instance out of production, setting up all the same network connectivity so no change is perceived by the attacker. Then, defenders can begin logging and monitoring everything for later learning (and to inform broader investigation) while spinning up a replacement instance in production to avoid downtime.
Mmm, dissen loverly data Just as Darth Jar Jar tricked the Jedi into travelling through the core of a planet to make himself indispensable, defenders can trick attackers into going down a heavily monitored path to make it inevitable that they will be caught. For example, defenders can sprinkle cleartext AWS credentials in places that lead to alert traps. Or, defenders can “accidentally” reveal credentials attackers would need to access a seemingly sensitive system – when in reality, there is no valuable data residing in the system, just deep monitoring present.
Ultimately, no matter which side of the infosec game on which you fight, I hope that you will adopt the following mantra to let the dark side flow through you, elevating your #basic strategy to one of subtle manipulation and insidious deception – “What would Darth Jar Jar do?”
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/darth-jar-jar.jpg" alt="Image of Darth Jar Jar"></p>
<p>Despite its seeming absurdity and superficiality, the theory of Darth Jar Jar can serve as a poignant parable for security innovation. For those unfamiliar, the notion of a Darth Jar Jar springs from a <a href="https://www.reddit.com/r/StarWars/comments/3qvj6w/theory_jar_jar_binks_was_a_trained_force_user/">meticulously researched fan theory</a> from the Star Wars subreddit. While I do not wish to spoil the breathtaking beauty of the fleshed-out theory, the underlying legend is one of an ostensibly bumbling fool who, in reality, is an insidious puppet master full of cunning and strength.</p>
<p>What lessons from Darth Jar Jar can we glean and apply to information security? I argue there are a few innovative strategies one can postulate on both the offense and defense sides, which I will explore within this post.</p>
<hr>
<h2 id="darth-jar-jar-for-offense">Darth Jar Jar for Offense</h2>
<p>For an attacker to fully become one with Darth Jar Jar, they must be content playing a fool. Just as Darth Jar Jar appeared to be clumsy to lead to his underestimation, attackers should likewise appear to be sloppy to lead to their underestimation by defenders. By conducting purposefully noisy and messy operations in one part of a system, attackers can direct defenders’ attention away from the attack that is truly transpiring right under their cocksure noses.</p>
<p>There already exist public examples of attackers embracing the spirit of Darth Jar Jar. One of my favorites is the misdirection employed by <a href="https://krebsonsecurity.com/2013/02/ddos-attack-on-bank-hid-900000-cyberheist/">attackers back in 2013</a>, in which they conducted a DDoS to cover their tracks when exfiltrating funds out of corporate accounts. A further three banks were hit by <a href="https://www.cnet.com/news/cybercrooks-use-ddos-attacks-to-mask-theft-of-banks-millions/">the same approach later that year</a> – a low-powered DDoS attack that captured defenders’ attention, while fraudulent money transfers happened concurrently.</p>
<p>Additionally, the Nigerian prince scam is arguably a long-term ploy very much in the vein of Darth Jar Jar. Scammers specifically use poor spelling and grammatical errors to weed out victims who would be less trusting. Those who are not deterred by lacking language are more likely to believe in the tale the scammer is spinning – and thus more willingly part with their money. The clumsiness is intentional to lure precisely the right victims into the attacker’s maw.</p>
<p>I collected and brainstormed a few additional offense strategies for attackers who seek to replicate the brilliance of Darth Jar Jar:</p>
<h3 id="meesa-so-noisy">Meesa so noisy!</h3>
<p>One option is to flood EDR systems with noise so that alert fatigue sets in on the part of the defender, letting subtler attacks slip through. The goal is to create noise for events that will receive a higher priority flag within the tool, such as purposefully clumsy kernel exploits – no team with a hundred critical-priority alerts (often called &ldquo;P0s,&rdquo; for priority-zero) would clear them in time to catch your sneakier attack.</p>
<p>Even cleverer would be to throw sloppy attacks over a period of time, while inserting the exact same quieter traffic that you plan to leverage in your real attack, so that the machine learning systems begin baselining it in. After all, Darth Jar Jar maintained his foolish façade for more than just one day!</p>
<h3 id="eicar-car-binks">EICAR-CAR Binks</h3>
<p>A common strategy in security detection is the notion of “alert on the first bad thing, then stop processing” so that there is not a flood of alerts. Thusly, an attacker could send EICAR in the same data stream or file as a real attack so that defenders see the EICAR alert and dismiss it. Then, the darker, sneakier underlying attack will go unnoticed – just as Darth Jar Jar used rambling babbling during the entirety of <em>The Phantom Menace</em> to obscure his devious speech that directly led to the dissolution of democracy.</p>
<h3 id="ex-squeeze-the-data-outta-the-network">Ex-squeeze the data outta the network</h3>
<p>During the rescue of Queen Amidala, Darth Jar Jar proved to be a master of diverting the attention of his enemies to take them by surprise. In the realm of critical infrastructure, an attacker could send a large volume of fake location update requests from SIMs, which would certainly attract the attention of defenders at any telecom provider.</p>
<p>While defenders remain distracted and scrambling, the attacker could then infiltrate the internal telecom network by plugging into <a href="https://en.wikipedia.org/wiki/ENodeB">an eNodeB</a> and moving horizontally to compromise elements such as <a href="https://en.wikipedia.org/wiki/System_Architecture_Evolution#MME_(Mobility_Management_Entity)">the MME</a> or even <a href="https://medium.com/@AlepoTech/home-subscriber-server-hss-82470d3f332">the HSS</a> (think: the master user database). Darth Jar Jar would absolutely approve of deflecting defender attention to outside of their own network to conceal the true threat from within.</p>
<hr>
<h2 id="darth-jar-jar-for-defense">Darth Jar Jar for Defense</h2>
<p>Just as attackers must play the fool to follow the wisdom of Darth Jar Jar, so must defenders. Luckily, many defenders are accidentally foolish, making purposeful foolishness all the more likely to lead attackers to underestimate defenders. While I usually strongly advise against #yolosec strategies, pretending to deploy a #yolosec strategy can perfectly replicate the sinister subterfuge of Darth Jar Jar.</p>
<p>There are fewer public examples in the realm of defense that highlight the gloriousness of a Darth Jar Jar approach. The closest might be the use of honeypots or canarytokens, such as a webserver that appears to be configured in a thoroughly #yolosec manner, creating irresistible temptation for attackers to explore them – and thus alert defenders to the fact that someone is very intrigued by their data.</p>
<p>In this vein, there are a few Darth Jar Jar-esque defense strategies I collected and brainstormed:</p>
<h3 id="oh-no-yousa-connection-is-slow">Oh, no! Yousa connection is slow</h3>
<p>Once you detect attackers within your network, begin throttling their connections to satellite-link speeds at random. The goal is to make it seem like an unstable system to test how the attacker reacts and alters their strategy – just as Darth Jar Jar “clumsily” handled a booma during the Battle of Grassy Plains to take down an armored assault tank.</p>
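<p>For the curious, a minimal sketch of one way to do this on a Linux egress point with tc/netem follows. It assumes root access, the iproute2 tooling, an interface named <code>eth0</code>, and a placeholder attacker IP; a real deployment would hook into your detection pipeline and clean up the qdiscs afterwards.</p>
<pre><code># A minimal sketch (not production-ready): degrade egress traffic toward a suspected
# attacker IP to satellite-like latency using Linux tc/netem. Assumes root, iproute2,
# and an interface named "eth0"; the IP below is a placeholder from TEST-NET-3.
import random
import subprocess

IFACE = "eth0"
ATTACKER_IP = "203.0.113.66"   # hypothetical flagged address

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-time setup: a prio qdisc whose third band we reserve for "punished" flows.
run(["tc", "qdisc", "add", "dev", IFACE, "root", "handle", "1:", "prio"])

# Attach netem to band 3 with a randomly chosen, satellite-ish delay and a bit of loss.
delay_ms = random.randint(500, 900)
jitter_ms = random.randint(50, 250)
run(["tc", "qdisc", "add", "dev", IFACE, "parent", "1:3", "handle", "30:",
     "netem", "delay", f"{delay_ms}ms", f"{jitter_ms}ms", "loss", "1%"])

# Steer only traffic destined for the flagged IP into the degraded band.
run(["tc", "filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:0",
     "prio", "3", "u32", "match", "ip", "dst", f"{ATTACKER_IP}/32", "flowid", "1:3"])
</code></pre>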
<h3 id="where-wesa-executing">Where wesa executing?</h3>
<p>Darth Jar Jar never revealed to his Jedi companions that they were succumbing to his duplicity. Likewise, defenders can avoid revealing that they have not only caught attackers, but are bamboozling them.</p>
<p>For instance, when defenders catch an attacker attempting to pwn a production instance, they can migrate that instance out of production, setting up all the same network connectivity so no change is perceived by the attacker. Then, defenders can begin logging and monitoring everything for later learning (and to inform broader investigation) while spinning up a replacement instance in production to avoid downtime.</p>
<h3 id="mmm-dissen-loverly-data">Mmm, dissen loverly data</h3>
<p>Just as Darth Jar Jar tricked the Jedi into travelling through the core of a planet to make himself indispensable, defenders can trick attackers into going down a heavily monitored path to make it inevitable that they will be caught. For example, defenders can sprinkle cleartext AWS credentials in places that lead to alert traps. Or, defenders can “accidentally” reveal credentials attackers would need to access a seemingly sensitive system – when in reality, there is no valuable data residing in the system, just deep monitoring present.</p>
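<p>As a rough sketch of how the credential-trap half could be wired up (assuming CloudTrail is enabled, and that the decoy access key below is a placeholder attached to a no-permission IAM user; services like canarytokens handle this far more gracefully), a small poller can page you whenever the planted key gets used:</p>
<pre><code># A minimal sketch, assuming boto3 credentials with cloudtrail:LookupEvents permission.
# The decoy access key ID below is a placeholder: it belongs to an IAM user with no
# permissions and no real data behind it, planted somewhere tempting.
from datetime import datetime, timedelta, timezone
import boto3

DECOY_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"  # hypothetical planted credential

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Any API call signed with the decoy key is, by construction, someone touching the trap.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "AccessKeyId",
                       "AttributeValue": DECOY_ACCESS_KEY_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    MaxResults=50,
)

for event in resp.get("Events", []):
    # Hand these off to whatever paging/alerting pipeline you already trust.
    print(f"TRAP TRIPPED: {event['EventTime']} {event.get('EventName')} "
          f"by {event.get('Username', 'unknown')}")
</code></pre>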
<hr>
<p>Ultimately, no matter which side of the infosec game on which you fight, I hope that you will adopt the following mantra to let the dark side flow through you, elevating your #basic strategy to one of subtle manipulation and insidious deception – “What would Darth Jar Jar do?”</p>
<p><img src="/blog/img/darth-jar-jar.gif" alt="Gif of Darth Jar Jar peeking through Vader&amp;rsquo;s helmet"></p>
]]></atom:content>
        </item>
        
        <item>
            <title>My Reflections on the 2019 RSA Conference</title>
            <link>https://kellyshortridge.com/blog/posts/my-reflections-on-rsac-2019/</link>
            <pubDate>Wed, 13 Mar 2019 21:28:01 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/my-reflections-on-rsac-2019/</guid>
            <description>Artist’s rendition of how I felt by Thursday morning of the con
Reflecting on my existential crisis during RSAC, I tried to distill what exactly was so troublesome about the conference. The expo floor was less two separate halls, as per years prior, and more like Mordor, with a befouled sprawl connecting Minas Morgul and the Black Gate — but instead of orcs and Uruk-hai, vendors crammed the hallways that used to serve as open breathing space.
The RSAC Innovation Sandbox’s “natural” lighting (featuring me and my buzzword bingo card) The color of RSAC itself is royal purple — fitting as hosts of this ostentatious banquet — with a splash of cyber-turquoise. The vendors left an abstract expressionist mark bursting with oceanic blues, imperial reds, dramatic gunmetal, and the occasional accent of atomic tangerine. Artfully abstract globes, honeycombs, and glowing orbs were juxtaposed with elegant waves and gently curved lines, mixing familiar shapes with the futuristic mouthfeel of “cyber.” There was a dizzying sense of perpetual motion, like a spinning top just barely staying upright. Quiet was seemingly anathema — the pestilential cacophony of canned speeches and whizbangs and desperate pleas for attention was unavoidable.
In this vainglorious feast by the information security industry in dedication to itself, experiencing an existential crisis is perhaps an inevitability. The ultimate purpose of information security — ensuring an organization can thrive despite digital risks — was obscured beneath the thick layers of cheap pens dimly glinting, XL men’s t-shirts boldly proclaiming trivialities, and the hollow promises of soothing your fears one badge scan at a time.
But this alone can be rationalized away and not lead to a crisis of faith in the importance of our collective work. These criticisms are true for most trade shows; they are inherently loud and sales-y. What makes the RSA Conference burrow into the brain like a parasite greedily devouring all hope is that the diabolical song and dance is not helping beneficial solutions fall into the right hands.
RSAC is largely successful due to network effects — because everyone seemingly attends, everyone else attends — making it a good place to catch up with a lot of people at once. This reality makes its issues even more problematic; the four that personally drove my distress are:
1. Hyperbolization of FUD, 2. Disconnect between products &amp; personas, 3. Misunderstanding of personas, 4. Promotion of security as a blocker vs. a compromiser. Hyperbolization of FUD In a banquet of distasteful and inane catch phrases and bullet points, one gave me acid reflux like no other—it exemplifies what I am calling the hyperbolization of fear, uncertainty, and doubt (“FUD”). A large EDR vendor’s advertisements around town roughly stated, “It takes a lifetime to build your career, and 5 seconds to lose it.”
There was no shortage of FUD around threats, but this tagline directly stokes the fear that someone’s professional life will be over if they let an attacker slip by. It’s tasteless and mostly incorrect, and I find it appalling to imply that someone’s life will be ruined if they don’t buy your product. I also find it lazy; if you can’t sell your product without scaring people to such a degree, perhaps you should make your product inherently matter more to them.
Part of the issue might be that the industry is generally poor at realistic threat modelling, making these doom and gloom buzzphrases compelling. While it’s too much to hope that all vendors could help their customers threat model responsibly, I believe it’s a reasonable expectation to not aim directly at personal fears and not make up scenarios that lack precedent in history or logic. If a vendor must create dystopian sci-fi to justify use of their product, they are doing it all wrong.
Disconnect between products &amp; personas My first supposition with regards to the persona problem was a disconnect between buyer and user personas. Specifically, that narratives around usability seemed to be keeping the buyer (CISO) in mind, rather than the people who would actually use it (individual contributors, or “ICs”). Upon reflection, I think it’s worse than this — that there is a disconnect between the buyer personas and the vendors’ target audience.
By this I mean that the marketing tactics, highlighted characteristics, even the words used all feel targeted towards other sales and marketing professionals, or perhaps venture capitalists (“VCs”). During the conference, I spoke with a large swath of practitioners ranging from CISOs to managers to senior ICs to more junior ICs, and none of them found the booths enlightening or particularly worthy of their attention.
This begets the question — for whom is all this marketing? My prior assumption was that vendors would target the buyer persona (generally the CISO or director/manager-level) at the very least, since they are the people spending the budget. Ideally vendors would also target ICs, who are the ones actually using the product, and whose thumbs up is generally required by the buyer.
But given none of those personas felt addressed by vendors at the RSAC expo hall, who did? To be perfectly frank, I’m not entirely convinced of the answer, but my hunch is that:
1. VCs are dictating marketing/value propositions too much, particularly given they are generally disconnected from customer viewpoints; 2. Marketing people are generally too disconnected from customers &amp; don’t really understand the relevant personas; 3. Infosec is generally horrendously bad at understanding the spectrum of relevant user personas. Misunderstanding of personas Point #3 above leads to this observation, which is that vendors overall seem to egregiously misunderstand the personas for which they are allegedly building their products.
As I discussed in my TechCrunch article, vendors are building tools and determining the specific problem being solved after the fact — often leading to the need to convince customers that the vendor-invented problem is relevant to their organization. This trend of focusing on tech rather than customer problems extends, I think, to vendor-invented personas, as well.
If you were judging solely based on the RSAC vendor floor this year, you might think the most important user persona in infosec is “SecOps.” There’s little differentiation visible in this “persona” between a SOC analyst straight out of school analyzing low-priority events vs. a SecOps engineer who writes automation scripts for things like distributed alerting — not to mention a SecOps manager vs a SOC manager and the chromatic variation around all of these titles. And, is their purview just DFIR or other responsibilities, too?
This isn’t to say generalized personas aren’t useful — but I’m wondering if vendors are now treating SecOps as the “person who deals with security events.” If so, then most people in infosec are kind of SecOps depending on how you define “event,” which would thus render SecOps a meaningless term given how different everyone’s workflows are across the range of roles.
Additionally, there were impressively lazy attempts at catering to the DevOps and so-called SecDevOps personas. Security vendors seem to think that making their product available via API means they “integrate with DevOps workflows,” which is either amazingly apathetic or oblivious.
There was also lip service paid to working with existing CI/CD pipelines, but then security vendors still expect DevOps people to go through 20 additional hoops, not realizing that they simply won’t. For instance, please don’t claim you are DevOps-friendly while asking them to install a kernel module on every machine they actively use.
This leads to my assumption (and into my next point) that security doesn’t realize how immaterial they are relative to a team (DevOps) that can justifiably argue that they support revenue-generating activities. And, what’s worse, vendors provide little support for their customers to fight for security’s inclusion in the conversation.
It is as if vendors expect that by saying “we secure the development lifecycle!”, security practitioners can convince the organization to install their product. This notion of a security team having the keys to an organizational steam roller is, of course, pure fantasy.
Promotion of security as a blocker vs. a compromiser Finally, I think infosec is largely ignoring an important emerging shift, which I believe is inevitable — the transition from security as the “no people” to enablers of the business. Enterprise security only matters to the extent that it is helping preserve the company’s ongoing operations. Anything beyond that feels largely like intellectual vanity to me (this is precisely why I’ve been evangelizing the notion for a few years now).
How does this play into RSAC? I argue that the most successful security vendors of late are about removing barriers, in contrast to many security people believing barriers are worthwhile aids in their infosec crusade (and I also suspect that creating organizational barriers provides significant vocational fulfillment among a chunk of security professionals).
To wit, zScaler consolidated old products and made it easier for the organization to access resources without the tedious VPN dance. Okta likewise streamlined access and made a lot of enterprise workflows simpler. Both are now trading at delectably rich multiples (both above 20x EV/Revenue) — a victory even beyond the fact that they are some of the few information security startups to IPO in recent years.
Ultimately, anyone can say “no” to something — but just saying “no” isn’t actually solving a problem. Figuring out a compromise, like preserving or even improving UX while still ensuring an organization’s security, is a hard problem — the type of problem which should be most intellectually fulfilling.
Security will keep being dismissed by other parts of the organization until it lets go of finding fulfillment through security righteousness and instead finds fulfillment by finding an elegant way to enable the business while still reducing its risk.
One of my favorite examples to cite is John “Four” Flynn and the tale of implementing 2FA on SSH at Facebook. The security team conducted product-like user research interviews across engineering teams to figure out how SSH was being used — essentially mapping user workflows. The security team’s goal was to add 2FA to SSH, but instead of ramming it through like petulant ayatollahs, they sought to understand the user perspective and determine a solution that would not add friction for engineers.
It was a hard problem, but with a huge payoff; now the engineering teams can trust that security isn’t just trying to make their lives more difficult for sport or quasi-religious fervor. This means the relationship can be more like a partnership, rather than a political battle — the sort of battle security practitioners allegedly detest (likely because they tend to lose it).
Conclusion These issues are not engendered by one constituent of the industry — they just all happen to notably bubble up like a tar pit burping methane in the harsh fluorescent spotlight of RSAC. It is the failure of culture among practitioners, vendors failing to understand their customers’ needs, founders going for cash grabs, and VCs encouraging the noxious sleaze and bamboozlery.
All of this comes, perhaps, from a reticence towards doing the harder, right thing and instead taking the easier path — the one that harvests confusion and fear to seize success that mediocrity could not earn otherwise.
I speak a lot about incentives, and, in fairness, there is an abundance of incentives rewarding this mediocrity, but far fewer promoting the more difficult path. CISOs don’t want to speak out against vendors and, in fact, benefit when they are seen as essential in navigating not just the threat landscape, but the vendor landscape as well. Vendors can still find decent M&amp;A exits that make founders and investors happy enough, even if they sell solely within their professional networks and don’t ever find true product/market fit.
And if VCs make enough money from this buffoonery, why should they bother to understand the customer personas better to inform their investment choices? Many VCs have CISO friends, but I know from private conversations that these CISO friends often view the VCs as useful fools who can help them gain advisory or board positions.
If it sounds hopeless, well… I’m not sure it isn’t. But I refuse to give in to nihilism. There are absolutely those of us who are fighting to change things, but there is a lot of entrenched power which benefits from keeping these dynamics the way they are in information security.
It will take those willing to risk their power to speak out in the truest form of thought leadership there is. Speaking into a warm and fuzzy echo chamber isn’t thought leadership; bravely challenging the status quo, armed with evidence, is.
We can keep the corporate-friendly cerulean and navy aesthetic, the blustering loudness, and yes, even the free pens, but we need to dedicate these efforts not to the thrill of our industry finally being considered “cool,” but to solving hard problems and protecting organizations in the way they need.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/bad-cyberart-12.jpeg" alt="Very cyber-y skull"><em>Artist’s rendition of how I felt by Thursday morning of the con</em></p>
<p>Reflecting on my existential crisis during RSAC, I tried to distill what exactly was so troublesome about the conference. The expo floor was less two separate halls, as per years prior, and more like Mordor, with a befouled sprawl connecting Minas Morgul and the Black Gate — but instead of orcs and Uruk-hai, vendors crammed the hallways that used to serve as open breathing space.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/infosec-buzzword-bingo.jpg" alt="Kelly Shortridge holding their infosec buzzword bingo card">
	<figcaption>The RSAC Innovation Sandbox’s “natural” lighting (featuring me and <a href="/blog/posts/infosec-startup-buzzword-bingo-2019-edition">my buzzword bingo card</a>)</figcaption>
</figure>
The color of RSAC itself is royal purple — fitting as hosts of this ostentatious banquet — with a splash of cyber-turquoise. The vendors left an abstract expressionist mark bursting with oceanic blues, imperial reds, dramatic gunmetal, and the occasional accent of atomic tangerine. Artfully abstract globes, honeycombs, and glowing orbs were juxtaposed with elegant waves and gently curved lines, mixing familiar shapes with the futuristic mouthfeel of “cyber.”
<p>There was a dizzying sense of perpetual motion, like a spinning top just barely staying upright. Quiet was seemingly anathema — the pestilential cacophony of canned speeches and whizbangs and desperate pleas for attention was unavoidable.</p>
<p>In this vainglorious feast by the information security industry in dedication to itself, experiencing an existential crisis is perhaps an inevitability. The ultimate purpose of information security — ensuring an organization can thrive despite digital risks — was obscured beneath the thick layers of cheap pens dimly glinting, XL men’s t-shirts boldly proclaiming trivialities, and the hollow promises of soothing your fears one badge scan at a time.</p>
<p>But this alone can be rationalized away and not lead to a crisis of faith in the importance of our collective work. These criticisms are true for most trade shows; they are inherently loud and sales-y. What makes the RSA Conference burrow into the brain like a parasite greedily devouring all hope is that the diabolical song and dance is not helping beneficial solutions fall into the right hands.</p>
<p>RSAC is largely successful due to network effects — because everyone seemingly attends, everyone else attends — making it a good place to catch up with a lot of people at once. This reality makes its issues even more problematic; the four that personally drove my distress are:</p>
<ol>
<li><a href="#fud">Hyperbolization of FUD</a></li>
<li><a href="#disconnect">Disconnect between products &amp; personas</a></li>
<li><a href="#personas">Misunderstanding of personas</a></li>
<li><a href="#blocker">Promotion of security as a blocker vs. a compromiser</a></li>
</ol>
<hr>
<h2 id="a-namefudahyperbolization-of-fud"><a name="fud"></a>Hyperbolization of FUD</h2>
<p>In a banquet of distasteful and inane catch phrases and bullet points, one gave me acid reflux like no other—it exemplifies what I am calling the hyperbolization of fear, uncertainty, and doubt (“FUD”). A large EDR vendor’s advertisements around town roughly stated, <em>“It takes a lifetime to build your career, and 5 seconds to lose it.”</em></p>
<p>There was no shortage of FUD around threats, but this tagline directly stokes the fear that someone’s professional life will be over if they let an attacker slip by. It’s tasteless and mostly incorrect, and I find it appalling to imply that someone’s life will be ruined if they don’t buy your product. I also find it lazy; if you can’t sell your product without scaring people to such a degree, perhaps you should make your product inherently matter more to them.</p>
<p>Part of the issue might be that the industry is generally poor at realistic threat modelling, making these doom and gloom buzzphrases compelling. While it’s too much to hope that all vendors could help their customers threat model responsibly, I believe it’s a reasonable expectation to not aim directly at personal fears and not make up scenarios that lack precedent in history or logic. If a vendor must create dystopian sci-fi to justify use of their product, they are doing it all wrong.</p>
<h2 id="a-namedisconnectadisconnect-between-products--personas"><a name="disconnect"></a>Disconnect between products &amp; personas</h2>
<p>My first supposition with regards to the persona problem was a disconnect <a href="https://blog.hubspot.com/marketing/buyer-persona-definition-under-100-sr">between buyer</a> and <a href="https://blog.hubspot.com/marketing/buyer-persona-definition-under-100-sr">user personas</a>. Specifically, that narratives around usability seemed to be keeping the buyer (CISO) in mind, rather than the people who would actually use it (individual contributors, or “ICs”). Upon reflection, I think it’s worse than this — that there is a disconnect between the buyer personas and the vendors’ target audience.</p>
<p>By this I mean that the marketing tactics, highlighted characteristics, even the words used all feel targeted towards other sales and marketing professionals, or perhaps venture capitalists (“VCs”). During the conference, I spoke with a large swath of practitioners ranging from CISOs to managers to senior ICs to more junior ICs, and none of them found the booths enlightening or particularly worthy of their attention.</p>
<p>This begets the question — for whom is all this marketing? My prior assumption was that vendors would target the buyer persona (generally the CISO or director/manager-level) at the very least, since they are the people spending the budget. Ideally vendors would also target ICs, who are the ones actually using the product, and whose thumbs up is generally required by the buyer.</p>
<p>But given <em>none</em> of those personas felt addressed by vendors at the RSAC expo hall, who did? To be perfectly frank, I’m not entirely convinced of the answer, but my hunch is that:</p>
<ol>
<li>VCs are dictating marketing/value propositions too much, particularly given they are generally disconnected from customer viewpoints</li>
<li>Marketing people are generally too disconnected from customers &amp; don’t really understand the relevant personas</li>
<li>Infosec is generally horrendously bad at understanding the spectrum of relevant user personas</li>
</ol>
<h2 id="a-namepersonasamisunderstanding-of-personas"><a name="personas"></a>Misunderstanding of personas</h2>
<p>Point #3 above leads to this observation, which is that vendors overall seem to egregiously misunderstand the personas for which they are allegedly building their products.</p>
<p>As I discussed in <a href="https://techcrunch.com/2019/02/13/the-infosec-reckoning-has-arrived/">my TechCrunch article</a>, vendors are building tools and determining the specific problem being solved after the fact — often leading to the need to convince customers that the vendor-invented problem is relevant to their organization. This trend of focusing on tech rather than customer problems extends, I think, to vendor-invented personas, as well.</p>
<p>If you were judging solely based on the RSAC vendor floor this year, you might think the most important user persona in infosec is “SecOps.” There’s little differentiation visible in this “persona” between a SOC analyst straight out of school analyzing low-priority events vs. a SecOps engineer who writes automation scripts for things like <a href="https://slack.engineering/distributed-security-alerting-c89414c992d6">distributed alerting</a> — not to mention a SecOps manager vs a SOC manager and the chromatic variation around all of these titles. And, is their purview <a href="https://medium.com/@sroberts/introduction-to-dfir-d35d5de4c180">just DFIR</a> or other responsibilities, too?</p>
<p>This isn’t to say generalized personas aren’t useful — but I’m wondering if vendors are now treating SecOps as the “person who deals with security events.” If so, then most people in infosec are kind of SecOps depending on how you define “event,” which would thus render SecOps a meaningless term given how different everyone’s workflows are across the range of roles.</p>
<p>Additionally, there were impressively lazy attempts at catering to the DevOps and so-called SecDevOps personas. Security vendors seem to think that making their product available via API means they “integrate with DevOps workflows,” which is either amazingly apathetic or oblivious.</p>
<p>There was also lip service paid to working with existing CI/CD pipelines, but then security vendors still expect DevOps people to go through 20 additional hoops, not realizing that they simply won’t. For instance, please don’t claim you are DevOps-friendly while asking them to install a kernel module on every machine they actively use.</p>
<p>This leads to my assumption (and into my next point) that security doesn’t realize how immaterial they are relative to a team (DevOps) that can justifiably argue that they support revenue-generating activities. And, what’s worse, vendors provide little support for their customers to fight for security’s inclusion in the conversation.</p>
<p>It is as if vendors expect that by saying “we secure the development lifecycle!”, security practitioners can convince the organization to install their product. This notion of a security team having the keys to an organizational steam roller is, of course, pure fantasy.</p>
<h2 id="a-nameblockerapromotion-of-security-as-a-blocker-vs-a-compromiser"><a name="blocker"></a>Promotion of security as a blocker vs. a compromiser</h2>
<p>Finally, I think infosec is largely ignoring an important emerging shift, which I believe is inevitable — the transition from security as the “no people” to enablers of the business. Enterprise security only matters to the extent that it is helping preserve the company’s ongoing operations. Anything beyond that feels largely like intellectual vanity to me (this is precisely why I’ve been evangelizing the notion for <a href="/blog/posts/security-as-a-product/">a few years now</a>).</p>
<p>How does this play into RSAC? I argue that the most successful security vendors of late are about removing barriers, in contrast to many security people believing barriers are worthwhile aids in their infosec crusade (and I also suspect that creating organizational barriers provides significant vocational fulfillment among a chunk of security professionals).</p>
<p>To wit, Zscaler consolidated old products and made it easier for the organization to access resources without the tedious VPN dance. Okta likewise streamlined access and made a lot of enterprise workflows simpler. Both are now trading at delectably rich multiples (both above 20x <a href="https://www.investopedia.com/terms/e/ev-revenue-multiple.asp">EV/Revenue</a>) — a victory even beyond the fact that they are some of the few information security startups to IPO in recent years.</p>
<p>Ultimately, anyone can say “no” to something — but just saying “no” isn’t actually solving a problem. Figuring out a compromise, like preserving or even improving UX while still ensuring an organization’s security, is a hard problem — the type of problem which should be most intellectually fulfilling.</p>
<p>Security will keep being dismissed by other parts of the organization until it lets go of finding fulfillment through security righteousness and instead finds it in devising elegant ways to enable the business while still reducing its risk.</p>
<p>One of my favorite examples to cite is John “Four” Flynn and <a href="https://www.youtube.com/watch?v=pY4FBGI7bHM">the tale of implementing 2FA on SSH at Facebook</a>. The security team conducted product-like user research interviews across engineering teams to figure out how SSH was being used — essentially mapping user workflows. The security team’s goal was to add 2FA to SSH, but instead of ramming it through like petulant ayatollahs, they sought to understand the user perspective and determine a solution that would not add friction for engineers.</p>
<p>It was a hard problem, but with a huge payoff; now the engineering teams can trust that security isn’t just trying to make their lives more difficult for sport or quasi-religious fervor. This means the relationship can be more like a <em>partnership</em>, rather than a political battle — the sort of battle security practitioners allegedly detest (likely because they tend to lose it).</p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>These issues are not engendered by one constituent of the industry — they just all happen to notably bubble up like a tar pit burping methane in the harsh fluorescent spotlight of RSAC. It is the failure of culture among practitioners, vendors failing to understand their customers’ needs, founders going for cash grabs, and VCs encouraging the noxious sleaze and bamboozlery.</p>
<p>All of this comes, perhaps, from a reticence towards doing the harder, right thing and instead taking the easier path — the one that harvests confusion and fear to seize success that mediocrity could not earn otherwise.</p>
<p><a href="/speaking/index.html">I speak a lot</a> about incentives, and, in fairness, there is an abundance of incentives rewarding this mediocrity, but far fewer promoting the more difficult path. CISOs don’t want to speak out against vendors and, in fact, benefit when they are seen as essential in navigating not just the threat landscape, but the vendor landscape as well. Vendors can still find decent M&amp;A exits that make founders and investors happy enough, even if they sell solely within their professional networks and don’t ever find true product/market fit.</p>
<p>And if VCs make enough money from this buffoonery, why should they bother to understand the customer personas better to inform their investment choices? Many VCs have CISO friends, but I know from private conversations that these CISO friends often view the VCs as useful fools who can help them gain advisory or board positions.</p>
<p>If it sounds hopeless, well… I’m not sure it isn’t. But I refuse to <a href="/blog/posts/red-pill-of-resilience-infosec/">give in to nihilism</a>. There are absolutely those of us who are fighting to change things, but there is a lot of entrenched power that benefits from keeping these dynamics the way they are in information security.</p>
<p>It will take those willing to risk their power to speak out in the truest form of thought leadership there is. Speaking into a warm and fuzzy echo chamber isn’t thought leadership; bravely challenging the status quo, armed with evidence, is.</p>
<p>We can keep the corporate-friendly cerulean and navy aesthetic, the blustering loudness, and yes, even the free pens, but we need to dedicate these efforts not to the thrill of our industry finally being considered “cool,” but to solving hard problems and protecting organizations in the way they need.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>InfoSec Startup Buzzword Bingo: 2019 Edition</title>
            <link>https://kellyshortridge.com/blog/posts/infosec-buzzword-bingo-2019-edition/</link>
            <pubDate>Wed, 27 Feb 2019 16:57:23 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/infosec-buzzword-bingo-2019-edition/</guid>
            <description>This is the third edition of my Infosec Buzzword Bingo, just in time for 2019’s RSA Conference (RSAC). Rather than relying on my keenly tuned snake oil spidey senses to generate the words populating the bingo card, I took a more data-driven approach this year.
I surveyed 100 companies’ websites[1], the vast majority of which are exhibiting at RSAC and possess VC funding. I did not include any of the large security vendors[2], who probably could populate their own bingo cards across their mastodonian websites.
The idea is to take this with you to RSAC or any other vendor halls at information security conferences this year to see how many times you can win bingo at a single vendor booth! For more fun, try out my Cyber Tagline Generator script to create your own maniacally terrible buzzword salad (then @ it to me on Twitter).
Without further introduction, here’s the bingo card in all its glory — read on if you want more analysis on the stats: The top word by far this year was automated and its variants — nearly three quarters of all companies used it on their sites in one way or another (e.g. automatically, automation, automates, etc.). There were a few repeats from prior bingo cards, perhaps proving my natural acuity for sensing the buzziest buzzwords. The following table of the top 25 buzzwords (the ones on the bingo card) includes the number of companies who cited the buzzword, along with whether the buzzword was on prior bingo cards:
Which buzzwords are on the rise? Let’s start with the words on the bingo card itself. You can’t just enable security people anymore, you must empower them. Seemingly a reaction to CISOs through SecOps analysts complaining about the complexity of security tools, simple, simplifies, and simplified are being used to (proactively ;) assuage concerns.
Allegedly, security professionals now want to discover things, and I suppose with most data lakes being more akin to data swamps, the predilection for adventure that discovery implies is required. And, taking a page from the DevOps world, orchestration is peppered around enough now to create a veritable symphony of infosec startups orchestrating away.
Not quite making it to the bingo card, but still heating up, is collaborate and collaboration. Who knew that infosec teams wanted to work together? And if your security product isn’t optimized, what are you even doing? Note that you do not have to say for what your product is optimizing, just that it is, in fact, optimized.
Finally, solutions for runtime security are growing, which basically is just saying it doesn’t break the computer as it is computing. The fact that this assurance must be stated at all says more about the infosec vendor situation than perhaps even a long thought piece can.
Which buzzwords are starting to fall? Hunting / hunt, most frequently used with “threat” before it, just barely missed making the Infosec Buzzword Bingo card again this year. I anticipate that it’ll fall even further by next year as the category morphs into SIEM 2.0. While behavioral is still holding strong, anomalies is beginning to wane — it’s much sexier to say you have behavior-based machine learning or AI instead.
As far as threats, they are notably less sophisticated, and less found on the dark web or in IoT devices. You def don’t want to talk about your product as a single-pane-of-glass anymore (try intuitive instead, which is on the rise). And, cloud-based is becoming less relevant as most security solutions move to a SaaSy model. If anything, companies should now specify when they aren’t cloud-based.
Which buzzwords are the weirdest? Most of the weirdo-words are only used in a handful of companies’ marketing spiels, thankfully. But, a full seven companies used real-world, which, I mean — as far as I know, we aren’t trying to secure hypothetical worlds? I’m being purposefully obtuse (in the Dostoevskyan-fool spirit), since I do understand that too many solutions catch bad stuff only in theory without rigorously testing against what attackers will actually do (or without even really considering this during the product’s conception). But, its usage still makes me sad.
Another odd buzzword was holistic, also cited by seven companies, which is perhaps the most credible buzzword due to its close association with essential “healing” oils. However, the one I hate the most by far is quantum-resistant. For my sanity’s sake, I am grateful only one company chose to use that term.
[1] I scoped it to the websites’ landing and product/platform pages (e.g. no blog content).
[2] When I say “large” security vendors, think those who are publicly traded, are more than ten years old, or who have entire product “suites.”
</description>
            <atom:content type="html"><![CDATA[<p>This is the <a href="https://twitter.com/swagitda_/status/912494974779973632">third</a> <a href="https://twitter.com/swagitda_/status/967566556262694912">edition</a> of my Infosec Buzzword Bingo, just in time for 2019’s RSA Conference (RSAC). Rather than relying on my keenly tuned snake oil spidey senses to generate the words populating the bingo card, I took a more data-driven approach this year.</p>
<p>I surveyed 100 companies’ websites<a name="back-1"></a><a href="#cite-1">[1]</a>, the vast majority of which are exhibiting at RSAC and possess VC funding. I did not include any of the large security vendors<a name="back-2"></a><a href="#cite-2">[2]</a>, who probably could populate their own bingo cards across their mastodonian websites.</p>
<p>The idea is to take this with you to RSAC or any other vendor halls at information security conferences this year to see how many times you can win bingo at a single vendor booth! For more fun, <a href="https://github.com/swagitda/infosec-buzzword-bingo/blob/master/cyber-taglines.py">try out my Cyber Tagline Generator script</a> to create your own maniacally terrible buzzword salad (<a href="https://twitter.com/swagitda_/">then @ it to me on Twitter</a>).</p>
<p>Without further introduction, here’s the bingo card in all its glory — read on if you want more analysis on the stats:
<img src="/blog/img/infosec-startup-bingo-2019.png" alt="Infosec Startup Buzzword Bingo card for 2019. The buzzwords, from top left to bottom right, include: Threat. Context. Machine Learning and AI. Discover. Real-time. Cyber. Insights. Seamless. Comprehensive. Simplify. Intelligence. Scalable. Automation (the center square). Actionable. Platform. Deep. Orchestrated. Empower. Prioritization. Behavioral. Visibility. Workflow. Advanced. Unknown. Continuous."></p>
<p>The top word by far this year was <em><strong>automated</strong></em> and its variants — nearly three quarters of all companies used it on their sites in one way or another (e.g. <em><strong>automatically</strong></em>, <em><strong>automation</strong></em>, <em><strong>automates</strong></em>, etc.). There were a few repeats from prior bingo cards, perhaps proving my natural acuity for sensing the buzziest buzzwords. The <a href="https://github.com/swagitda/infosec-buzzword-bingo/blob/master/buzzword-bingo.md">following table</a> of the top 25 buzzwords (the ones on the bingo card) includes the number of companies who cited the buzzword, along with whether the buzzword was on prior bingo cards:</p>
<script src="https://gist.github.com/swagitda/7a55615c8b7889b92a3edaae7e8462e2.js"></script>
<h2 id="which-buzzwords-are-on-the-rise">Which buzzwords are on the rise?</h2>
<p>Let’s start with the words on the bingo card itself. You can’t just enable security people anymore, you must <em><strong>empower</strong></em> them. Seemingly a reaction to CISOs through SecOps analysts complaining about the complexity of security tools, <em><strong>simple</strong></em>, <em><strong>simplifies</strong></em>, and <em><strong>simplified</strong></em> are being used to (proactively ;) assuage concerns.</p>
<p>Allegedly, security professionals now want to <em><strong>discover</strong></em> things, and I suppose with most data lakes being more akin to data swamps, the predilection for adventure that <em><strong>discovery</strong></em> implies is required. And, taking a page from the DevOps world, <em><strong>orchestration</strong></em> is peppered around enough now to create a veritable symphony of infosec startups <em><strong>orchestrating</strong></em> away.</p>
<p>Not quite making it to the bingo card, but still heating up, is <em><strong>collaborate</strong></em> and <em><strong>collaboration</strong></em>. Who knew that infosec teams wanted to work together? And if your security product isn’t <em><strong>optimized</strong></em>, what are you even doing? Note that you do not have to say for what your product is <em><strong>optimizing</strong></em>, just that it is, in fact, <em><strong>optimized</strong></em>.</p>
<p>Finally, solutions for <em><strong>runtime</strong></em> security are growing, which basically is just saying it doesn’t break the computer as it is computing. The fact that this assurance must be stated at all says more about the infosec vendor situation than perhaps <a href="https://techcrunch.com/2019/02/13/the-infosec-reckoning-has-arrived/">even a long thought piece</a> can.</p>
<h2 id="which-buzzwords-are-starting-to-fall">Which buzzwords are starting to fall?</h2>
<p><em><strong>Hunting</strong></em> / <em><strong>hunt</strong></em>, most frequently used with “threat” before it, just barely missed making the Infosec Buzzword Bingo card again this year. I anticipate that it’ll fall even further by next year as the category morphs into SIEM 2.0. While <em><strong>behavioral</strong></em> is still holding strong, <em><strong>anomalies</strong></em> is beginning to wane — it’s much sexier to say you have <em><strong>behavior-based machine learning</strong></em> or <em><strong>AI</strong></em> instead.</p>
<p>As far as threats, they are notably less <em><strong>sophisticated</strong></em>, and less found on the <em><strong>dark</strong></em> web or in <em><strong>IoT</strong></em> devices. You def don’t want to talk about your product as a <em><strong>single-pane-of-glass</strong></em> anymore (try <em><strong>intuitive</strong></em> instead, which is on the rise). And, <em><strong>cloud-based</strong></em> is becoming less relevant as most security solutions move to a SaaSy model. If anything, companies should now specify when they <em>aren’t</em> cloud-based.</p>
<h2 id="which-buzzwords-are-the-weirdest">Which buzzwords are the weirdest?</h2>
<p>Most of the weirdo-words are only used in a handful of companies’ marketing spiels, thankfully. But, a full seven companies used <em><strong>real-world</strong></em>, which, I mean — as far as I know, we aren’t trying to secure hypothetical worlds? I’m being purposefully obtuse (in the Dostoevskyan-fool spirit), since I do understand that too many solutions catch bad stuff only in theory without rigorously testing against what attackers will actually do (or without even really considering this during the product’s conception). But, its usage still makes me sad.</p>
<p>Another odd buzzword was <em><strong>holistic</strong></em>, also cited by seven companies, which is perhaps the most credible buzzword due to its close association with essential “healing” oils. However, the one I hate the most by far is <em><strong>quantum-resistant</strong></em>. For my sanity’s sake, I am grateful only one company chose to use that term.</p>
<hr>
<p><a name="cite-1"></a><a href="#back-1">[1]</a> I scoped it to the websites’ landing and product/platform pages (e.g. no blog content).</p>
<p><a name="cite-2"></a><a href="#back-2">[2]</a> When I say “large” security vendors, think those who are publicly traded, are more than ten years old, or who have entire product “suites.”</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Analyzing the 2019 RSA Innovation Sandbox Finalists</title>
            <link>https://kellyshortridge.com/blog/posts/analyzing-2019-rsa-innovation-sandbox-finalists/</link>
            <pubDate>Tue, 05 Feb 2019 19:02:53 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/analyzing-2019-rsa-innovation-sandbox-finalists/</guid>
            <description>This year’s nominees for the 2019 edition of the RSA Conference’s Innovation Sandbox were announced this morning. As I’m wont to do, I wanted to explore the funding side of these ten startups.
Total Funding Raised &amp; the Latest Stage for each 2019 RSA Innovation Sandbox Finalist ($USD millions)
The median funding raised by all ten startups is $10.5 million (mean of $10.3 million), which makes sense for an early stage-flavored competition. Additional analysis is required to compare funding levels of finalists from prior years to see if there is a correlation with those who win the competition.
Chart of the Distribution of 2019 Innovation Sandbox Finalists, by Funding Stage
The distribution of startups by stage is also consistent, with the highest concentration at the Series A stage. Of course, another way to determine startup maturity is through actual temporal age. The median number of months[1] since the finalists’ founded dates is 27.5, equating to roughly 2.3 years. The eldest is just shy of four years old.
Category-wise, there is a healthy distribution across sub-sectors. The highest concentration is in DevSecOps-related tools, which range from detecting attacks in modern infrastructure to “API security.” There are also two finalists tackling data protection/privacy. The rest include one startup each tackling anti-fraud, appsec, asset management, firmware/hardware security, and IAM. If you follow me, you know I find masochistic pleasure in examining the nature of infosec buzzwords, and the Innovation Sandbox word cloud does not disappoint on this front:
“a security platform to automatically protect human fraud from exhaustive enforcement”
What is perhaps most peculiar about this group of finalists to those who follow VC funding trends in infosec (hopefully I am not alone in this nerddom), is that there is scant overlap between investors in these startups. Perhaps, like with movie studios and the Oscars, each VC selects the startup in their portfolio they believe is in the strongest position to win. The sole overlapping investor was ClearSky, who led Capsule8’s Series B last August, and also led CloudKnox’s “Venture Round” (which feels very Series A) last October. The full list of lead investors in these ten startups is:
Bain Capital Ventures Bessemer ClearSky x 2 Foundation Capital Madrona Venture Group Mayfield Fund New Enterprise Associates PayPal PSP Growth Rally Ventures S Capital Team8 USVP Y Combinator YL Ventures All finalists are based inside of the U.S. Of those, 40% are based in California — New York has two, and Kansas, New Jersey, Oregon, and Virginia have one startup each.
As a final statistic, only one company’s founding team includes any female founders.
[1] Note: Not every startup gives the precise day they were founded, so I used a combination of Crunchbase and LinkedIn data (e.g., the start date the founder lists) to get as close as possible.
</description>
            <atom:content type="html"><![CDATA[<p>This year’s nominees for the 2019 edition of the RSA Conference’s Innovation Sandbox were <a href="https://www.rsaconference.com/press/98/rsa-conference-announces-finalists-for-innovation">announced this morning</a>. As I’m wont to do, I wanted to explore the funding side of these ten startups.</p>
<p><img src="/blog/img/rsa-sandbox-02.png" alt="Chart of funding raised by RSA Innovation Sandbox finalists"><em>Total Funding Raised &amp; the Latest Stage for each 2019 RSA Innovation Sandbox Finalist ($USD millions)</em></p>
<p>The median funding raised by all ten startups is $10.5 million (mean of $10.3 million), which makes sense for an early stage-flavored competition. Additional analysis is required to compare funding levels of finalists from prior years to see if there is a correlation with those who win the competition.</p>
<figure>
    <img src="/blog/img/rsa-sandbox-01.png"
         alt="Chart of the Distribution of 2019 Innovation Sandbox Finalists, by Funding Stage"/> <figcaption>
            <p>Chart of the Distribution of 2019 Innovation Sandbox Finalists, by Funding Stage</p>
        </figcaption>
</figure>
<p>The distribution of startups by stage is also consistent, with the highest concentration at the Series A stage. Of course, another way to determine startup maturity is through actual temporal age. The median number of months<a name="back-1"></a><a href="#cite-1">[1]</a> since the finalists’ founded dates is 27.5, equating to roughly 2.3 years. The eldest is just shy of four years old.</p>
<p>Category-wise, there is a healthy distribution across sub-sectors. The highest concentration is in DevSecOps-related tools, which range from detecting attacks in modern infrastructure to “API security.” There are also two finalists tackling data protection/privacy. The rest include one startup each tackling anti-fraud, appsec, asset management, firmware/hardware security, and IAM. If <a href="https://twitter.com/swagitda_">you follow me</a>, you know I find <a href="/blog/posts/2019-cybersecurity-predictions/">masochistic pleasure</a> in examining the nature of infosec buzzwords, and the Innovation Sandbox word cloud does not disappoint on this front:</p>
<p><img src="/blog/img/rsa-sandbox-wordcloud.png" alt="Wordcloud of buzzwords from RSA Innovation Sandbox finalists in 2019"><em>“a security platform to automatically protect human fraud from exhaustive enforcement”</em></p>
<p>What is perhaps most peculiar about this group of finalists to those who follow VC funding trends in infosec (hopefully I am not alone in this nerddom) is that there is scant overlap between investors in these startups. Perhaps, like with movie studios and the Oscars, each VC selects the startup in their portfolio they believe is in the strongest position to win. The sole overlapping investor was ClearSky, who led Capsule8’s Series B last August, and also led CloudKnox’s “Venture Round” (which feels very Series A) last October. The full list of lead investors in these ten startups is:</p>
<ul>
<li>Bain Capital Ventures</li>
<li>Bessemer</li>
<li>ClearSky x 2</li>
<li>Foundation Capital</li>
<li>Madrona Venture Group</li>
<li>Mayfield Fund</li>
<li>New Enterprise Associates</li>
<li>PayPal</li>
<li>PSP Growth</li>
<li>Rally Ventures</li>
<li>S Capital</li>
<li>Team8</li>
<li>USVP</li>
<li>Y Combinator</li>
<li>YL Ventures</li>
</ul>
<p>All finalists are based in the U.S. Of those, 40% are based in California — New York has two, and Kansas, New Jersey, Oregon, and Virginia have one startup each.</p>
<p>As a final statistic, only one company’s founding team includes any female founders.</p>
<hr>
<p><a name="cite-1"></a><a href="#back-1">[1]</a> Note: Not every startup gives the precise day they were founded, so I used a combination of Crunchbase and LinkedIn data (e.g., the start date the founder lists) to get as close as possible.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>The Cyber Tub: Communicating the Dynamics of Information Security Risk Management</title>
            <link>https://kellyshortridge.com/blog/posts/the-cyber-tub/</link>
            <pubDate>Thu, 20 Dec 2018 22:02:49 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/the-cyber-tub/</guid>
            <description>This is probably how a Cyber Tub looks, right?
People struggle to understand how risk accumulates in complex systems, thereby also not understanding the extent to which risk must be reduced. This misapprehension can lead to “wait and see” decisions that cause a problem to snowball, or mitigations that don’t meaningfully reduce risk, creating the feeling of just barely treading water in your security program.
It is challenging for people to understand risk dynamics conceptually for two primary reasons. First, we are bad at tying inflows and outflows to the current level of risk in a system, as we tend to believe outputs are positively correlated with inputs. For example, from the climate change realm, 63% of MIT graduate students erroneously believed that if you stabilize emissions above the rate they’re being removed, atmospheric CO2 would stabilize.[1] If emissions are still higher than reductions, there still will be net pollution — it will just be added at a more stable rate.
Second, we tend to ignore the accumulation of effects from inflows.[2] For example, you may reduce the amount you overspend in a given year, but that doesn’t mean your personal debt is being reduced — you must earn a surplus over a period of time to pay down the debt you already accumulated. Ignoring accumulation leads us to underestimate the magnitude of mitigations required to stabilize risk — let alone make a dent in decreasing the risk level towards our goal.
What can we do about this lack of intuitive comprehension? A useful analogy to help people conceptually is a bathtub, with its straightforward inflows and outflows. When discussing information security risk, this analogy can help policy makers grasp the true implications of the decisions they’re making. Too often, mitigations are believed to “solve” the problem, while in reality, the inflows contributing to the problem are still outpacing the benefits from mitigations — but no further action is taken, resulting in continued accumulation of risk.
Thus, I’ve conceived the “Cyber Tub” as a way to better communicate information security risk and ensure its dynamics — from accumulation to reduction — are well understood. Let’s delve into the analogy.
The Spout Let’s say you have a bathtub already full of water. The bathtub is actually a “Cyber Tub,” and the water already in it represents your risk level — it could include things like your legacy systems or even the risk of credential theft in modern systems. The spout is actively running, adding a steady, hot stream of complexity into the tub. You don’t want the tub to overflow, because in real life, that leads to a state where you want to tear your hair out from all the complexity you must manage, and probably something will go wrong. So what do you do?
The Drain First, you need to install a drain — the patch management drain, in fact — if you don’t have one already, which is going to be challenging given all the water already in the tub. If you do have a drain, it’s likely clogged with lots of gross hair from people using the tub, so you’re going to have to clean it out manually so it can drain — and you’ll have to do that each time manually when it gets clogged again.
But, of course, you’re very clever, so you decide to install a self-cleaning drain — an automated patch management solution. However, you already know it will take a lot of effort to implement, and it probably won’t work perfectly the first time (and probably not every time in the future, either). So, you perform some calculations to see which helps keep the tub from overflowing more effectively given how many manual uncloggers you have vs. how many auto-drain maintainers you have on your team.
The Bucket Regrettably, your drain solution only keeps the tub from filling up more rather than helping the water level go down. Clearly you need a bucket to remove a bunch of water all at once. But where do you get the bucket? Which type of bucket is right? How do you dunk in the bucket without splashing a bunch of water out? Where do you put the water in the bucket after? Do you need multiple buckets? There are lots of things to think about when transitioning to the bucket life.
Think of this like transitioning to a cloud or containerized world — the transition costs will be non-trivial and involve a lot of thinking over how to carefully lift the water without losing any.[3] You can also only transition a certain amount of code at a time, so there needs to be a rollout plan as well. After all, even if you can buy a bunch of buckets at one time, it’s unlikely you can dump them all in at once to clear out the tub without something going wrong.
As you can see, there’s a lot of calculation here, including whether the bucket-based transition to cloud / containers can clear out the tub, since we already know the patch management drain will cancel out the complexity faucet, but nothing more. You also can’t forget that managing a bunch of little buckets is different from managing one tub, so you’ll need to consider the potential risks from that, too.
Additional Examples To solidify exactly what I mean through this analogy, let’s consider other examples from information security and how they can be viewed through the Cyber Tub lens:
A design review by the security team helps steady the water level as you release a new feature or product. By addressing security issues at the design phase, you tackle problems before they go into production and come out of the spout (you can use a threat model to prioritize). You could also require sign-off by the security team before release. Therefore, when a new feature or product is released, it won’t add as much water to the Cyber Tub. Pentesting only tells you a very rough measure of your water level is — do limited findings mean your app is secure, or instead that your testing team is inadequate? Even if you remedy issues from the results of the pentest, it may only be counteracting the spout slightly more — you can think of a pentest like a leaky bucket that may or may not help. Adopting a continuous testing model instead can act as a healthy drain and hedge against ongoing risk that point-in-time assessments cannot catch. If you are concerned about credential theft risk, think of it as the water in the bathtub. The drain could represent requirements for complex passwords —perhaps enough to counteract the additional risk of each new credential added. The bucket could be enforcing the use of SSO and a password manager, which will actually lower the level of the risk engendered by the existing pool of credentials.[4] Conclusion For any problem in the information security risk space, try to think of it in terms of this “Cyber Tub.” Consider these four key questions:
What are the inflows that add to the problem? What can you do to reduce water coming in from the spout? How can you keep the drain open and draining quickly? What mitigations will actually make a dent in the problem? What are the best buckets available?[5] While the Cyber Tub may seem like an overly simplistic analogy, that is part of its beauty — nearly anyone can understand it, as long as they’re familiar with the basics of how a bathtub works. We can’t expect to present facts and figures to non-experts and have them perfectly grasp the dynamic processes around information security risk. Let’s give everyone a helping hand and start presenting information security risk in a way people can actually understand it — a way that leads to smarter decision making.
References [1] Sterman, J. D., &amp; Sweeney, L. B. (2007). Understanding public complacency about climate change: Adults’ mental models of climate change violate conservation of matter. Climatic Change, 80(3–4), 213–238. doi:10.1007/s10584–006–9107–5
[2] Sterman, J. D. (2012). Sustaining Sustainability: Creating a Systems Science in a Fragmented Academy and Polarized World. Sustainability Science, 21–58. doi:10.1007/978–1–4614–3188–6_2
[3] It’s surprisingly difficult to find a succinct explanation for how modern infrastructure can help with information security, outside of vendor drivel. For now, please check out the fulltext of my keynote on resilience in infosec, specifically the “adaptability” and “transformability” sections. You can also view these resources by Leviathan, although they are a bit out of date (2015).
[4] Thank you to Julian Cohen for this example.
[5] You also have to consider the difficulty implementing these mitigations in the context of the stagnant bathwater — the problem — potentially obscuring your path to implementation (a topic for another time).
I also suggest reading: Åström, K.J. and Murray, R.M. Feedback Systems: An Introduction for Scientists and Engineers.
Thank you to Julian Cohen, Camille Fournier, and Alex Rasmussen.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/cyber-tub.png" alt="A tub covered in Matrix code"><em>This is probably how a Cyber Tub looks, right?</em></p>
<p>People struggle to understand how risk accumulates in complex systems, thereby also not understanding the extent to which risk must be reduced. This misapprehension can lead to “wait and see” decisions that cause a problem to snowball, or mitigations that don’t meaningfully reduce risk, creating the feeling of just barely treading water in your security program.</p>
<p>It is challenging for people to understand risk dynamics conceptually for two primary reasons. First, we are bad at tying inflows and outflows to the current level of risk in a system, as we tend to believe outputs are positively correlated with inputs. For example, from the climate change realm, 63% of MIT graduate students erroneously believed that if you stabilize emissions at a level above the rate at which CO2 is being removed, atmospheric CO2 would stabilize.<a name="back-1"></a><a href="#cite-1">[1]</a> If emissions remain higher than removals, CO2 will still accumulate — it will just be added at a more stable rate.</p>
<p>Second, we tend to ignore the accumulation of effects from inflows.<a name="back-2"></a><a href="#cite-2">[2]</a> For example, you may reduce the amount you overspend in a given year, but that doesn’t mean your personal debt is being reduced — you must earn a surplus over a period of time to pay down the debt you already accumulated. Ignoring accumulation leads us to underestimate the magnitude of mitigations required to stabilize risk — let alone make a dent in decreasing the risk level towards our goal.</p>
<p>What can we do about this lack of intuitive comprehension? A useful analogy to help people conceptually is a bathtub, with its straightforward inflows and outflows. When discussing information security risk, this analogy can help policy makers grasp the true implications of the decisions they’re making. Too often, mitigations are believed to “solve” the problem, while in reality, the inflows contributing to the problem are still outpacing the benefits from mitigations — but no further action is taken, resulting in continued accumulation of risk.</p>
<p>Thus, I’ve conceived the “Cyber Tub” as a way to better communicate information security risk and ensure its dynamics — from accumulation to reduction — are well understood. Let’s delve into the analogy.</p>
<hr>
<h3 id="the-spout">The Spout</h3>
<p>Let’s say you have a bathtub already full of water. The bathtub is actually a “Cyber Tub,” and the water already in it represents your risk level — it could include things like your legacy systems or even the risk of credential theft in modern systems. The spout is actively running, adding a steady, hot stream of complexity into the tub. You don’t want the tub to overflow, because in real life, that leads to a state where you want to tear your hair out from all the complexity you must manage, and probably something will go wrong. So what do you do?</p>
<h3 id="the-drain">The Drain</h3>
<p>First, you need to install a drain — the patch management drain, in fact — if you don’t have one already, which is going to be challenging given all the water already in the tub. If you do have a drain, it’s likely clogged with lots of gross hair from people using the tub, so you’re going to have to clean it out manually so it can drain — and you’ll have to do that manually each time it gets clogged again.</p>
<p>But, of course, you’re very clever, so you decide to install a self-cleaning drain — an automated patch management solution. However, you already know it will take a lot of effort to implement, and it probably won’t work perfectly the first time (and probably not every time in the future, either). So, you perform some calculations to see which option keeps the tub from overflowing more effectively, given how many manual uncloggers vs. auto-drain maintainers you have on your team.</p>
<h3 id="the-bucket">The Bucket</h3>
<p>Regrettably, your drain solution only keeps the tub from filling up more rather than helping the water level go down. Clearly you need a bucket to remove a bunch of water all at once. But where do you get the bucket? Which type of bucket is right? How do you dunk in the bucket without splashing a bunch of water out? Where do you put the water in the bucket after? Do you need multiple buckets? There are lots of things to think about when transitioning to the bucket life.</p>
<p>Think of this like transitioning to a cloud or containerized world — the transition costs will be non-trivial and involve a lot of thinking over how to carefully lift the water without losing any.<a name="back-3"></a><a href="#cite-3">[3]</a> You can also only transition a certain amount of code at a time, so there needs to be a rollout plan as well. After all, even if you can buy a bunch of buckets at one time, it’s unlikely you can dump them all in at once to clear out the tub without something going wrong.</p>
<p>As you can see, there’s a lot of calculation here, including whether the bucket-based transition to cloud / containers can clear out the tub, since we already know the patch management drain will cancel out the complexity faucet, but nothing more. You also can’t forget that managing a bunch of little buckets is different from managing one tub, so you’ll need to consider the potential risks from that, too.</p>
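<p>To make the stock-and-flow dynamic concrete, here is a minimal sketch of the tub as a tiny simulation, with made-up monthly numbers purely for illustration (none of these rates correspond to real measurements):</p>
<pre><code class="language-python"># A minimal stock-and-flow sketch of the Cyber Tub, with made-up monthly numbers
# purely for illustration -- the point is the accumulation dynamic, not the units.
def simulate(months=12, risk=100.0, spout=10.0, drain=10.0, bucket=25.0,
             bucket_months=(6, 9)):
    """Track the risk 'water level' as complexity flows in and mitigations flow out."""
    for month in range(1, months + 1):
        risk += spout          # complexity keeps pouring in from the spout
        risk -= drain          # the patch management drain empties at a steady rate
        if month in bucket_months:
            risk -= bucket     # a one-off migration scoops out a chunk of risk
        risk = max(risk, 0.0)
        print(f"month {month:2d}: risk level {risk:6.1f}")

simulate()
</code></pre>
<p>When the drain merely matches the spout, the level never falls; only the buckets move it down, which is precisely the miscalculation this analogy is meant to expose.</p>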
<h3 id="additional-examples">Additional Examples</h3>
<p>To solidify exactly what I mean through this analogy, let’s consider other examples from information security and how they can be viewed through the Cyber Tub lens:</p>
<ol>
<li>A design review by the security team helps steady the water level as you release a new feature or product. By addressing security issues at the design phase, you tackle problems before they go into production and come out of the spout (you can use a threat model to prioritize). You could also require sign-off by the security team before release. Therefore, when a new feature or product is released, it won’t add as much water to the Cyber Tub.</li>
<li>Pentesting only gives you a very rough measure of what your water level is — do limited findings mean your app is secure, or instead that your testing team is inadequate? Even if you remedy issues from the results of the pentest, it may only be counteracting the spout slightly more — you can think of a pentest like a leaky bucket that may or may not help. Adopting a continuous testing model instead can act as a healthy drain and hedge against ongoing risk that point-in-time assessments cannot catch.</li>
<li>If you are concerned about credential theft risk, think of it as the water in the bathtub. The drain could represent requirements for complex passwords — perhaps enough to counteract the additional risk of each new credential added. The bucket could be enforcing the use of SSO and a password manager, which will actually lower the level of the risk engendered by the existing pool of credentials.<a name="back-4"></a><a href="#cite-4">[4]</a></li>
</ol>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>For any problem in the information security risk space, try to think of it in terms of this “Cyber Tub.” Consider these four key questions:</p>
<ol>
<li>What are the inflows that add to the problem?</li>
<li>What can you do to reduce water coming in from the spout?</li>
<li>How can you keep the drain open and draining quickly?</li>
<li>What mitigations will actually make a dent in the problem? What are the best buckets available?<a name="back-5"></a><a href="#cite-5">[5]</a></li>
</ol>
<p>While the Cyber Tub may seem like an overly simplistic analogy, that is part of its beauty — nearly anyone can understand it, as long as they’re familiar with the basics of how a bathtub works. We can’t expect to present facts and figures to non-experts and have them perfectly grasp the dynamic processes around information security risk. Let’s give everyone a helping hand and start presenting information security risk in a way people can actually understand it — a way that leads to smarter decision making.</p>
<hr>
<h2 id="references">References</h2>
<p><a name="cite-1"></a><a href="#back-1">[1]</a> Sterman, J. D., &amp; Sweeney, L. B. (2007). <a href="http://web.mit.edu/jsterman/www/StermanSweeneyClimaticChangeFinal.pdf">Understanding public complacency about climate change: Adults’ mental models of climate change violate conservation of matter.</a> <em>Climatic Change, 80</em>(3–4), 213–238. doi:10.1007/s10584–006–9107–5</p>
<p><a name="cite-2"></a><a href="#back-2">[2]</a> Sterman, J. D. (2012). <a href="http://jsterman.scripts.mit.edu/docs/Sterman%20Sustaining%20Sustainability%2010-2.pdf">Sustaining Sustainability: Creating a Systems Science in a Fragmented Academy and Polarized World.</a> <em>Sustainability Science</em>, 21–58. doi:10.1007/978–1–4614–3188–6_2</p>
<p><a name="cite-3"></a><a href="#back-3">[3]</a> It’s surprisingly difficult to find a succinct explanation for how modern infrastructure can help with information security, outside of vendor drivel. For now, please check out <a href="/blog/posts/red-pill-of-resilience-infosec/">the fulltext of my keynote on resilience in infosec</a>, specifically the “adaptability” and “transformability” sections. You can also view <a href="https://www.leviathansecurity.com/cloudsecurity">these resources by Leviathan</a>, although they are a bit out of date (2015).</p>
<p><a name="cite-4"></a><a href="#back-4">[4]</a> Thank you to <a href="https://medium.com/@HockeyInJune/product-security-14127b5838ba">Julian Cohen</a> for this example.</p>
<p><a name="cite-5"></a><a href="#back-5">[5]</a> You also have to consider the difficulty implementing these mitigations in the context of the stagnant bathwater — the problem — potentially obscuring your path to implementation (a topic for another time).</p>
<p>I also suggest reading: Åström, K.J. and Murray, R.M. <a href="http://www.cds.caltech.edu/~murray/books/AM08/pdf/am07-complete_17Jul07.pdf">Feedback Systems: An Introduction for Scientists and Engineers</a>.</p>
<p><em>Thank you to Julian Cohen, Camille Fournier, and Alex Rasmussen.</em></p>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2018 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2018-reading-list/</link>
            <pubDate>Mon, 17 Dec 2018 18:35:02 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2018-reading-list/</guid>
            <description>My perennial New Year’s resolution is to read one fiction and one non-fiction book per month. I tend to fail, and this year I only averaged 1.33 books per month (which, interestingly, is the same as last year; 2016 was 1.5 per month).
As you can tell from this list, I became a bit obsessed with afrofuturism and am still in awe of the immersive worldbuilding within the genre’s novels I read. I gravitated more towards fiction this year in general, which meant I snuck in fewer non-fiction books than usual (I did read more academic papers this year, but they’re far more arid).
If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out my 2017 and my 2016 reading lists.
Fiction 1Q84 by Haruki Murakami
After the Flare: A Novel by Olukotun Deji Bryce
All Systems Red: the Murderbot Diaries by Martha Wells
The Black God’s Drums by P. Djèlí Clark
Children of Blood and Bone (Legacy of Orisha Book 1) by Tomi Adeyemi
The Last Wish: Introducing the Witcher by Andrzej Sapkowski
The Obelisk Gate (The Broken Earth Book 2) by N. K. Jemisin
Pale Fire by Vladimir Nabokov
The Stone Sky (The Broken Earth Book 3) by N. K. Jemisin
The Tiger’s Daughter by K Arsenault Rivera
Who Fears Death by Nnedi Okorafor
A Wizard of Earthsea (The Earthsea Cycle Series Book 1) by Ursula K. Le Guin
Non-Fiction Anticipating Surprise: Analysis for Strategic Warning by Cynthia M. Grabo
Behind Human Error by David D. Woods, Sidney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter
The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie
Complexity: A Guided Tour by Melanie Mitchell
</description>
            <atom:content type="html"><![CDATA[<p>My perennial New Year’s resolution is to read one fiction and one non-fiction book per month. I tend to fail, and this year I only averaged 1.33 books per month (which, interestingly, is the same as last year; 2016 was 1.5 per month).</p>
<p>As you can tell from this list, I became a bit obsessed with <a href="https://en.wikipedia.org/wiki/Afrofuturism">afrofuturism</a> and am still in awe of the immersive worldbuilding within the genre’s novels I read. I gravitated more towards fiction this year in general, which meant I snuck in fewer non-fiction books than usual (I did read more academic papers this year, but they’re far more arid).</p>
<p>If you’re looking for more science fiction, speculative fiction, or non-fiction recommendations, check out <a href="/blog/posts/2017-reading-list">my 2017</a> and <a href="/blog/posts/2016-reading-list">my 2016</a> reading lists.</p>
<hr>
<h2 id="fiction">Fiction</h2>
<p><a href="https://www.amazon.com/1Q84-Vintage-International-Haruki-Murakami-ebook/dp/B004LROUW2/">1Q84</a> by Haruki Murakami</p>
<p><a href="https://www.amazon.com/After-Flare-Olukotun-Deji-Bryce-ebook/dp/B0759VMZ66/">After the Flare: A Novel</a> by Olukotun Deji Bryce</p>
<p><a href="https://www.amazon.com/All-Systems-Red-Kindle-Single-ebook/dp/B01MYZ8X5C">All Systems Red: the Murderbot Diaries</a> by Martha Wells</p>
<p><a href="https://www.amazon.com/Black-Gods-Drums-Dj%C3%A8l%C3%AD-Clark-ebook/dp/B0791JV58Z/">The Black God’s Drums</a> by P. Djèlí Clark</p>
<p><a href="https://www.amazon.com/Black-Gods-Drums-Dj%C3%A8l%C3%AD-Clark-ebook/dp/B0791JV58Z/">Children of Blood and Bone (Legacy of Orisha Book 1)</a> by Tomi Adeyemi</p>
<p><a href="https://www.amazon.com/Black-Gods-Drums-Dj%C3%A8l%C3%AD-Clark-ebook/dp/B0791JV58Z/">The Last Wish: Introducing the Witcher</a> by Andrzej Sapkowski</p>
<p><a href="https://www.amazon.com/Obelisk-Gate-Broken-Earth-Book-ebook/dp/B01922I1GG/">The Obelisk Gate (The Broken Earth Book 2)</a> by N. K. Jemisin</p>
<p><a href="https://www.amazon.com/Pale-Vintage-International-Vladimir-Nabokov-ebook/dp/B004KABDSY/">Pale Fire</a> by Vladimir Nabokov</p>
<p><a href="https://www.amazon.com/Stone-Sky-Broken-Earth-Book-ebook/dp/B01N7EQOFA/">The Stone Sky (The Broken Earth Book 3)</a> by N. K. Jemisin</p>
<p><a href="https://www.amazon.com/Tigers-Daughter-Ascendant-Book-ebook/dp/B01MT7C6T7/">The Tiger’s Daughter</a> by K Arsenault Rivera</p>
<p><a href="https://www.amazon.com/Tigers-Daughter-Ascendant-Book-ebook/dp/B01MT7C6T7/">Who Fears Death</a> by Nnedi Okorafor</p>
<p><a href="https://www.amazon.com/Wizard-Earthsea-Cycle-Book-ebook/dp/B008T9L6AM/">A Wizard of Earthsea (The Earthsea Cycle Series Book 1)</a> by Ursula K. Le Guin</p>
<hr>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://www.amazon.com/Anticipating-Surprise-Analysis-Strategic-Warning-ebook/dp/B008H9Q5IW/">Anticipating Surprise: Analysis for Strategic Warning</a> by Cynthia M. Grabo</p>
<p><a href="https://www.amazon.com/Behind-Human-Error-David-Woods-ebook/dp/B075QFGTNP/">Behind Human Error</a> by David D. Woods, Sidney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter</p>
<p><a href="https://www.amazon.com/Book-Why-Science-Cause-Effect-ebook/dp/B075CR9QBJ/">The Book of Why: The New Science of Cause and Effect</a> by Judea Pearl and Dana Mackenzie</p>
<p><a href="https://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell-ebook/dp/B002SAUBWC/">Complexity: A Guided Tour</a> by Melanie Mitchell</p>
]]></atom:content>
        </item>
        
        <item>
            <title>2019 Cyber Security Predictions</title>
            <link>https://kellyshortridge.com/blog/posts/2019-cyber-security-predictions/</link>
            <pubDate>Wed, 05 Dec 2018 20:07:01 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2019-cyber-security-predictions/</guid>
            <description>Fed up with ridiculous infosec predictions for the upcoming year, I decided to aggregate them all and use the power of Markov Chains to generate my own list. What follows is the result, very lightly edited solely for readability. You can see last year’s edition here.
In 2019, we predict 2019. Cyber espionage, cybercriminals — in 2019, they materialize. What if this is a dangerous reality? For example, consider how the world feels sometimes. According to Ponemon, security leaders around the world feel sometimes.
During 2019 we expect to see an increase in cyber space. The prospects are understatement. If a sophisticated attack involves not one but five top-notch threats synergistically working together, the defense panorama could become very blurry. Security experts have a recipe for disaster.
We predict that criminals will further focus their efforts injudiciously, ignoring the lower severity vulnerabilities with known exploits in favor of largely academic high severity vulnerabilities. In 2019, we will see a version of this fictional attacker.
The purchase of cybersecurity has led to expanding attacks that will become more sophisticated in 2019 and beyond. We will continue to influence societal expectations on security, which will trickle down to companies through hundreds of thousands of vulnerable and easy targets for attackers to profit. Driven by many falling victim to feature misconceptions, more will become key targets. Cyber products that provide consolidated feature sets have a hard time understanding each customer’s specific pain points and the bad guys know this.
In 2019, even more high-profile breaches will push the security and privacy, finally. Security is argued about until we die. That’s a particularly terrifying threat.
Prediction #1: AI TECHNIQUES: ATTACKS WILL RESULT, HACKERS In this day and age of big data, artificial intelligence is the next weapon. The gold standard in hacking efficiency, weaponized AI offers attackers unparalleled insight into what, when, and where to strike. Attempts to weaponize AI offers attackers actual attacks. Systems could launch coordinated cyber criminals to increasingly AI. Is it a matter of anomalies.
AI could be exploited and could also leverage machine-learning and artificial intelligence and machine-learning technologies. The consistent threat is very real. In 2017, a Vietnamese security group claims to have created a mask that can learn incrementally from data scientists providing frequent feedback.
We predict AI-powered attacks become the keys for email scams. For example, imagine a fake AI-created phishing using AI to aid assaults. Unlike humans, machines can do it in seconds and continue even after business hours. They have gotten smarter about phishing and other human activities such as opening doors. Closer to home, AI will expose the mistakes they’ve made regarding human activities.
Automated systems powered by AI could also be used to evade detection by infrequently trained machine learning engines. This game of cat and machine-learning technology will be an investment in the new year. There will likely be future attacks focused on building robust centers for security breach infringement, but the AI bubble has many experts worried.
In 2019, we will see brute force attacks powered by AI. The attack requires automating out all the less interesting stuff so attackers can focus their resources on such attractive, data-rich environments, with no downtime to these utilities. More corporate attacks based on math will propel this trend forward.
Prediction #2: THE AI SECURITY SOFTWARE HAS MALICIOUS INTENT Skynet is becoming broader and more expansive. To combat this, organizations have turned to the promise of big data, artificial intelligence (AI), and machine learning. Automated systems powered by AI could help people better understand the tradeoffs involved when they give up personal information in their malicious software.
The fragility of some AI technologies will become the picklock that opens a much larger door. Certain algorithms may be too late. 2019 will demonstrate a lot of the “AI Winter” of 1969, in which Congress cut funding as results lagged behind lofty expectations. AI will bolster security in 2019 to a total of $206.2 billion, up from $175.8 billion in 2016, down to $14 billion by 2025.
The buzz for cybersecurity AI is expected to grow in popularity. As the report notes, the pure-play AI security story also has a dark side — they will start scamming you. In addition, certain algorithms may be too complex to understand what is driving a specific set of security firm activities that are popping up in Cyber Town, USA.
AI start-ups are going to exploit the growth of attacks. Analytics solutions will extort companies with 1,000 or more slippery endpoints. Based on developments we are seeing, this change will come as all teams recognize that cybersecurity AI in the purest sense is nonexistent, and we will continue raging.
Prediction #3: CLOUD WILL SLIP OUT INTO THE WILD Cloud adoption will begin to expand the world (though many dispute this story). By default, cloud is sensitive data. Also, the internet. In 2019, attackers will hold the internet hostage on a computer disc with Internet written on tape in sharpie.Cloud adoption is game-changing in the threat equation. Many of the tried and true attacks of five years ago don’t work very well in the cloud. Organizations are rapidly shifting content to the cloud, therefore we predict a shortfall of 3.5 million cyber threats that demonstrates a real demand for these easy pickings.
Organizations will struggle to manipulate public cloud and will experience a massive security priority for 2019. Emerging technologies used to protect the cloud not only help capture the big picture but also are less effective at mitigating. Cloud and DevOps teams’ security experts are worried.
Prediction #4: CRIMINALS GROW MORE CONFIDENT IN DEMANDING THAT RISKS INVOLVE THE CLOUD Cyber criminals will use big-scale platforms to create instead of just one, five top-notch threats in today’s landscape. Such threats would be very difficult for hackers. Attacks are usually centered on the use of one threat. Bad actors concentrate their efforts on iterating and evolving one threat at a time for effectiveness and evasion.
With an attack surface of automated prevention methods, like embedded human microchips, for example, attackers will generate new threats such as AWS and Azure. Large-scale data breaches will be attributed to misconfigured Amazon S3 buckets. This is clearly not the fault of AWS. IDG, for example, calls 2019 “a seminal year” on the criminal to-do list, since criminals can silently steal thousands of open buckets and credentials.
Still, I make a brilliant, contrarian, and very accurate prediction: You might lose the data. There will be surprises, too, says Captain Obvious.
Prediction #5: IOT-POWERED DISTRIBUTED DENIAL OF CAT The security breaches will be IoT. There is an ever-increasing probability that these devices make their vulnerabilities. The Future often uses an IoT botnet, which runs the entire network. In one example, an attacker could compromise or alter a chip or add source code to avoid or delay botnet takedowns. Another challenge is the newest form of an attack that combines card enumeration with smart gadgets, from plugs to TVs, coffee makers. In transportation, data has been accused of sneaking into a site connected to traffic lights. With IoT growth posing huge unknown risks to enterprises with the internet, which runs entirely in memory without effective mitigation, this tactic works. Refrigerators and washing malware will be undetected.
“I think the big innovation is in best practice standards for IoT” — Damon Ponemon, Vice President of Technology to Detect Evil.
Prediction #6: EVOLVING DEFINITIONS OF PRIVACY This year we highlighted privacy, finally, due to the European Union’s mid-2018 implementation of the internet. Nearly every nation has not been able to settle on a standard of constant privacy, which will continue to exacerbate in 2019. Singapore and India are consulting to adopt breach notification regimes, while Australia has already enforced GDPR-like legislation due to lack of attribution and accountability.
The Data Protection legislative and regulatory environment will become the de facto method for spreading malicious scripts directly on targeted subjects and organizations. The U.S. government will give birth to more advanced technology and employee training in order to distribute it quickly and surreptitiously to malware. Congress is already working on an RDP option.
“Managing privacy will become a huge priority for the C-suite and board” — Prasad Woodridge, More Compliance Officer
In 2019, black hat hackers will penetrate critical aspects of GDPR to become broadly deployed threats. The internet itself is ripe for the taking by someone with PCI or SOX. Well-crafted emails designed to avoid detection are likely to be life-threatening; however, we’re unlikely to see upticks in legislative and regulatory activity. With this in mind, even an organization that erased event logs and backups to avoid investigation will have to decide whether something that happened was supposed to happen.
Prediction #7: MALWARE BOTS TEND TO SPREAD CHAOS In 2019, we predict malware. Attackers will undoubtedly continue to evolve their tactics to steal credit cards and credentials. Malware authors will turn to either more targeted attacks using embedded chips on printers or use ransom techniques, including the manipulation of memory space and adding arbitrary code. Because the attack landscape continues to evade AI-based solutions, attackers will be able to use this naivete to their advantage and pull off a major attack with ransomware. There is a race to get the most troubling widespread ransomware-as-a-service. These attacks often have costs far beyond the ransom itself. There is evidence that the author of GandCrab is already working on their marketing campaign to extort companies by threatening the data lakes. What can we do? What is permissible? What if we are missing the reasons synergic threats are becoming more than just real people? We will continue to falter.
In 2019, we’ll see the emergence of new threats such as cryptocurrency and the overwhelming demand for the large amounts of computing. Inevitably, there will be a battle as to which is more convenient than ransomware. An example is WaterMiner, which simply stops its mining process when the consumer is just about die.
Prediction #8: IDENTITY SUPPLY CHAIN MEETS BLOCKCHAIN In 2019, cyber activities collide with physical worlds. New techniques will use attacks on critical infrastructure of blockchain, with a touch of “Huh?”
In 2019, the next vector in attacks will continue — privileged accounts, because bots. Identity is a fundamental shift in risk. Identity providers are exposed to an increase in the Open Authorization standard. Access management solutions are actually the intended malware — one was launched by Fancy Bear, the Russian cyber espionage.
“Edge device” breaches will push the security industry to finally solve the username/password problem. The ineffective username/password conundrum has plagued consumers and businesses for years. AI could be used in the hope that 2019 will see a more concerted effort to replace passwords altogether.
A ‘zero trust’ approach requires an organization and AI-enabled malware. This ‘zero trust’ approach can open up several attack vectors. First, it transfers risk and no one can rest easy. Second, organizations end up creating their own criminal activities. The embrace of Google’s BeyondCorp is a strategic guess by taking intelligence, which will become more clear across the field.
Prediction #9: NATION-STATE ATTACKS WILL CAUSE THE U.N. TO COME TO SOME CONSENSUS 2019 might just be the toughest in the United States to date. While a direct cyberwar is not on the horizon, a nation-state will launch a “Fire Sale” attack: electronics on fire. You may remember the fictional concept of a “fire sale” attack from the 4th Die Hard movie, in which a terrorist demonstrated this. Governments will be fed a false sense of security intelligence from tapped infected machines. Nation-states have launched huge distributed denial of services, Bitcoin mixers, and counter-antimalware services. These attacks mean governments are deeply suspicious of each threat actors’ criminal groups.
Brazil recently passed new process-injections and erased event logs to aid trade wars. North Korea, meanwhile, has allegedly attacked public and privacy needs. We are looking forward to seeing a steady increase in Iranian attackers that will continue to fall further and further behind in competency and integrity.
Prediction #10: THE PUBLIC CLOUD WILL EXPERIENCE A MASSIVE SECURITY ATTACK The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud will experience a massive security attack
The worldwide public cloud services market is still taking shape, with many brands still looking to develop weapons in the creation of malicious executables.
</description>
            <atom:content type="html"><![CDATA[<p><em>Fed up with ridiculous infosec predictions for the upcoming year, I decided to aggregate them all and use the power of Markov Chains to generate my own list. What follows is the result, very lightly edited solely for readability. You can <a href="/blog/posts/2018-cybersecurity-predictions/">see last year’s edition here</a>.</em></p>
<p><img src="/blog/img/bad-cyberart-14.jpg" alt="An image of a digital chain"></p>
<p>In 2019, we predict 2019. Cyber espionage, cybercriminals — in 2019, they materialize. What if this is a dangerous reality? For example, consider how the world feels sometimes. According to Ponemon, security leaders around the world feel sometimes.</p>
<p>During 2019 we expect to see an increase in cyber space. The prospects are understatement. If a sophisticated attack involves not one but five top-notch threats synergistically working together, the defense panorama could become very blurry. Security experts have a recipe for disaster.</p>
<p>We predict that criminals will further focus their efforts injudiciously, ignoring the lower severity vulnerabilities with known exploits in favor of largely academic high severity vulnerabilities. In 2019, we will see a version of this fictional attacker.</p>
<p>The purchase of cybersecurity has led to expanding attacks that will become more sophisticated in 2019 and beyond. We will continue to influence societal expectations on security, which will trickle down to companies through hundreds of thousands of vulnerable and easy targets for attackers to profit. Driven by many falling victim to feature misconceptions, more will become key targets. Cyber products that provide consolidated feature sets have a hard time understanding each customer’s specific pain points and the bad guys know this.</p>
<p>In 2019, even more high-profile breaches will push the security and privacy, finally. Security is argued about until we die. That’s a particularly terrifying threat.</p>
<hr>
<h2 id="prediction-1-ai-techniques-attacks-will-result-hackers">Prediction #1: AI TECHNIQUES: ATTACKS WILL RESULT, HACKERS</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-15.gif" alt="Gif of robot saying 'I will destroy humans'">
<p>In this day and age of big data, artificial intelligence is the next weapon. The gold standard in hacking efficiency, weaponized AI offers attackers unparalleled insight into what, when, and where to strike. Attempts to weaponize AI offers attackers actual attacks. Systems could launch coordinated cyber criminals to increasingly AI. Is it a matter of anomalies.</p>
<p>AI could be exploited and could also leverage machine-learning and artificial intelligence and machine-learning technologies. The consistent threat is very real. In 2017, a Vietnamese security group claims to have created a mask that can learn incrementally from data scientists providing frequent feedback.</p>
<p>We predict AI-powered attacks become the keys for email scams. For example, imagine a fake AI-created phishing using AI to aid assaults. Unlike humans, machines can do it in seconds and continue even after business hours. They have gotten smarter about phishing and other human activities such as opening doors. Closer to home, AI will expose the mistakes they’ve made regarding human activities.</p>
<p>Automated systems powered by AI could also be used to evade detection by infrequently trained machine learning engines. This game of cat and machine-learning technology will be an investment in the new year. There will likely be future attacks focused on building robust centers for security breach infringement, but the AI bubble has many experts worried.</p>
<p>In 2019, we will see brute force attacks powered by AI. The attack requires automating out all the less interesting stuff so attackers can focus their resources on such attractive, data-rich environments, with no downtime to these utilities. More corporate attacks based on math will propel this trend forward.</p>
<hr>
<h2 id="prediction-2-the-ai-security-software-has-malicious-intent">Prediction #2: THE AI SECURITY SOFTWARE HAS MALICIOUS INTENT</h2>
<p>Skynet is becoming broader and more expansive. To combat this, organizations have turned to the promise of big data, artificial intelligence (AI), and machine learning. Automated systems powered by AI could help people better understand the tradeoffs involved when they give up personal information in their malicious software.</p>
<p>The fragility of some AI technologies will become the picklock that opens a much larger door. Certain algorithms may be too late. 2019 will demonstrate a lot of the “AI Winter” of 1969, in which Congress cut funding as results lagged behind lofty expectations. AI will bolster security in 2019 to a total of $206.2 billion, up from $175.8 billion in 2016, down to $14 billion by 2025.</p>
<p>The buzz for cybersecurity AI is expected to grow in popularity. As the report notes, the pure-play AI security story also has a dark side — they will start scamming you. In addition, certain algorithms may be too complex to understand what is driving a specific set of security firm activities that are popping up in Cyber Town, USA.</p>
<p>AI start-ups are going to exploit the growth of attacks. Analytics solutions will extort companies with 1,000 or more slippery endpoints. Based on developments we are seeing, this change will come as all teams recognize that cybersecurity AI in the purest sense is nonexistent, and we will continue raging.</p>
<hr>
<h2 id="prediction-3-cloud-will-slip-out-into-the-wild">Prediction #3: CLOUD WILL SLIP OUT INTO THE WILD</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/the-internet-was-a-mistake.gif" alt="Gif saying 'The internet was a mistake'">
Cloud adoption will begin to expand the world (though many dispute this story). By default, cloud is sensitive data. Also, the internet. In 2019, attackers will hold the internet hostage on a computer disc with Internet written on tape in sharpie.
<p>Cloud adoption is game-changing in the threat equation. Many of the tried and true attacks of five years ago don’t work very well in the cloud. Organizations are rapidly shifting content to the cloud, therefore we predict a shortfall of 3.5 million cyber threats that demonstrates a real demand for these easy pickings.</p>
<p>Organizations will struggle to manipulate public cloud and will experience a massive security priority for 2019. Emerging technologies used to protect the cloud not only help capture the big picture but also are less effective at mitigating. Cloud and DevOps teams’ security experts are worried.</p>
<hr>
<h2 id="prediction-4-criminals-grow-more-confident-in-demanding-that-risks-involve-the-cloud">Prediction #4: CRIMINALS GROW MORE CONFIDENT IN DEMANDING THAT RISKS INVOLVE THE CLOUD</h2>
<p>Cyber criminals will use big-scale platforms to create instead of just one, five top-notch threats in today’s landscape. Such threats would be very difficult for hackers. Attacks are usually centered on the use of one threat. Bad actors concentrate their efforts on iterating and evolving one threat at a time for effectiveness and evasion.</p>
<p>With an attack surface of automated prevention methods, like embedded human microchips, for example, attackers will generate new threats such as AWS and Azure. Large-scale data breaches will be attributed to misconfigured Amazon S3 buckets. This is clearly not the fault of AWS. IDG, for example, calls 2019 “a seminal year” on the criminal to-do list, since criminals can silently steal thousands of open buckets and credentials.</p>
<p>Still, I make a brilliant, contrarian, and very accurate prediction: You might lose the data. There will be surprises, too, says Captain Obvious.</p>
<hr>
<h2 id="prediction-5-iot-powered-distributed-denial-of-cat">Prediction #5: IOT-POWERED DISTRIBUTED DENIAL OF CAT</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/drone-battle.gif" alt="A gif of a drone preparing for battle">
The security breaches will be IoT. There is an ever-increasing probability that these devices make their vulnerabilities. The Future often uses an IoT botnet, which runs the entire network. In one example, an attacker could compromise or alter a chip or add source code to avoid or delay botnet takedowns.
<p>Another challenge is the newest form of an attack that combines card enumeration with smart gadgets, from plugs to TVs, coffee makers. In transportation, data has been accused of sneaking into a site connected to traffic lights. With IoT growth posing huge unknown risks to enterprises with the internet, which runs entirely in memory without effective mitigation, this tactic works. Refrigerators and washing malware will be undetected.</p>
<blockquote>
<p>“I think the big innovation is in best practice standards for IoT” — Damon Ponemon, Vice President of Technology to Detect Evil.</p>
</blockquote>
<hr>
<h2 id="prediction-6-evolving-definitions-of-privacy">Prediction #6: EVOLVING DEFINITIONS OF PRIVACY</h2>
<p>This year we highlighted privacy, finally, due to the European Union’s mid-2018 implementation of the internet. Nearly every nation has not been able to settle on a standard of constant privacy, which will continue to exacerbate in 2019. Singapore and India are consulting to adopt breach notification regimes, while Australia has already enforced GDPR-like legislation due to lack of attribution and accountability.</p>
<p>The Data Protection legislative and regulatory environment will become the de facto method for spreading malicious scripts directly on targeted subjects and organizations. The U.S. government will give birth to more advanced technology and employee training in order to distribute it quickly and surreptitiously to malware. Congress is already working on an RDP option.</p>
<blockquote>
<p>“Managing privacy will become a huge priority for the C-suite and board” — Prasad Woodridge, More Compliance Officer</p>
</blockquote>
<p>In 2019, black hat hackers will penetrate critical aspects of GDPR to become broadly deployed threats. The internet itself is ripe for the taking by someone with PCI or SOX. Well-crafted emails designed to avoid detection are likely to be life-threatening; however, we’re unlikely to see upticks in legislative and regulatory activity. With this in mind, even an organization that erased event logs and backups to avoid investigation will have to decide whether something that happened was supposed to happen.</p>
<hr>
<h2 id="prediction-7-malware-bots-tend-to-spread-chaos">Prediction #7: MALWARE BOTS TEND TO SPREAD CHAOS</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bad-cyberart-16.gif" alt="Gif of someone typing in hacker-y things">
In 2019, we predict malware. Attackers will undoubtedly continue to evolve their tactics to steal credit cards and credentials. Malware authors will turn to either more targeted attacks using embedded chips on printers or use ransom techniques, including the manipulation of memory space and adding arbitrary code. Because the attack landscape continues to evade AI-based solutions, attackers will be able to use this naivete to their advantage and pull off a major attack with ransomware.
<p>There is a race to get the most troubling widespread ransomware-as-a-service. These attacks often have costs far beyond the ransom itself. There is evidence that the author of GandCrab is already working on their marketing campaign to extort companies by threatening the data lakes. What can we do? What is permissible? What if we are missing the reasons synergic threats are becoming more than just real people? We will continue to falter.</p>
<p>In 2019, we’ll see the emergence of new threats such as cryptocurrency and the overwhelming demand for the large amounts of computing. Inevitably, there will be a battle as to which is more convenient than ransomware. An example is WaterMiner, which simply stops its mining process when the consumer is just about die.</p>
<hr>
<h2 id="prediction-8-identity-supply-chain-meets-blockchain">Prediction #8: IDENTITY SUPPLY CHAIN MEETS BLOCKCHAIN</h2>
<p>In 2019, cyber activities collide with physical worlds. New techniques will use attacks on critical infrastructure of blockchain, with a touch of “Huh?”</p>
<p>In 2019, the next vector in attacks will continue — privileged accounts, because bots. Identity is a fundamental shift in risk. Identity providers are exposed to an increase in the Open Authorization standard. Access management solutions are actually the intended malware — one was launched by Fancy Bear, the Russian cyber espionage.</p>
<p>“Edge device” breaches will push the security industry to finally solve the username/password problem. The ineffective username/password conundrum has plagued consumers and businesses for years. AI could be used in the hope that 2019 will see a more concerted effort to replace passwords altogether.</p>
<p>A ‘zero trust’ approach requires an organization and AI-enabled malware. This ‘zero trust’ approach can open up several attack vectors. First, it transfers risk and no one can rest easy. Second, organizations end up creating their own criminal activities. The embrace of Google’s BeyondCorp is a strategic guess by taking intelligence, which will become more clear across the field.</p>
<hr>
<h2 id="prediction-9-nation-state-attacks-will-cause-the-un-to-come-to-some-consensus">Prediction #9: NATION-STATE ATTACKS WILL CAUSE THE U.N. TO COME TO SOME CONSENSUS</h2>
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/computer-fire.gif" alt="Gif of someone setting computers on fire">
2019 might just be the toughest in the United States to date. While a direct cyberwar is not on the horizon, a nation-state will launch a “Fire Sale” attack: electronics on fire. You may remember the fictional concept of a “fire sale” attack from the 4th Die Hard movie, in which a terrorist demonstrated this.
<p>Governments will be fed a false sense of security intelligence from tapped infected machines. Nation-states have launched huge distributed denial of services, Bitcoin mixers, and counter-antimalware services. These attacks mean governments are deeply suspicious of each threat actors’ criminal groups.</p>
<p>Brazil recently passed new process-injections and erased event logs to aid trade wars. North Korea, meanwhile, has allegedly attacked public and privacy needs. We are looking forward to seeing a steady increase in Iranian attackers that will continue to fall further and further behind in competency and integrity.</p>
<hr>
<h2 id="prediction-10-the-public-cloud-will-experience-a-massive-security-attack">Prediction #10: THE PUBLIC CLOUD WILL EXPERIENCE A MASSIVE SECURITY ATTACK</h2>
<p>The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud will experience a massive security attack<br>
The worldwide public cloud services market is still taking shape, with many brands still looking to develop weapons in the creation of malicious executables.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Analyzing the Black Hat USA 2018 Business Hall</title>
            <link>https://kellyshortridge.com/blog/posts/analyzing-blackhatusa-business-hall-2018/</link>
            <pubDate>Mon, 06 Aug 2018 18:40:42 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/analyzing-blackhatusa-business-hall-2018/</guid>
            <description>What type of vendors are showing themselves off in the Business Hall? Are they mostly startups? Not quite “mostly,” but 46% of vendors in the hall are indeed VC-backed companies at varying stages of maturity. Privately held companies are a non-trivial segment at 17%, and there are 30 Private Equity-owned companies making up 12% of the hall. There are also 12 companies who are acquisitions, primarily those who were acquired within the past year; booths require being booked far in advance, leaving insufficient time to be assimilated into their acquiror’s booth (if so desired). Nearly half (44%) of VC-backed companies were funded within the past year, shooting up to 82% within the past two years and 92% in the past three. Only 10 companies have not been funded in the past three years, and one could hypothesize that they are considered long in the tooth by their VCs.
How many VCs are dedicated to investing in infosec? The overwhelming majority (69%) of Venture Capital firms are investors in only one company exhibiting in the BH USA 2018 Business Hall. Out of 350 total investors in 118 VC-backed companies, 46% led at least one deal — and of those 46%, 71% only led one deal. I’ve long held a hypothesis that the list of “dedicated” investors in infosec is actually quite small, and that the incredible amount of deal volume we presently see is driven by one-off investors who want to dip their toe in the infosec waters, having seen blazing, FUD-ridden headlines declaring its relevance. The data seem to support this hypothesis.
There are 21 VC firms who participated in four deals or more (i.e. invested in four or more companies exhibiting). Thus, I consider them the most “dedicated.” Data Collective, while following in four deals, did not lead any deals, meaning they just miss the cut to be part of the “Top 20 VCs”:
You can explore all of the Venture Capital firms, ordered by number of companies in the Business Hall in which they’ve invested by visiting the GitHub repo, or scrolling to the very bottom of this post.
What venture stage are vendors &amp; how much capital are they raising? As you might suspect given the steep price for Black Hat booths, there are few seed-stage companies present. The VC-backed vendors are highly concentrated between Series A and Series C (68% of all VC-backed vendors), which reflects general infosec funding trends, as few companies get funded to the late stage, either being acquired or quietly starved for capital. Note, “Venture Round” is a nebulous term and can mean basically anything — from earlier stage to later stage — so it should not be interpreted in the same timeline as the Series rounds.
The size of the funding round naturally grows the later stage the company reaches. Note that there are few data points for Series E and Series F rounds, so don’t read too much into the decline in size at Series F (a hypothesis might be that reaching such an exceedingly late stage means investors’ patience has worn thin).
How many Private Equity firms are backing companies on the floor? There are 30 companies backed by 27 total Private Equity firms in the Business Hall this year. While the majority (81%) are mostly one-off investors (though some are also participants in late-stage VC rounds), there are five PE firms that back two or more companies presenting on the floor:
How fresh is the Innovation City? Innovation City, with 41 “residents,” has a higher concentration of VC-backed companies than the total population (59% vs. 46%), as well as of privately-held companies (34% vs. 17%). What might be surprising is that there are still two acquired companies and one PE-backed company presenting in Innovation City. True to its marketing pitch of being a designated area for early-stage companies, the average amount of the latest raise by Innovation City residents who are VC-backed is $11.4mm, in contrast to the overall average of $27.2mm. 58% of residents most recently raised a Series A or Series B, while only 4% most recently raised a Series C or later (vs. 34% in the general VC-backed Business Hall population). Nearly double (25% vs. 14% in the general VC-backed population) raised a “Venture Round,” which again, being a nebulous term, is troublesome to place in the VC funding timeline.
How U.S.-centric are the vendors at Black Hat, anyway? The answer is pretty U.S.-centric, at 83% of all vendors in the Business Hall (and 86% of all VC-backed vendors). While I know Brexit hasn’t happened yet, I do consider the UK infosec market somewhat distinct from the EU, as there’s a decently active infosec startup scene there and an emergent VC ecosystem, too. There are 13 (5%) EU-based vendors, but less than half of those (6) are VC-backed. The UK is slightly more VC-weighted, with 6 VC-backed companies out of 10 companies total (4% of all companies). Although funding activity is exceptionally strong for Israel-based security startups, there are only 7 Israeli vendors (3%) in the hall this year, 3 of which are VC-backed startups. Given the stark contrast with the funding volume into Silicon Wadi I’ve personally witnessed this year, I suspect it’s either that the companies are too young to have reserved a booth in time for 2018, or that quite a few are playing the classic game of listing HQ in the U.S. so as not to deter customers or investors. Within the U.S., California makes up nearly half (47%) of all U.S.-based vendors presenting, and over half (52%) of VC-backed startups in the Business Hall. After California, the usual suspects of Massachusetts, New York, and the D.C. area (Virginia/Maryland) round out the bulk of companies, along with growing areas of VC interest like Colorado and Texas.
How many companies have some form of name collision? We’re all familiar with the infosec startup tropes, so I decided to see whether the data support it. Unsurprisingly, many companies (13%!) have some form of “Security” or “Secure” in their name. Net (which can include Networks), and Cy (which can include Cyber) present a solid showing as well. More recent tropes, such as “Dark” or “Deep” are less prevalent than I assumed — in fact, Deep only lurks in two vendors’ names.
A few notes on the data Vendors were retrieved from the Black Hat 2018 Business Hall Floorplan, and exclude any federal agencies, educational organizations, or nonprofits. I also excluded any companies in the Career Zone, as they are aiming to recruit security talent rather than sell products or services — for example, I presume Major League Baseball is not selling the latest Threat Intelligence Automation on the Blockchain.
Care to explore the data yourself? The raw table is available on GitHub, as is the general project repo (potentially more to come later!).
Appendix Exactly which VCs funded exactly how many companies exhibiting in the Black Hat USA Business Hall this year? </description>
            <atom:content type="html"><![CDATA[<h2 id="what-type-of-vendors-are-showing-themselves-off-in-the-business-hall-are-they-mostly-startups">What type of vendors are showing themselves off in the Business Hall? Are they mostly startups?</h2>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%;" src="/blog/img/bhusa2018/vendors-type.png" alt="Chart of number of vendors by funding type for the Black Hat USA 2018 Vendor Hall">
<p>Not quite “mostly,” but 46% of vendors in the hall are indeed VC-backed companies at varying stages of maturity. Privately held companies are a non-trivial segment at 17%, and there are 30 Private Equity-owned companies making up 12% of the hall. There are also 12 companies who are acquisitions, primarily those who were acquired within the past year; booths require being booked far in advance, leaving insufficient time to be assimilated into their acquiror’s booth (if so desired).
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2018/vc-by-age.png" alt="Chart of number of VC-backed companies by age of last raise"></p>
<p>Nearly half (44%) of VC-backed companies were funded within the past year, shooting up to 82% within the past two years and 92% in the past three. Only 10 companies have not been funded in the past three years, and one could hypothesize that they are considered long in the tooth by their VCs.</p>
<hr>
<h2 id="how-many-vcs-are-dedicated-to-investing-in-infosec">How many VCs are dedicated to investing in infosec?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2018/fund-numbers.png" alt="Chart of the number of VC funds by companies invested">
The overwhelming majority (69%) of Venture Capital firms are investors in only one company exhibiting in the BH USA 2018 Business Hall. Out of 350 total investors in 118 VC-backed companies, 46% <a href="https://avc.com/2013/09/leading-vs-following/">led</a> at least one deal — and of those 46%, 71% only led one deal.
<p>I’ve long held a hypothesis that the list of “dedicated” investors in infosec is actually quite small, and that the incredible amount of deal volume we presently see is driven by one-off investors who want to dip their toe in the infosec waters, having seen blazing, FUD-ridden headlines declaring its relevance. The data seem to support this hypothesis.</p>
<p>There are 21 VC firms who participated in four deals or more (i.e. invested in four or more companies exhibiting). Thus, I consider them the most “dedicated.” Data Collective, while following in four deals, did not lead any deals, meaning they just miss the cut to be part of the “Top 20 VCs”:</p>
<script src="https://gist.github.com/swagitda/c27200dedbac7090090d7a0a2fe98ba1.js"></script>
<p>You can explore all of the Venture Capital firms, ordered by number of companies in the Business Hall in which they’ve invested by <a href="https://github.com/swagitda/bhusa2018-bizhall/blob/master/vc-analysis/vc-fund-count-with-names.png">visiting the GitHub repo</a>, or scrolling to the very bottom of this post.</p>
<hr>
<h2 id="what-venture-stage-are-vendors--how-much-capital-are-they-raising">What venture stage are vendors &amp; how much capital are they raising?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2018/vc-by-round.png" alt="Chart of the number of VC-backed vendors, by latest round">
As you might suspect given the steep price for Black Hat booths, there are few seed-stage companies present. The VC-backed vendors are highly concentrated between Series A and Series C (68% of all VC-backed vendors), which reflects general infosec funding trends, as few companies get funded to the late stage, either being acquired or quietly starved for capital.
<p>Note, “Venture Round” is a nebulous term and can mean basically anything — from earlier stage to later stage — so it should not be interpreted in the same timeline as the Series rounds.</p>
<p>The size of the funding round naturally grows the later stage the company reaches. Note that there are few data points for Series E and Series F rounds, so don’t read too much into the decline in size at Series F (a hypothesis might be that reaching such an exceedingly late stage means investors’ patience has worn thin).</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%" src="/blog/img/bhusa2018/vc-dollars-by-round.png" alt="Chart of the average size of funding rounds, by stage">
<hr>
<h2 id="how-many-private-equity-firms-are-backing-companies-on-the-floor">How many Private Equity firms are backing companies on the floor?</h2>
<p>There are 30 companies backed by 27 total Private Equity firms in the Business Hall this year. While the majority (81%) are mostly one-off investors (though some are also participants in late-stage VC rounds), there are five PE firms that back two or more companies presenting on the floor:</p>
<script src="https://gist.github.com/swagitda/5115bd90d76dfe7b67029dc3f665c90d.js"></script>
<hr>
<h2 id="how-fresh-is-the-innovation-city">How fresh is the Innovation City?</h2>
<img style="float:right; max-width:50%; padding-left: 10px" src="/blog/img/bhusa2018/innovation-city.png" alt="Chart of Innovation City 'residents,' by type">
Innovation City, with 41 “residents,” has a higher concentration of VC-backed companies than the total population (59% vs. 46%), as well as of privately-held companies (34% vs. 17%). What might be surprising is that there are still two acquired companies and one PE-backed company presenting in Innovation City.
<p>True to its marketing pitch of being a designated area for early-stage companies, the average amount of the latest raise by Innovation City residents who are VC-backed is $11.4mm, in contrast to the overall average of $27.2mm. 58% of residents most recently raised a Series A or Series B, while only 4% most recently raised a Series C or later (vs. 34% in the general VC-backed Business Hall population). Nearly double (25% vs. 14% in the general VC-backed population) raised a “Venture Round,” which again, being a nebulous term, is troublesome to place in the VC funding timeline.</p>
<hr>
<h2 id="how-us-centric-are-the-vendors-at-black-hat-anyway">How U.S.-centric are the vendors at Black Hat, anyway?</h2>
<img style="float:right; max-width:60%; padding-left: 10px" src="/blog/img/bhusa2018/geo-all.png" alt="Chart of all companies, by geography">
The answer is pretty U.S.-centric, at 83% of all vendors in the Business Hall (and 86% of all VC-backed vendors). While I know Brexit hasn’t happened yet, I do consider the UK infosec market somewhat distinct from the EU, as there’s a decently active infosec startup scene there and an emergent VC ecosystem, too. There are 13 (5%) EU-based vendors, but less than half of those (6) are VC-backed. The UK is slightly more VC-weighted, with 6 VC-backed companies out of 10 companies total (4% of all companies).
<img style="float:right; max-width:40%; padding-left: 10px" src="/blog/img/bhusa2018/geo-vc.png" alt="Chart of VC-backed companies, by geography">
Although funding activity is exceptionally strong for Israel-based security startups, there are only 7 Israeli vendors (3%) in the hall this year, 3 of which are VC-backed startups. Given the stark contrast with the funding volume into Silicon Wadi I’ve personally witnessed this year, I suspect it’s either that the companies are too young to have reserved a booth in time for 2018, or that quite a few are playing the classic game of listing HQ in the U.S. so as not to deter customers or investors.
<p>Within the U.S., California makes up nearly half (47%) of all U.S.-based vendors presenting, and over half (52%) of VC-backed startups in the Business Hall. After California, the usual suspects of Massachusetts, New York, and the D.C. area (Virginia/Maryland) round out the bulk of companies, along with growing areas of VC interest like Colorado and Texas.</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%" src="/blog/img/bhusa2018/state-all.png" alt="Chart of all U.S. companies, by state">
<hr>
<h2 id="how-many-companies-have-some-form-of-name-collision">How many companies have some form of name collision?</h2>
<p>We’re all familiar with the infosec startup tropes, so I decided to see whether the data support it. Unsurprisingly, many companies (13%!) have some form of “Security” or “Secure” in their name. Net (which can include Networks), and Cy (which can include Cyber) present a solid showing as well. More recent tropes, such as “Dark” or “Deep” are less prevalent than I assumed — in fact, Deep only lurks in two vendors’ names.</p>
<img style="display:block; margin-right:auto; margin-left:auto; max-width:80%" src="/blog/img/bhusa2018/name-trope.png" alt="Chart of the number of companies, by trope in name">
<hr>
<h2 id="a-few-notes-on-the-data">A few notes on the data</h2>
<p>Vendors were retrieved from the <a href="http://www.expocad.com/host/fx/ubm/18blckh/exfx.html#exhibitors">Black Hat 2018 Business Hall Floorplan</a>, and exclude any federal agencies, educational organizations, or nonprofits. I also excluded any companies in the Career Zone, as they are aiming to recruit security talent rather than sell products or services — for example, I presume Major League Baseball is not selling the latest Threat Intelligence Automation on the Blockchain.</p>
<hr>
<p>Care to explore the data yourself? <a href="https://github.com/swagitda/bhusa2018-bizhall/blob/master/data-table/bhusa18-bizhall-list.md">The raw table is available on GitHub</a>, as is the <a href="https://github.com/swagitda/bhusa2018-bizhall">general project repo</a> (potentially more to come later!).</p>
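<p>As a rough starting point, here is a short Python sketch for pulling the raw Markdown table and tallying vendors by funding type. Both the raw.githubusercontent.com path and the column position below are assumptions; check the table’s actual header row and adjust before trusting any numbers it prints:</p>
<pre><code class="language-python">import urllib.request
from collections import Counter

# Assumed raw mirror of the Markdown table linked above; verify the path first.
RAW_URL = ("https://raw.githubusercontent.com/swagitda/bhusa2018-bizhall/"
           "master/data-table/bhusa18-bizhall-list.md")

lines = urllib.request.urlopen(RAW_URL).read().decode("utf-8").splitlines()

# Markdown table rows start with "|"; drop the header row and the |---| divider.
rows = [
    [cell.strip() for cell in line.strip().strip("|").split("|")]
    for line in lines if line.strip().startswith("|")
][2:]

TYPE_COLUMN = 1  # hypothetical index of the funding-type column; match it to the header

counts = Counter(row[TYPE_COLUMN] for row in rows if len(row) > TYPE_COLUMN)
total = sum(counts.values())
for funding_type, n in counts.most_common():
    print(f"{funding_type}: {n} ({n / total:.0%})")
</code></pre>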
<hr>
<h2 id="appendix">Appendix</h2>
<h3 id="exactly-which-vcs-funded-exactly-how-many-companies-exhibiting-in-the-black-hat-usa-business-hall-this-year">Exactly which VCs funded exactly how many companies exhibiting in the Black Hat USA Business Hall this year?</h3>
<img style="display:block; margin-right:auto; margin-left:auto" src="/blog/img/bhusa2018/fund-names-tall.png" alt="List of VCs, sorted by number of business hall vendors funded">
]]></atom:content>
        </item>
        
        <item>
            <title>The Red Pill of Resilience in InfoSec</title>
            <link>https://kellyshortridge.com/blog/posts/red-pill-of-resilience-infosec/</link>
            <pubDate>Wed, 01 Aug 2018 17:26:47 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/red-pill-of-resilience-infosec/</guid>
            <description>
What follows is the full text of my keynote, also available as a video and as slides.
There has been insufficient exploration of the first principles of resilience in the context of information security, despite the term being superficially peppered in our common discourse. Too often, resilience is conflated with robustness — to the detriment of us all.
To state more poetically, through the pen of the notable fantasy author Robert Jordan referencing one of Aesop’s fables, “The oak fought the wind and was broken, the willow bent when it must and survived.” To speak of protection without resilience is to believe you can always beat the wind. To speak of deterrence without resilience is to believe you can deter the wind from blowing at all.
There are times when attacks may be deterred or halted before damage is wrought, but one cannot always predict which way the wind will blow. We cannot predict the adversary we will face and the amount of resources they are willing to expend to compromise our systems.
Protection or deterrence can serve as a valuable tactic to grant some level of peace, but resilience — the ability to absorb change and survive — is the foundation on which survival rests. Again, far more poetically than I could ever say, Generalissimo Chiang Kai-shek of the Republic of China advised, “The more you sweat in peace, the less you bleed in war.” Attempt the peace, but assure you can still survive the war.
But the discourse about resilience in information security has, to date, been far from poetic. Rather, it’s been sprinkled as a buzzword, floating along at a shallow level primarily in discussions of cyber insurance and how companies can transfer risk by buying policies. I believe this is a waste of its potentially torrential conceptual power. Resilience is ultimately about accepting reality and building a defensive strategy around reality.
As in the Matrix, resilience serves as the red pill — once you accept the reality, it’s impossible to go back to a strategy that ignores it. My goal in this talk is to show you how digging into this heart of resilience can drive a paradigm shift in how we architect security strategy (and I don’t throw the term “paradigm shift” around lightly).
I’ll first explore why the time for the resilience paradigm is nigh, and how our grief as an industry has led us to this point. I’ll next briefly share the etymology of resilience. Its application to information security is one of the newest, and there exists a rich history of its use in other domains — a history worth exploring to see if there are principles and findings to be shared.
Information security is not the only complex system, and ecological systems dealing with climate change and urban areas dealing with natural disasters represent analogical systems in which dynamics are nearly impossible to predict, and the number of interrelating factors is prohibitive to enumerate.
This cross-domain research presents a concept of resilience based on robustness, adaptability, and transformability. I’ll cover these three concepts in detail so we can understand how they fit under the umbrella of resilience. I’ll expand on this cross-domain lens, and it will serve as the one through which I’ll examine what resilience means in information security.
I am humbly attempting to define and establish a notion around what resilience means for information security — both in an intellectual and practical sense. I would be shocked if all of my thinking is on the mark. More than anything, I desperately want to encourage a discussion of first principles around resilience, and to begin a fruitful conversation around how practical implementations of resilience ideology for defensive security should look. I believe we have a fighting chance.
Stages of Grief in Information Security But first, why is there rising talk about the term “resilience,” despite the industry not having solidly established what it means? I propose that it is the result of the final stage of grief — acceptance — of the fact that there is not, and likely will not ever be, such a thing as an un-hackable organization or system.
Over the past twenty years, the infosec industry has grieved the fact that companies are ever-vulnerable to attack. Very little can be done against the latest exploit or attack vector that is currently unpublished and unknown. The industry has not fully coped with this grief. Like most grief, it isn’t a linear process, and these cycles have ebbed and flowed at various times in the industry’s history.
The first stage of grief is denial — clinging to a false, preferable reality. This manifested as companies being hesitant to deploy security solutions at all, believing that they weren’t truly at risk.
Anger — recognizing the denial cannot continue, and becoming frustrated — seeing security as an unwanted necessity. This manifested in harsh penalties and legislation, prosecuting and punishing vulnerability researchers — even ones doing work for free or disclosing responsibly.
Bargaining — the hope that the cause of grief can be avoided — resulted in the explosion of new security tools in an attempt to stop the problem.
Depression — despair at the recognition of the predicament — led to the refrain of more recent years in the vein of, “You’re going to get hacked, there’s nothing you can do about it.”
Finally, acceptance — the stage into which I believe we are now transitioning, understanding that there is an inevitability of successful attack, but recognition that there is an ability to prepare and that not all is hopeless.
Unfortunately, I fear the bargaining stage has set us up for a challenging road to implementing acceptance. The explosion of security tools in an attempt to avoid the problem has resulted in an untenable market for lemons in which tools and services are prescribed regardless of real need.
Fear, uncertainty, and doubt (FUD) specifically preys on this desperation to bargain, and its use by marketing departments is little different than selling hope of a cure to those who are chronically ill and in their own bargaining stage of grief through alternative, disproved methods. More simply put, the bargaining stage is the demand for which snake oil is the supply.
The resulting depression is unfortunately not the antidote for snake oil — security nihilism is an inaccurate conclusion and does little to incentivize practitioners to pursue more resilient strategies than unproven ones. Therefore, in this blossoming acceptance phase, it is important we have a conversation about what actually works — and to begin, I’d like to delve into what resilience has meant before we as an industry began to espouse it.
Etymology of Resilience Up until the early 19th century, the primary meaning of resilience was to “rebound.” Its first use in the context of engineering was in 1858, to imply strength and ductility, or a material’s ability to stretch under tensile stress.[1] The abstraction from this physical characteristic — the time it takes for a system to return to a pre-determined, single equilibrium — is the one which has persisted in the common understanding of resilience.
However, in the 1970s, resilience began being used in psychology — understood as a process of changing current behaviors to cope with an adverse condition, for reverting to a prior psychological state post-incident is unrealistic and can actually represent unhealthy coping strategies. The 1970s also saw the beginning of resilience’s use in ecology, and over time the concept gradually expanded into the social sciences and into the coupling of the two domains: socio-ecological systems.
Most recently, it began to be applied to climate change adaptation in the early 2010s, including the natural disaster risk management space, due to increases in natural disasters that by all evidence are caused by climate change.
What does resilience mean for complex systems? A complex system is one in which many of its underlying components interact with each other, and one in which it is very difficult to predict behavior. More simply put, it is a system with non-linear activity in the aggregate. Examples of complex systems in our daily lives include our universe, our planet’s climate, our cities, our brains, and even living cells.
Information security is also a complex system. Defensively speaking, it is plagued by an inability to predict attacker actions, and it also consists of highly interconnected, dynamic relationships. Both sides of the security equation — defenders and attackers — are human. But there are additional relationships beyond the direct conflict, including users, governments, software vendors, service providers, and so forth. To unfurl these relationships and attempt to fit them into a predictive model is very simply prohibitive.
The first application of resilience through the lens of complex systems was by C.S. Holling in 1973, an ecologist who was one of the founders of ecological economics. Ecological resilience, he said, is measured by the amount of change that could be absorbed before the system’s underlying structure changes.[2] He asserted that an ecological system can be highly resilient, but also exhibit a high degree of instability — and, in fact, that the proper reaction of an ecological system was to continually adapt, rather than attempt to return to a static equilibrium.[1]
For example, eastern North American forests were once full of chestnut trees until a chestnut blight in the first half of the 20th century wiped them out.[3] However, oak and hickory trees began spreading in its stead. The forests changed in appearance and composition, but still survived as forests.
Evolutionary resilience, borne from analyzing socio-ecological systems, operates under the assumption of complex systems that co-evolve, focusing on adaptation and transformation. Rejecting the idea of thresholds within which a system should fluctuate, it instead suggests multiple levels of controls and the ability to adapt the status quo by reorganizing or regenerating around the change, thereby creating a new status quo.
For example, communities can diversify their agricultural landscapes and production systems, designating some areas for soil conservation and organic agriculture while promoting multicropping in others. They can protect some forested areas while designating others for the community and focusing reforestation efforts there.
This notion of evolutionary resilience can be summarized as consisting of three central characteristics: robustness, adaptability, and transformability.[4] The core notion is that in order for a complex system to be resilient, it must be able to withstand a shock, adjust so as to incur less damage, and be open to challenging previous decisions and goals.
I will be keeping in mind these three core characteristics in the context of information security. With robustness, you must be able to withstand an attack; with adaptability, you must be able to adjust your environment so you incur less damage when attacked; with transformability, you challenge your existing assumptions and decisions, and potentially migrate from existing infrastructure as well as defensive strategies or current methods used in the way you model and understand your threats.
In most domains, robustness proved dominant in defensive strategy, and can be linked to the concept of engineering resilience — a mistake from the evolutionary resilience perspective. For example, barriers are a form of robustness, blocking storm surges. However, as seen recently with Hurricane Harvey, the primary source of damage was flooding from ongoing rain — highlighting the need for adaptability and transformability to incur less damage going forward and rethink your existing strategies.[5]
Instead, evolutionary resilience must also include adaptation and dynamic change towards the goal of preservation, with robustness as an ingredient rather than the sole objective.
Although the expression, “it’s not about the destination, it’s about the journey” is somewhat trite, it’s quite true for resilience. Resilience must be framed as a continuous, evolving, but sustainable process rather than a goal. As ecological economics scholar Peter Timmerman described, resilience is the building of “buffering capacity” into a system, to improve its ability to continually cope going forward.[6]
A focus only on robustness can also lead to a misleading presentation of the problem as one only based on reducing the risk itself. As in the previous example, the problem could be seen only as, “how can we withstand the hurricane?” instead of “we know the hurricane will hit us, how can we change so that it doesn’t damage our community as much?” This highlights the contrast between robustness and the adaptability and transformability characteristics, which accept that the risk will exist, and instead stress the need to reduce the potential damage from the risk and restructure around the risk.
Furthermore, the efforts around attack prediction represent yet another symptom of collective grief — it’s an endeavor to regain the illusion of control. I’ve given another related presentation at length about why attack prediction should not be our goal, so I will not elaborate further here. Suffice to say, prediction was attempted in other complex systems and not only failed miserably, but wasted precious time, money, and brainpower that could have been spent on a pragmatic aim: resilience — the need to design systems under the assumption the negative shock will not be predicted. As eloquently stated by Susan Elizabeth Hough[7],
“A building doesn’t care if an earthquake or shaking was predicted or not; it will withstand the shaking, or it won’t.”
While our industry has come to accept that there are many “unknown unknowns,” our strategy is still one based in hubris — that we can save ourselves in a breach with systems that can withstand unknown risks at unknown times with unknown faces. The evolutionary resilience approach embraces these unknowns, understanding that change is inevitable — ensuring the system survives by absorbing these unknown changes, naturally adapting and reorganizing around this unknown risk, keeping the option open of bearing its own new, unknown face.
Robustness
Robustness involves withstanding and resisting a negative event. Engineering used the concept of resilience only in terms of robustness, measured by how long it takes a system to return to its equilibrium after a shock. However, experiencing an acute stress event implies the normal state was vulnerable to the stress, and that it is thus an “undesirable state to go back to because it would perpetuate this vulnerability.”[8]
In disaster recovery, it’s dangerous to present the problem of flooding, for example, as simply one about excess water. If it’s simply about a physical issue, then solutions are presented that are restricted to just the physical issue. In reality, flooding is a problem because of people, who understandably don’t want to lose their homes or drown. It is unnecessarily restrictive to only consider technical solutions to address the excess water, rather than broader solutions to address the problem in a societal context.[9]
When it is believed that a technical control will help prevent a shock, then it tends to lead to larger potential damage. This is called the safe development paradox.[10] The reason why it’s a paradox is that the stability and presumed safety gained by building a structural mitigation to the problem actually allows risk to accumulate over time due to the false sense of security, leading to a higher chance of catastrophic consequences.
The safe development paradox represents a maladaptive feedback loop — once a structural mitigation is in place, more development happens where it should not.[11] As the development becomes entrenched, the need for structural mitigations becomes even greater — and once the mitigation is in place, more development occurs.
Picture by Thomas Bormans.When fires are suppressed in forests that are fire-adapted, fuel builds up in the form of trees or shrubs.[12] As more time passes without a fire, the probability of a ruinously-intense fire grows, posing more danger to nearby human settlements. This is exactly what happened in the mid-1990s in Florida as urban development expanded into fire-adapted pine forests and enjoyed trees and shrubs in their yards.[12] The result was fires during dry periods that resulted in higher damage than usual, destroying many homes in the process.
In security, implementing technical controls can lead to increased damage as well. Retroactively hardening or patching legacy systems in which vulnerabilities are frequently found can lead to further development on top of these systems, and further entrenchment of those systems within the organization. Feeling like the threat is being prevented leads to development that relies on that assumption — and thus isn’t designed to absorb an attack.
In flood risk management, it’s known as the “levee paradox.”[13] Building a levee can lead to a sense of the problem being prevented, supporting further development and construction on the risky floodplain.[9] For example, less than 3% of people living in Illinois in floodplains with levees in place carry flood insurance.[13] The levee clearly lowers people’s awareness of the risk and ability to respond appropriately to it.[14]
When implementing a robustness control, it’s essential to ensure that it isn’t encouraging further development within a vulnerable system that leaves it open to cataclysmic risk when the control fails. Don’t focus just on resistance in your controls. Doing so will simply “treat the symptoms of bad planning with structures.”[11]
There’s also a lesson here for cyber insurance. Back to the levee paradox, oftentimes areas with levees in place aren’t categorized as official floodplains. This means that homes or offices in those areas don’t have flood-related insurance requirements. The clear lesson I see is: firms offering cyber insurance should consider very carefully whether they exempt companies from certain requirements based on technical controls being in place.
Related to the safe development paradox is the fact that shielding a system from negative exposure means the system will only function in an artificially stable state. In the levee paradox, the levee creates an artificially stable system which can only survive in dry conditions.
Picture by Linus Nylund.Another example is with coral reefs. Marine reserves are maintained to protect coral from the damaging effects of climate change, such as ocean acidification and thermally-induced coral bleaching. However, unprotected coral actually proves more resilient to climate disturbance, since they’ve faced ongoing degradation due to exposure to the stressor and thus recomposed to have more disturbance-tolerant species.[15]
In information security, you must likewise expose your systems to stressors. Even if you’re building something to be internal-only, like APIs, you should design them with the same threat model as an externally-facing service — for instance, making sure you have data sanitization. Test your systems as if they were externally-exposed, to see if they are sufficiently resilient to global stressors. If it would take years to rebuild, reconsider what data you allow within the system.
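To make this concrete, here is a minimal sketch, in Python with hypothetical field names of my own invention, of holding an internal-only API to the same standard as an external one: input is validated against an allow-list and bounded, even though the caller is nominally trusted.

```python
import re

# Hypothetical validation rules for an internal-only API endpoint.
# Treat internal callers as untrusted, exactly as you would external ones.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_.-]{1,64}$")

def validate_internal_request(payload: dict) -> dict:
    """Validate and sanitize a request even though it arrives from 'inside'."""
    username = payload.get("username", "")
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("rejected: username fails allow-list validation")

    # Bound numeric fields rather than trusting the caller.
    limit = int(payload.get("limit", 10))
    limit = max(1, min(limit, 100))

    return {"username": username, "limit": limit}

# Example: an internal batch job is held to the same standard as the public API.
print(validate_internal_request({"username": "svc_reporting", "limit": 5000}))
```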
The overwhelming focus to date in information security has been on robustness — how to withstand or resist an attack, before rebounding back to “normal.” The traditional components of security — firewalls, anti-virus, system hardening — are all components of a robustness strategy. Even when examining how startup security products are marketed, the words “stop,” or the more creative “thwart” are used, implying an improved ability to withstand an attack.
Remediation even plays into this singular focus on robustness. The goal of remediation within security is most often to fix any vulnerabilities and, ideally, to return to “business as usual” by reversing damage from an attack. As we saw with the Equifax breach, there is absolutely no chance of “business as usual” when immutable data is compromised. Penetration tests often solely focus on vulnerabilities and what is needed to fix them, rather than proposing new technologies or architecture that would prove less vulnerable long-term.
Other domains have typically held this singular focus, as well. The engineering-led approaches sought to defy nature itself rather than allow the system to flux with nature. For example, the single equilibrium in flood risk management is to have dry conditions in floodplains so that people can continue living in them. Dikes, storm-surge barriers, and dams are all attempts to withstand a flood, and reflect engineering resilience approaches. Their goal is to keep the same artificial equilibrium, in spite of the water system’s natural behavior.[9]
An engineering-only focus leads to the current challenge of companies needing to constantly stay up to date on patches, but facing many hurdles in doing so — and having this be their primary line of defense. The model that must be embraced is one in which the system can survive even if patches aren’t immediately applied, or users click on phishing links. Your systems must survive even if users download a pdf.zip.exe.
As we saw with coral, without palpable vulnerability through exposure to risk, it is unlikely that resilience will develop.[11] You need to assume that attackers will gain access to a system, and figure out how to reduce the impact. You need to actually practice and embrace disaster recovery, rather than just having a plan.
With all that said, robustness is absolutely important to resilience. But robustness needs to be implemented correctly. Drawing from flood risk management, diversity is a cornerstone of robustness — there need to be layers of controls and a diversity of solutions.[9] For example, there are storm surge barriers, dikes, and dams for flood prevention.
Picture by Michael Discenza.New York City has published guidelines for climate change resiliency which also recommend a combination of controls. For example, for dealing with excess heat, they recommend backup generators to hybrid-power systems, using systems with higher heat tolerance, as well as passive cooling and ventilation through window shades or high-performance glazing.[16]
Diversity of controls helps provide redundancy in uncertain conditions. When complementary measures are in place, it’s less likely that there will be catastrophic damage through the failure of a singular control. But the tradeoff is between efficiency and effectiveness. The easier route with lower upfront costs is to implement a single control. The effective route is to implement layered controls, which may cost more now, but will pay dividends in reduced consequences long-term.
I don’t believe this will be a new concept for many of you. For example, you could deploy a so-called APT-blocking appliance (aka the BlinkyBoxTM) on your network that purports to stop all attacks. However, what then happens when legitimate credentials are used to access a cloud-based service? Or, as we’ve seen recently with Kaspersky, what happens when the APT-blocking-box is hacked by the APT itself to gain access?
Diversity can also be seen through the lens of systems. While we generally think of fragmentation as undesirable, particularly in the context of asset management, there is an argument in its favor. Shared hosting providers can increase correlated risk. If there is a breach at one provider, or a vulnerability in a key component or library used across all applications, then your risk exposure is far greater than it might have been otherwise.
The financial crisis in 2008 serves as a pertinent example of the dangers of ignoring correlated risk. There is something to be said for ensuring you have some level of diversity in your architecture. I am by no means the first to suggest that heterogeneity is important — Dan Geer was fired from @stake in 2003 for making that suggestion, specifically in regards to Microsoft’s hegemony.[17]
This sort of diversity also plays into the efficiency vs. effectiveness tradeoff. However, efficiency can actually lead to a more limited space in which you can operate. Being able to function using fragmented technologies and controls will ensure you can adapt much better to uncertainty. Systems diversity, through this lens, can provide the instability that can ensure survival. I posit that it is up for debate whether it is better to have manageability through uniformity or a limited impact from any one stressor through diversity.
Thinking in decision trees can help ensure robustness through proper diversity of controls. I’ve discussed decision trees towards information security strategy in prior talks, most notably at Black Hat. Briefly, the goal should be to walk through what steps an attacker would take to reach their goal in your organization. Naturally, there is not just one path an attacker will take; you have to consider what path they will take if they encounter a mitigation as well. From there, you can begin determining what cascading controls are necessary in order to raise the cost to the attacker as much as possible.
Raising the cost to the attacker serves as a bridge between robustness and adaptability. As frequently referenced, Dino Dai Zovi said, “Attackers will take the least cost path through an attack graph from their start node to their goal node.”[18] If you can raise attacker cost, you can begin deterring attackers. Attackers will need greater resources and a greater level of sophistication if you do so. One way to raise cost is through robustness with strong, diversified controls. Another way is through adaptability.
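As a toy illustration of the least-cost-path idea (the nodes and cost estimates below are invented for the example, not real measurements), you can model an attack graph as edges with attacker costs and ask which path an economical attacker would prefer, then observe how raising the cost of a cheap edge shifts that path:

```python
import heapq

# A toy attack graph: each edge is (next attacker step, estimated cost to the attacker).
# Nodes, steps, and costs are illustrative assumptions only.
attack_graph = {
    "start":               [("phish_employee", 2), ("exploit_public_api", 6)],
    "phish_employee":      [("harvest_credentials", 1)],
    "exploit_public_api":  [("pivot_to_internal", 3)],
    "harvest_credentials": [("pivot_to_internal", 2)],
    "pivot_to_internal":   [("exfiltrate_data", 4)],
    "exfiltrate_data":     [],
}

def least_cost_path(graph, start, goal):
    """Dijkstra over the attack graph: the path an economical attacker would prefer."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, []):
            heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return float("inf"), []

cost, path = least_cost_path(attack_graph, "start", "exfiltrate_data")
print(cost, " -> ".join(path))
# Raising the cost of a cheap edge (e.g. phishing, via 2FA) changes the preferred path.
```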
Adaptability

Image by Cécile Brasseur
Adaptability concerns reducing the costs of damage incurred and keeping your options open to support transformability. The evolutionary approach assumes that conditions will naturally change over time, and thus that the system itself needs to undergo long-term change. Reversion to the preexisting state is not necessarily desirable, and is often wholly undesirable.
The Intergovernmental Panel on Climate Change (IPCC) highlights the need for realism and warns about the dangers of incremental changes under the guise of adaptation.[19] They specifically recommend questioning underlying assumptions and existing structures, acknowledging the inevitability of macro-level change, and making managed transformation the goal. Pretending you’re adapting while only undergoing incremental change creates a false sense of security — similar to the safe development paradox. You may alleviate symptoms in the short-term, but you can only cultivate resilience through meaningful change towards adapting to reality.
Picture by Debbie Molle.A macro-level example of adaptability is in the realm of climate change. While traditional protection strategies for wildlife at risk due to climate change have been focused on preserving their existing habitats, more recent research proposes alternative approaches. Protected areas are in static locations, and tend to become increasingly isolated, leaving nowhere to go. Preserving a species in such an isolated, at-risk area results in “genetic ghettos.”[20] The species becomes increasingly acclimated to this limited environment, which consequently staves off any potential for evolutionary adaptation.
Instead, wildlife has naturally shifted ranges in response to previous instances of climate change, “tracking” preferable conditions. Recommendations now include helping connect disparate ecosystems together so that wildlife can more easily migrate. For example, in urban environments, a narrow strip of land can be preserved, or another sort of route created, so that populations can connect to a different climatic area.
One can think of existing territories like legacy systems. We try to “preserve” these habitats through patching and retroactive hardening. The adaptive model from nature is to move to new territories that fit preferred conditions — ones in which the species can survive — which is similar, in effect, to moving to new infrastructure or a new mode of operation that is more resilient to the new threat.
As a highly tangible example, consider the case of database queries. The organization’s status quo might be that they use inline PHP code within the HTML of their web apps to perform database queries. If an injection vulnerability is discovered in an instance of this inline PHP code, they’ll fix that instance, but likely not conduct a full review across all of their inline PHP code. In this case, they’d be improving robustness by patching the code, but they’d be returning to their so-called “stable equilibrium.”
In contrast, embracing adaptability would mean the organization should instead remove inline queries, and use one class that accesses the database. This one class would be completely responsible for all sanitization. The result is not only that now you only have to fix issues in one place, but also that developer turnover can be managed — rather than writing their own new inline code, they can use the new library that you’ve built instead.
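The original example is PHP, but the pattern translates to any language. Here is a minimal sketch in Python, assuming a simple SQLite-backed user table, of a single data-access class that owns every query and all sanitization, so a fix or a migration happens in exactly one place:

```python
import sqlite3

class UserStore:
    """Single data-access class: every query and all sanitization live here,
    so a fix (or a migration to a new backend) happens in exactly one place."""

    def __init__(self, path: str = ":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add_user(self, name: str) -> None:
        # Parameterized query: user input is never concatenated into SQL.
        self._db.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self._db.commit()

    def find_user(self, name: str):
        cur = self._db.execute("SELECT id, name FROM users WHERE name = ?", (name,))
        return cur.fetchone()

store = UserStore()
store.add_user("alice'; DROP TABLE users;--")   # treated as data, not SQL
print(store.find_user("alice'; DROP TABLE users;--"))
```

New developers then call the one class rather than writing fresh inline queries, which is exactly the turnover benefit described above.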
Preservation can also lead to misleading indicators of resilience. For example, static measurements such as high coral cover or fish abundance can be poor indicators of coral reef resilience.[21] These measures can just reflect favorable conditions in the past and not accurately reflect when resilience is being eroded.
Likewise, in information security, organizations using a library with no known vulnerabilities may currently treat their security model as complete and not perform continuous revisions. The issue is that the release of new vulnerabilities or attacker methods is not always well-publicized. Instead, organizations should frequently review their security posture to ensure threat models are not based on past favorable conditions, even if the product does not change. As a recent example, you likely had to update your threat models after the release of EternalBlue — yet it had been in private use well before disclosure.
In the realm of climate change, moving members of a species accustomed to warm areas to intermingle with members of the same species living in colder locations can help the cold-adapted population survive long-term.[20] Applications built on legacy systems and libraries that have never been exposed to the outside world, and which suddenly must expose external APIs, tend to fare extremely poorly in security terms.
As mentioned in the example of unprotected coral, a system’s lack of exposure to the threat over its lifespan can leave it in a weakened, unpatched state. Security-wise, you should intermingle your internally-facing systems with your externally-facing systems to ensure they meet the standards of the evolving “global” threat model.
The goal for cities in the face of natural disasters is to maintain a flexible approach in order to properly adapt their response to the changing nature of their risks. If cities do not cultivate a process which assumes uncertainty and surprise in their model, then it’s safe to say they are being wholly unrealistic about the ways of the world.
As defenders, you should test attacker playbooks against yourself to determine how quickly you can adapt to attacker methods. I’m sure many of you wish you could have in-house red teams. For those who do have them, use them to your advantage in this way. I mentioned decision trees earlier as a way to determine which diverse set of controls to use — have your red teams map out the decision trees they created during their course of action to add realistic data into your own trees.
You also must test your ability to absorb the impact of an attack, and minimize the damage. One such test is through failure injection. Chaos Monkey, part of Netflix’s suite of tools called the “Simian Army,” is a service which randomly kills instances in order to test the overall system’s ability to withstand failure. In fact, Chaos Monkey is described as a resiliency tool.
While it was designed with a performance use case in mind, it can be repurposed for security. If your infrastructure is continually fluctuating, with instances killed at random, it makes it exceptionally difficult for attackers to persist. Attackers would have to conduct whatever they needed within an uncertain time frame. This is, of course, not impossible, but it absolutely raises the attacker’s cost and level of skill required.
Netflix’s goal with Chaos Monkey is to “design a cloud architecture where individual components can fail without affecting the availability of the entire system.”[22] Defenders should make it their goal to design a security architecture where individual controls can fail without affecting the security of the entire system. As I mentioned earlier, if your system becomes completely compromised because a user clicks on a malicious link, you must rethink your security architecture.
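As a rough sketch of what chaos-style rotation could look like, the snippet below uses boto3 to terminate one randomly chosen instance; the chaos-eligible tag, the region, and the assumption that an autoscaling group or scheduler replaces the instance are all hypothetical choices for illustration.

```python
import random
import boto3

# Minimal chaos-style rotation sketch. Assumes instances opt in via a
# hypothetical "chaos-eligible=true" tag and that something else replaces them.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos-eligible", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    victim = random.choice(instance_ids)
    # Terminating a random instance both tests recovery and evicts any
    # attacker persistence that lived only on that host.
    ec2.terminate_instances(InstanceIds=[victim])
    print(f"terminated {victim}")
```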
Rethinking security architecture is no easy feat. Defenders are considerably hindered in their ability to be adaptive and flexible. Most commonly, people think of organizational pressure as the key deterrent, but I would argue the infosec industry itself is the primary limiter. Defenders face an overwhelming level of complexity and uncertainty due to the sheer number of security vendors and the fragmentation of the solution space.
I believe some of the challenges can be solved by changing the types of infrastructure that are used to promote adaptability and support transformability. Deploying Chaos Monkey is one such example centered on adaptability, but a grander example that blends into transformability is using a container-based ecosystem.
Many of you have likely heard of the container revolution, though you may not have used containers yourselves. While I’m not a container expert, I’ll explain why containers are a natural fit for evolutionary resilience. Jess Frazelle — “the Keyser Söze of containers” — highlighted in her DevOpsDays talk that containers represent potential salvation from the tradeoff between usability and security.[23] I believe she’s absolutely correct.
As per Microsoft, containers are “a way to wrap up an application in its own isolated box” and are “an isolated, resource-controlled, and portable runtime environment.”[24] A container serves as a layer of abstraction between an application and the host server — which can be of any kind, whether virtualized or bare metal. Because of this, it allows for easier migration to and from underlying infrastructure without having to rebuild applications.
The most common buzzwords I hear for containers are flexibility, portability, and scalability, making them a natural fit for both the adaptability and transformability characteristics. Just as attackers need repeatability and scalability, so do defenders — as well as something that can adapt over time to changes in attacker methods. It cannot be overstated how much a container environment bolsters flexibility and flattens complexity.
When something goes wrong — whether security related or not — the legacy approach makes determining the root cause an effort in untangling and dependency management. With containers, verifiably working systems are available in one neat package, facilitating far less messy remediation. Even implementing them into existing legacy systems can help more easily manage dependencies and licenses.
In the vein of Chaos Monkey, if applications are attacked while running inside a container, all that must be done is kill the container and restart it. There is no need for vulnerability scanning, firewalls, anti-virus, and all the other fragments of the security solution space. You can instead isolate and shut down infected containers as the compromise happens.
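A minimal sketch of that workflow, using the Docker SDK for Python, might look like the following; the container name and image are made up, and a real response would also capture forensic evidence before discarding anything.

```python
import docker

# Hedged sketch: on a compromise signal, kill the suspect container and start
# a fresh replacement from a known-good image. Names and labels are assumptions.
client = docker.from_env()

def replace_container(name: str, image: str) -> None:
    suspect = client.containers.get(name)
    suspect.kill()     # stop the compromised workload immediately
    suspect.remove()   # discard the tainted filesystem layer
    client.containers.run(image, name=name, detach=True)  # rebuild from the trusted image

replace_container("payments-api", "registry.example.com/payments-api:known-good")
```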
Picture by Dustan Woodhouse.You have to ensure adaptability to manage resilience erosion as well. In the case of coral, there are “pulse-type stressors,” or acute stressors, which include tropical cyclones, coral bleaching events, and destructive fishing.[21] But there are also “press-type stressors,” which are stressors occurring over longer periods of time, such as pollution, sedimentation, overfishing, ocean warming, and acidification. With enough of the press-type stressors wearing it down, coral reef resilience is overwhelmed when a pulse-type stressor occurs.Pulse-type stressors in information security can be thought of as new vulnerabilities or a new data breach. Press-type stressors can include large turnover of employees — particularly ones working on large projects — but I would say the most prevalent is complexity. As you add complexity to your applications and systems, it becomes more difficult to test every possible path to compromise, because the paths begin trending towards infinity. If you can no longer test every path because your system is too complex, you have eroded your resilience, a key part of which is flexibility — and will have neither adaptability or transformability.
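A back-of-the-envelope illustration of why those paths trend towards infinity: if an attacker can choose among a handful of options at each hop, the number of distinct paths grows exponentially with the number of components.

```python
# Illustrative only: if an attacker can choose among k options at each of n
# hops (components), the number of distinct paths grows as k**n.
def path_count(options_per_hop: int, hops: int) -> int:
    return options_per_hop ** hops

for hops in (3, 6, 12):
    print(hops, "components ->", path_count(4, hops), "possible paths")
# 3 components -> 64, 6 -> 4096, 12 -> 16,777,216: testing every path quickly
# becomes infeasible as complexity (a press-type stressor) accumulates.
```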
Transformability

Image by Erin Wilson
Transformability can be thought of as challenging your existing assumptions and reorganizing your system.
Returning to our previous example of an organization removing inline PHP database queries in favor of a single class, the latter approach also bolsters transformability. Because it is just one library, it allows for easier migration as-needed depending on how the company’s environment, or the threat environment, changes. You are not leaving your options open when your web app is riddled with inline code. You must be able to review and revise your previous choices — for example by moving to new tools or libraries.
Research from other domains has explored the policy implications of transformability, and how to implement the concept on a practical level. Disaster recovery in urban areas is one of the most well-researched domains in this regard. Given urban areas are dynamic systems, evolutionary resilience suggests that policy should encourage recovery efforts that prioritize re-building the urban area into an improved — or even better, optimized — system.[8] For example, in flood-prone areas, the policy should be to change the location and not build in those areas, while also implementing flood-proof construction for periphery areas.
NZ Defence Force. (February 23, 2011). Christchurch Cathedral.

As a tangible example of transformability, let’s explore Christchurch. In 2011, a devastating magnitude 6.3 earthquake hit Christchurch, the second most populous city in New Zealand at the time. It killed 185 people and damaged over 100,000 houses, with a financial cost to rebuild estimated at over $40 billion.

After the quake, the Canterbury Earthquake Recovery Authority (CERA) designated a new “red zone” throughout the area. This red zone includes damaged or vulnerable land where they believe rebuilding would be “prolonged and uneconomic.”[25] The assessment embraces transformability, rejecting the need to return to the status quo, and instead challenging the assumption that there should be buildings on the land at all.
As security professionals, you should work to identify what the red zones are within your IT systems. Organizations should identify which infrastructure or technologies present the most security challenges — whether through vulnerabilities or ongoing maintenance costs — and put them in the red zone for being phased out.
Defining the components of your own red zone calculation will be subjective, but I submit the following as potential criteria, particularly for systems that are directly exposed to external attack or are entirely public-facing:
Those which expose complex or critical functionality and are accessible publicly
Newly deployed systems or architectures, particularly those developed by inexperienced professionals
Legacy systems using outdated libraries, software, or languages
Systems with no backups, or which can’t easily be restored
Any system with critical personally identifiable information (PII) or immutable data — such as in Equifax’s case
Systems with privileged access to other systems or accounts
Any system that has known or “accepted” risk associated with it
Easily fingerprintable or overly verbose systems
Anything that could be deemed a single point of failure for your organization
Systems that are prohibitive to patch or update

Defining your security red zones isn’t about examining potential vulnerabilities or path to compromise in each of your systems. Instead, you want to identify any assets that fall under the red zone criteria, and attempt to move them out of the zone, into healthier systems.
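A small sketch of how this classification could be encoded, with the criteria reduced to booleans and asset names invented for the example; the point is that red-zone membership can be assessed from attributes alone, without a vulnerability scan.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Asset:
    # Attributes mirror the red zone criteria above; names are illustrative.
    name: str
    publicly_exposed: bool = False
    legacy_stack: bool = False
    holds_critical_pii: bool = False
    privileged_access: bool = False
    restorable_from_backup: bool = True
    single_point_of_failure: bool = False

def red_zone_flags(asset: Asset) -> List[str]:
    """Return which red zone criteria an asset trips, no vulnerability scan required."""
    flags = []
    if asset.publicly_exposed and asset.legacy_stack:
        flags.append("legacy system with public exposure")
    if asset.holds_critical_pii:
        flags.append("holds critical PII or immutable data")
    if asset.privileged_access:
        flags.append("privileged access to other systems")
    if not asset.restorable_from_backup:
        flags.append("cannot easily be restored")
    if asset.single_point_of_failure:
        flags.append("single point of failure")
    return flags

legacy_gateway = Asset("legacy-gateway", publicly_exposed=True, legacy_stack=True,
                       holds_critical_pii=True, privileged_access=True)
print(red_zone_flags(legacy_gateway))  # any non-empty result marks a red zone candidate
```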
For example, an organization might have an existing asset built using legacy technology, which now has to be exposed to public APIs. Furthermore, this asset consumes critical data and also has privileged access to backend APIs. This asset should likely be classified as being in the “red zone,” without actually assessing whether or not there are vulnerabilities. The goal is to move or rebuild the asset outside of the red zone and make it a safer system.
In this case, such measures could include:
Locking down public exposure so that it’s only accessible via VPN
Rebuilding the asset using newer, non-legacy technologies (such as containers)
Avoiding storing critical data on this asset and proxying encrypted data further into the architecture’s core
Introducing security logging and monitoring
Locking down privileged access and enforcing the principle of least privilege

By implementing all, or at least some, of these, the system would no longer be in the red zone. This would be similar to moving a power plant out of a flood plain, and instead building it in an elevated area with fortified materials and an early warning system.
Using the example of levees, researchers have proposed planned decommissioning of levees ahead of known maintenance hurdles. This way, levees can be used as a stop-gap while communities embrace transformability and relocate. It’s unrealistic to assume that a community could uproot overnight, but it’s important that the levee isn’t treated as a permanent Band-Aid.
In security, it’s similarly unrealistic to assume you can transform overnight — and your organization would probably not be pleased with you if you tried. But you need to be able to migrate. Like levees, you could plan the decommissioning of retroactive hardening and patching before moving off of legacy systems. This helps ensure you don’t keep renewing software or hardware until it becomes too embedded in your organization or too costly to maintain.
In general, you should have plans in place to decommission technologies that will eventually be obsolete or replaced, even down to libraries, hashes, and software versions. Continually consider how you can prepare in advance for migration.
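One lightweight way to operationalize this, sketched below with invented components and dates, is a decommission register that records the date by which you intend to be off each technology and flags anything overdue.

```python
from datetime import date
from typing import List, Optional

# Illustrative decommission register: each entry is a component and the date
# by which you intend to have migrated off it. Entries here are made up.
DECOMMISSION_PLAN = {
    "openssl-1.0.x": date(2019, 12, 31),
    "sha1-signed-certificates": date(2017, 1, 1),
    "legacy-billing-db": date(2020, 6, 30),
}

def overdue(plan: dict, today: Optional[date] = None) -> List[str]:
    """Flag components whose planned decommission date has already passed."""
    today = today or date.today()
    return [name for name, deadline in plan.items() if deadline <= today]

for component in overdue(DECOMMISSION_PLAN):
    print(f"migration overdue: {component}")
```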
Evolutionary resilience research has highlighted the need for more collaborative planning across stakeholders in a complex system. Rather than relying on inferred knowledge towards a pre-defined goal, local groups should work together and compile their own information towards optimizing relevant processes in their system.
Drawing from flood risk management, those in charge of risk management should be the ones to communicate what realistic level of protection each sort of approach provides.[9] Otherwise, it’s difficult for communities to cultivate knowledge on their own of what protection is in place and what limitations remain for risk reduction.
The security function within an organization is also not an isolated unit. Security should foster collaboration across the business towards optimizing security processes. You all should also be open in sharing knowledge of what protections are in place, what risks the protection realistically reduces, what risk remains, and any uncertainties around the approach.
The most obvious group with whom security should partner is engineering, which I discuss further in another keynote, Security as Product. As is in the aforementioned case of containers, it’s possible that there are improvements engineering desires that might facilitate more adaptable security as well. The trend towards flexibility is perhaps the strongest in software development and engineering today, and security must embrace this trend as well. I highly recommend reading the recently released O’Reilly book on Agile Application Security for more in this vein.[26]
In this collaborative setting, the role of planners should be to manage transitions between states rather than create or mediate.[27] I don’t think security professionals can fully remove the need to create or implement solutions. Focusing on managing the transition from the current state to a more secure state can, however, reduce some of the labor burden.
For example, drawing again on the transition to containers, the engineering group will conduct the majority of the labor towards this endeavor. Security should manage the project to ensure necessary controls are in place, such as detection of container compromise.
Image by NESA by MakersAs another example, security ideally should work with engineering to implement solutions like two-factor authentication. John “Four” Flynn at Facebook gave a great talk a few years ago about implementing 2FA at Facebook. Their goal was to put 2FA on SSH to make it hard for attackers to pivot into Facebook’s production environment.[[28]](#cite-28) During the process, they thought very carefully about the user experience, realizing they needed to support frequent use, allow for a flexible range of factors, and minimize help desk requests.They decided to use DuoSecurity and YubiKey Nano — with the YubiKey, the developers only needed to touch the side of their laptop to SSH, and DuoSec’s cloud-based tokens ensured they’d still have access even if they lost the YubiKey. One of their key discoveries during this project was that:
“You can actually implement security controls that affect every single thing people are doing and still make them love it in the process.”
I recognize that while “software is eating the world,” not every company is yet a technology company. Potentially, you will have a limited IT group with whom to collaborate on technical efforts. This doesn’t mean you can’t collaborate, however. It’s well recognized that there is division between even security and risk or fraud groups, let alone general counsel and financial functions. There is someone at your organization who wants their job to be easier. Your job, then, is to figure out how security can make that happen, or at least fit into their existing workflows.
For this sort of transformability to happen, responsive governance systems are needed. Defenders must implement decision-making processes that are quick to identify and respond to emerging threats. Part of this is ensuring that your organization learns from prior experiences — such as through the decision tree process I mentioned before, in which you can update your models after a breach.
However, your organization’s entire community must be involved in this learning process and be prepared to continually evaluate strategy. Implementing a security culture in your organization is perhaps the best chance of doing so.
Conclusion

This is my humble attempt at a definition of resilience in information security:
Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.
The system, in this case, likely is your organization, although this can apply to its underlying systems as well. Crucially, resilience in security is not the ability to withstand or prevent an attack. That’s the blue pill.
The red pill is the reality that attacks will happen to you, and that you must architect your security around this fact. I showed you how deep the rabbit hole goes on what security strategy fits this reality. Robustness, adaptability, and transformability are the keys to survival in Wonderland.
Robustness, while not the silver bullet, should be optimized through diversity of controls. Adaptability seeks to minimize the impact of an attack and keep your options open, and new types of infrastructure, such as containers, can enable it. Transformability demands you challenge your assumptions and reorganize your system around the reality — a reality that affects communities, which requires a collaborative effort.
My favorite fictional character growing up was Ian Malcolm, from Michael Crichton’s “Jurassic Park” novels. I believe the full quote of one of his most notable lines summarizes how you, as defenders, should think of your strategy [29]:
“Because the history of evolution is that life escapes all barriers. Life breaks free. Life expands to new territories. Painfully, perhaps even dangerously. But life finds a way.”
Consider how you can escape barriers, consider how you can expand to new territories, consider how you can find a way to evolve — because attackers are doing all of these things. Doing so will likely be painful at first for your organization. Your job, as part of implementing transformability, is to manage these transitions and minimize the pain and danger. Much like with life, as per Malcolm’s quote, you can think of it as the survival of your organization’s data being at stake.
Attacks will happen. Attackers will continue to evolve their methods. We can evolve our methods, too. We face a choice, as an industry: we can either continue to indulge ourselves in anger, bargaining, and depression, or strive towards acceptance.

If we take this red pill of resilience, we can defend ourselves effectively and realistically. If we take the blue pill, we will keep attempting to rebound to an artificial equilibrium — relegating us to the role of a firefighting cat who is drunk on snake oil.
I am certain most of you are fed up with these dynamics. Instead of accepting snake oil, I encourage you to take the red pill of resilience instead.
References

[1] Alexander, D. E. (2013). Resilience and disaster risk reduction: an etymological journey. Natural hazards and earth system sciences, 13(11), 2707–2716.
[2] Holling, C. S. (1996). Engineering resilience versus ecological resilience. Engineering within ecological constraints, 31(1996), 32.
[3] The American Chestnut Foundation. How Chestnut Blight Devastated the American Chestnut. Retrieved from https://www.acf.org/the-american-chestnut/ (accessed September 2017).
[4] Restemeyer, B., Woltjer, J., &amp; van den Brink, M. (2015). A strategy-based framework for assessing the flood resilience of cities–A Hamburg case study. Planning Theory &amp; Practice, 16(1), 45–62.
[5] Blake, E.S. &amp; Zelinsky, D.A. (2018). National Hurricane Center Tropical Cyclone Report: Hurricane Harvey. National Oceanic and Atmospheric Administration.
[6] Timmermann, P. (1981). Vulnerability, resilience and the collapse of society. Environmental Monograph, 1, 1–42.
[7] Hough, S. E. (2016). Predicting the unpredictable: the tumultuous science of earthquake prediction. Princeton University Press.
[8] Sanchez, A. X., Osmond, P., &amp; van der Heijden, J. (2017). Are some forms of resilience more sustainable than others?. Procedia engineering, 180, 881–889.
[9] Tempels, B. (2016). Flood resilience: a co-evolutionary approach. Residents, spatial developments and flood risk management in the Dender basin.
[10] Burby, R. J. (2006). Hurricane Katrina and the paradoxes of government disaster policy: bringing about wise governmental decisions for hazardous areas. The Annals of the American Academy of Political and Social Science, 604(1), 171–191.
[11] Wenger, C. (2017). The oak or the reed: how resilience theories are translated into disaster management policies. Ecology and Society 22(3):18.
[12] Gunderson, L. (2010). Ecological and Human Community Resilience in Response to Natural Disasters. Ecology and Society 15(2): 18.
[13] Martindale, B., &amp; Osman P. (2007) Why the concerns with levees? They’re safe, right?. IASFM Fall 2007 Newsletter.
[14] Liao, K. H. (2012). A theory on urban resilience to floods — a basis for alternative planning practices. Ecology and Society 17(4): 48.
[15] Côté, I. M., &amp; Darling, E. S. (2010). Rethinking ecosystem resilience in the face of climate change. PLoS biology, 8(7), e1000438.
[16] NYC Mayor’s Office of Recovery and Resiliency. (2018). Climate Resiliency Design Guidelines.
[17] Verton, D. (October 1, 2003). Former @stake CTO Dan Geer on Microsoft report, firing. Retrieved from https://www.computerworld.com/article/2572315/security0/former--stake-cto-dan-geer-on-microsoft-report--firing.html
[18] Dai Zovi, D. Attacker “Math” 101.
[19] Intergovernmental Panel on Climate Change. (2014). Climate Change 2014 Synthesis Report Summary for Policymakers.
[20] Sgro, C. M., Lowe, A. J., &amp; Hoffmann, A. A. (2011). Building evolutionary resilience for conserving biodiversity under climate change. Evolutionary Applications, 4(2), 326–337.
[21] Anthony, K. R., Marshall, P. A., Abdulla, A., Beeden, R., Bergh, C., Black, R., … &amp; Green, A. (2015). Operationalizing resilience for adaptive coral reef management under global environmental change. Global change biology, 21(1), 48–61.
[22] Izrailevsky, Y., &amp; Tseitlin A. (July 18, 2011). The Netflix Simian Army. Retrieved from https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116.
[23] Frazelle, J. (July 27, 2017). A Rant on Usable Security. Retrieved from https://blog.jessfraz.com/post/a-rant-on-usable-security/
[24] Brown, T., et al. Windows Containers. Retrieved from https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/ (accessed October 2017).
[25] Blundell, S. (April 19, 2016). Christchurch’s Game of Zones. Retrieved from https://www.noted.co.nz/currently/social-issues/christchurchs-game-of-zones/
[26] Bell, L., Bird, J., Brunton-Spall, M., Smith, R. (2017). Agile Application Security. O’Reilly Media.
[27] Batty, M. (2013). Complexity and Planning: Systems, Assemblages and Simulations, edited by Gert de Roo, Jean Hillier, and Joris van Wezemael. 2012. Farnham, UK and Burlington, Vermont: Ashgate Publishing. 443&#43; xviii. Journal of Regional Science, 53(4), 724–727.
[28] Flynn, J. (February 6, 2014). 2FAC: Facebook’s Internal Multi-factor Auth Platform — Security @ Scale 2014. Retrieved from https://www.youtube.com/watch?v=pY4FBGI7bHM
[29] Crichton, M. (1990). Jurassic Park. Random House.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/resilience-01.jpg" alt="Image of two hands clasping each other"></p>
<p><em>What follows is the full text of my keynote, also available as <a href="https://www.youtube.com/watch?v=ux--pHFpeac">a video</a> and <a href="/speaking/Red-Pill-of-Resilience-Shortridge-Countermeasure-2017.pdf">as slides</a>.</em></p>
<hr>
<p>There has been insufficient exploration of the first principles of resilience in the context of information security, despite the term being superficially peppered in our common discourse. Too often, resilience is conflated with robustness — to the detriment of us all.</p>
<p>To state more poetically, through the pen of the notable fantasy author Robert Jordan referencing one of Aesop’s fables, <em>“The oak fought the wind and was broken, the willow bent when it must and survived.”</em> To speak of protection without resilience is to believe you can always beat the wind. To speak of deterrence without resilience is to believe you can deter the wind from blowing at all.</p>
<p>There are times when attacks may be deterred or halted before damage is wrought, but one cannot always predict which way the wind will blow. We cannot predict the adversary we will face and the amount of resources they are willing to expend to compromise our systems.</p>
<p>Protection or deterrence can serve as a valuable tactic to grant some level of peace, but resilience — the ability to absorb change and survive — is the foundation on which survival rests. Again, far more poetically than I could ever say, Generalissimo <a href="https://en.wikipedia.org/wiki/Chiang_Kai-shek">Chiang Kai-shek</a> of the Republic of China advised, <em>“The more you sweat in peace, the less you bleed in war.”</em> Attempt the peace, but assure you can still survive the war.</p>
<p>But the discourse about resilience in information security has, to date, been far from poetic. Rather, it’s been sprinkled as a buzzword, floating along at a shallow level primarily in discussions of cyber insurance and how companies can transfer risk by buying policies. I believe this is a waste of its potentially torrential conceptual power. Resilience is ultimately about accepting reality and building a defensive strategy around reality.</p>
<p>As in the Matrix, resilience serves as the red pill — once you accept the reality, it’s impossible to go back to a strategy that ignores it. My goal in this talk is to show you how digging into this heart of resilience can drive a paradigm shift in how we architect security strategy (and I don’t throw the term “paradigm shift” around lightly).</p>
<p>I’ll first explore why the time for the resilience paradigm is nigh, and how our grief as an industry has led us to this point. I’ll next briefly share the etymology of resilience. Its application to information security is one of the newest, and there exists a rich history of its use in other domains — a history worth exploring to see if there are principles and findings to be shared.</p>
<p>Information security is not the only complex system, and ecological systems dealing with climate change and urban areas dealing with natural disasters represent analogous systems in which dynamics are nearly impossible to predict, and the number of interrelating factors is prohibitive to enumerate.</p>
<p>This cross-domain research presents a concept of resilience based on robustness, adaptability, and transformability. I’ll cover these three concepts in detail so we can understand how they fit under the umbrella of resilience. I’ll expand on this cross-domain lens, and it will serve as the one through which I’ll examine what resilience means in information security.</p>
<p>I am humbly attempting to define and establish a notion around what resilience means for information security — both in an intellectual and practical sense. I would be shocked if all of my thinking is on the mark. More than anything, I desperately want to encourage a discussion of first principles around resilience, and to begin a fruitful conversation around how practical implementations of resilience ideology for defensive security should look. I believe we have a fighting chance.</p>
<hr>
<h2 id="stages-of-grief-in-information-security">Stages of Grief in Information Security</h2>
<p><img src="/blog/img/resilience-02.jpg" alt="Image of a burning rose"></p>
<p>But first, why is there rising talk about the term “resilience,” despite the industry not having solidly established what it means? I propose that it is the result of the final stage of grief — acceptance — of the fact that there is not, and likely will not ever be, such a thing as an un-hackable organization or system.</p>
<p>Over the past twenty years, the infosec industry has grieved the fact that companies are ever-vulnerable to attack. Very little can be done against the latest exploit or attack vector that is currently unpublished and unknown. The industry has not fully coped with this grief. Like most grief, it isn’t a linear process, and these cycles have ebbed and flowed at various times in the industry’s history.</p>
<p>The first stage of grief is <strong>denial</strong> — clinging to a false, preferable reality. This manifested as companies being hesitant to deploy security solutions at all, believing that they weren’t truly at risk.</p>
<p><strong>Anger</strong> — recognizing the denial cannot continue, and becoming frustrated — seeing security as an unwanted necessity. This manifested in harsh penalties and legislation, prosecuting and punishing vulnerability researchers — even ones doing work for free or disclosing responsibly.</p>
<p><strong>Bargaining</strong> — the hope that the cause of grief can be avoided — resulted in the explosion of new security tools in an attempt to stop the problem</p>
<p><strong>Depression</strong> — despair at the recognition of the predicament — led to the refrain of more recent years in the vein of, “You’re going to get hacked, there’s nothing you can do about it.”</p>
<p>Finally, <strong>acceptance</strong> — the stage into which I believe we are now transitioning, understanding that there is an inevitability of successful attack, but recognition that there is an ability to prepare and that not all is hopeless.</p>
<p>Unfortunately, I fear the bargaining stage has set us up for a challenging road to implementing acceptance. The explosion of security tools in an attempt to avoid the problem has resulted in an untenable <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">market for lemons</a> in which tools and services are prescribed regardless of real need.</p>
<p>Fear, uncertainty, and doubt (FUD) specifically preys on this desperation to bargain, and its use by marketing departments is little different than selling hope of a cure to those who are chronically ill and in their own bargaining stage of grief through alternative, disproved methods. More simply put, the bargaining stage is the demand for which snake oil is the supply.</p>
<p>The resulting depression is unfortunately not the antidote for snake oil — security nihilism is an inaccurate conclusion and does little to incentivize practitioners to pursue more resilient strategies than unproven ones. Therefore, in this blossoming acceptance phase, it is important we have a conversation about what actually works — and to begin, I’d like to delve into what resilience has meant before we as an industry began to espouse it.</p>
<hr>
<h2 id="etymology-of-resilience">Etymology of Resilience</h2>
<p>Up until the early 19th century, the primary meaning of resilience was to “rebound.” Its first use in the context of engineering was in 1858, to imply strength and ductility, or a material’s ability to stretch under tensile stress.<a name="back-1"></a><a href="#cite-1">[1]</a> The abstraction from this physical characteristic — the time it takes for a system to return to a pre-determined, single equilibrium — is the one which has persisted in the common understanding of resilience.</p>
<p>However, in the 1970s, resilience began being used in psychology — understood as a process of changing current behaviors to cope with an adverse condition, for reverting to a prior psychological state post-incident is unrealistic and can actually represent unhealthy coping strategies. The 1970s also saw the beginning of resilience’s use in ecology, and over time the concept gradually expanded into use across the social sciences and into the coupling of the two domains, socio-ecological systems.</p>
<p>Most recently, it began to be applied to climate change adaptation in the early 2010s, including the natural disaster risk management space, due to increases in natural disasters that by all evidence are caused by climate change.</p>
<h3 id="what-does-resilience-mean-for-complex-systems">What does resilience mean for complex systems?</h3>
<p>A complex system is one in which many of its underlying components interact with each other, and one in which it is very difficult to predict behavior. More simply put, it is a system with non-linear activity in the aggregate. Examples of complex systems in our daily lives include our universe, our planet’s climate, our cities, our brains, and even living cells.</p>
<p>Information security is also a complex system. Defensively speaking, it is plagued by an inability to predict attacker actions, and it also consists of highly interconnected, dynamic relationships. Both sides of the security equation — defenders and attackers — are human. But there are additional relationships beyond the direct conflict, including users, governments, software vendors, service providers, and so forth. To unfurl these relationships and attempt to fit them into a predictive model is very simply prohibitive.</p>
<p>The first application of resilience through the lens of complex systems was by C.S. Holling in 1973, an ecologist who was one of the founders of ecological economics. Ecological resilience, he said, is measured by the amount of change that could be absorbed before the system’s underlying structure changes.<a name="back-2"></a><a href="#cite-2">[2]</a> He asserted that an ecological system can be highly resilient, but also exhibit a high degree of instability — and, in fact, that the proper reaction of an ecological system was to continually adapt, rather than attempt to return to a static equilibrium.[1]</p>
<p><em>Heidelbach, W. (September 28, 2016). Chestnut.</em></p>
<p>For example, eastern North American forests were once full of chestnut trees until a chestnut blight in the first half of the 20th century wiped them out.<a name="back-3"></a><a href="#cite-3">[3]</a> However, oak and hickory trees began spreading in its stead. The forests changed in appearance and composition, but still survived as forests.</p>
<p>Evolutionary resilience, borne from analyzing socio-ecological systems, operates under the assumption of complex systems that co-evolve, focusing on adaptation and transformation. Rejecting the idea of thresholds within which a system should fluctuate, it instead suggests multiple levels of controls and the ability to adapt the status quo by reorganizing or regenerating around the change, thereby creating a new status quo.</p>
<p>For example, communities can diversify their agricultural landscapes and production systems, designating some areas for soil conservation and organic agriculture while promoting multicropping in others. They can protect some forested areas while designating others for the community and focusing reforestation efforts there.</p>
<p>This notion of evolutionary resilience can be summarized as consisting of three central characteristics: robustness, adaptability, and transformability.<a name="back-4"></a><a href="#cite-4">[4]</a> The core notion is that in order for a complex system to be resilient, it must be able to withstand a shock, adjust so as to incur less damage, and be open to challenging previous decisions and goals.</p>
<p>I will be keeping in mind these three core characteristics in the context of information security. With robustness, you must be able to withstand an attack; with adaptability, you must be able to adjust your environment so you incur less damage when attacked; with transformability, you challenge your existing assumptions and decisions, and potentially migrate from existing infrastructure as well as defensive strategies or current methods used in the way you model and understand your threats.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-03.jpg" alt="Picture of Hurricane Harvey's Eye">
	<figcaption>NASA/NOAA. (August 26, 2017). Hurricane Harvey’s Eye.</figcaption>
</figure>
<p>In most domains, robustness proved dominant in defensive strategy, and can be linked to the concept of engineering resilience — a mistake from the evolutionary resilience perspective. For example, barriers are a form of robustness, blocking storm surges. However, as seen recently with Hurricane Harvey, the primary source of damage was flooding from ongoing rain — highlighting the need for adaptability and transformability to incur less damage going forward and rethink your existing strategies.<a name="back-5"></a><a href="#cite-5">[5]</a></p>
<p>Instead, evolutionary resilience must also include adaptation and dynamic change towards the goal of preservation, with robustness as an ingredient rather than the sole objective.</p>
<p>Although the expression, “it’s not about the destination, it’s about the journey” is somewhat trite, it’s quite true for resilience. Resilience must be framed as a continuous, evolving, but sustainable process rather than a goal. As ecological economics scholar Peter Timmerman described, resilience is the building of “buffering capacity” into a system, to improve its ability to continually cope going forward.<a name="back-6"></a><a href="#cite-6">[6]</a></p>
<p>A focus only on robustness can also lead to a misleading presentation of the problem as one only based on reducing the risk itself. As in the previous example, the problem could be seen only as, “how can we withstand the hurricane?” instead of “we know the hurricane will hit us, how can we change so that it doesn’t damage our community as much?” This highlights the contrast between robustness and the adaptability and transformability characteristics, which accept that the risk will exist, and instead stress the need to reduce the potential damage from the risk and restructure around the risk.</p>
<p>Furthermore, the efforts around attack prediction represent yet another symptom of collective grief — it’s an endeavor to regain the illusion of control. I’ve given another related presentation at length about <a href="/speaking/Dangerous-Folly-Attack-Prediction-Shortridge-Art-into-Science-2018.pdf">why attack prediction should not be our goal</a>, so I will not elaborate further here. Suffice to say, prediction was attempted in other complex systems and not only failed miserably, but wasted precious time, money, and brainpower that could have been spent on a pragmatic aim: resilience — the need to design systems under the assumption the negative shock will not be predicted. As eloquently stated by Susan Elizabeth Hough<a name="back-7"></a><a href="#cite-7">[7]</a>,</p>
<blockquote>
<p>“A building doesn’t care if an earthquake or shaking was predicted or not; it will withstand the shaking, or it won’t.”</p>
</blockquote>
<p>While our industry has come to accept that there are many “unknown unknowns,” our strategy is still one based in hubris — that we can save ourselves in a breach with systems that can withstand unknown risks at unknown times with unknown faces. The evolutionary resilience approach embraces these unknowns, understanding that change is inevitable — ensuring the system survives by absorbing these unknown changes, naturally adapting and reorganizing around this unknown risk, keeping the option open of bearing its own new, unknown face.</p>
<hr>
<h2 id="robustness">Robustness</h2>
<p><img src="/blog/img/resilience-08.jpg" alt="Image of a bridge"><em>Image by <a href="https://unsplash.com/@spoony">Hieu Vu Minh</a></em></p>
<p>Robustness involves withstanding and resisting a negative event. Engineering used the concept of resilience only in terms of robustness, measured by how long it takes a system to return to its equilibrium after a shock. However, experiencing an acute stress event implies the normal state was vulnerable to the stress, and that it is thus an “undesirable state to go back to because it would perpetuate this vulnerability.”<a name="back-8"></a><a href="#cite-8">[8]</a></p>
<p>In disaster recovery, it’s dangerous to present the problem of flooding, for example, as simply one about excess water. If it’s simply about a physical issue, then solutions are presented that are restricted to just the physical issue. In reality, flooding is a problem because of people, who understandably don’t want to lose their homes or drown. It is unnecessarily restrictive to only consider technical solutions to address the excess water, rather than broader solutions to address the problem in a societal context.<a name="back-9"></a><a href="#cite-9">[9]</a></p>
<p>When it is believed that a technical control will help prevent a shock, then it tends to lead to larger potential damage. This is called the safe development paradox.<a name="back-10"></a><a href="#cite-10">[10]</a> The reason why it’s a paradox is that the stability and presumed safety gained by building a structural mitigation to the problem actually allows risk to accumulate over time due to the false sense of security, leading to a higher chance of catastrophic consequences.</p>
<p>The safe development paradox represents a maladaptive feedback loop — once a structural mitigation is in place, more development happens where it should not.<a name="back-11"></a><a href="#cite-11">[11]</a> As the development becomes entrenched, the need for structural mitigations becomes even greater — and once the mitigation is in place, more development occurs.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-04.jpg" alt="Picture of pinecones burning">
	<figcaption>Picture by <a href="https://unsplash.com/@thomasbormans">Thomas Bormans</a>.</figcaption>
</figure>
<p>When fires are suppressed in forests that are fire-adapted, fuel builds up in the form of trees or shrubs.<a name="back-12"></a><a href="#cite-12">[12]</a> As more time passes without a fire, the probability of a ruinously-intense fire grows, posing more danger to nearby human settlements. This is exactly what happened in the mid-1990s in Florida, as urban development expanded into fire-adapted pine forests and residents kept the trees and shrubs they enjoyed in their yards.[12] The result was fires during dry periods that caused far more damage than usual, destroying many homes in the process.</p>
<p>In security, implementing technical controls can lead to increased damage as well. Retroactively hardening or patching legacy systems in which vulnerabilities are frequently found can lead to further development on top of those systems, and further entrenchment of them within the organization. Feeling like the threat is being prevented leads to development that relies on that assumption — and thus isn’t designed to absorb an attack.</p>
<p>In flood risk management, this is known as the “levee paradox.”<a name="back-13"></a><a href="#cite-13">[13]</a> Building a levee can lead to a sense of the problem being prevented, supporting further development and construction on the risky floodplain.[9] For example, less than 3% of people living in Illinois floodplains with levees in place carry flood insurance.[13] The levee clearly lowers people’s awareness of the risk and their ability to respond appropriately to it.<a name="back-14"></a><a href="#cite-14">[14]</a></p>
<p>When implementing a robustness control, it’s essential to ensure that it isn’t encouraging further development within a vulnerable system that leaves it open to cataclysmic risk when the control fails. Don’t focus just on resistance in your controls. Doing so will simply “treat the symptoms of bad planning with structures.”[11]</p>
<p>There’s also a lesson here for cyber insurance. Back to the levee paradox, oftentimes areas with levees in place aren’t categorized as official floodplains. This means that homes or offices in those areas don’t have flood-related insurance requirements. The clear lesson I see is: firms offering cyber insurance should consider very carefully whether they exempt companies from certain requirements based on technical controls being in place.</p>
<p>Related to the safe development paradox is the fact that shielding a system from all negative exposure means the system will only function in an artificially stable state. In the levee paradox, the levee creates an artificially stable system which can only survive in dry conditions.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-05.jpg" alt="Picture of coral">
	<figcaption>Picture by <a href="https://unsplash.com/@doto">Linus Nylund</a>.</figcaption>
</figure>
<p>Another example is with coral reefs. Marine reserves are maintained to protect coral from the damaging effects of climate change, such as ocean acidification and thermally-induced coral bleaching. However, unprotected corals actually prove more resilient to climate disturbance, since they have faced ongoing degradation through exposure to the stressor and have thus recomposed into more disturbance-tolerant species.<a name="back-15"></a><a href="#cite-15">[15]</a></p>
<p>In information security, you must likewise expose your systems to stressors. Even if you’re building something to be internal-only, such as internal APIs, you should design it with the same threat model as an externally-facing service — for instance, making sure you sanitize input data. Test your systems as if they were externally exposed, to see if they are sufficiently resilient to global stressors. If a system would take years to rebuild, reconsider what data you allow within it.</p>
<p>The overwhelming focus to date in information security has been on robustness — how to withstand or resist an attack, before rebounding back to “normal.” The traditional components of security — firewalls, anti-virus, system hardening — are all components of a robustness strategy. Even when examining how startup security products are marketed, the words “stop,” or the more creative “thwart” are used, implying an improved ability to withstand an attack.</p>
<p>Remediation even plays into this singular focus on robustness. The goal of remediation within security is most often to fix any vulnerabilities, and, ideally, to return to “business as usual” by reversing damage from an attack. As we saw with the Equifax breach, there is absolutely no chance of “business as usual” when immutable data is compromised. Penetration tests often solely focus on vulnerabilities and what is needed to fix them, rather than proposing new technologies or architecture that would prove less vulnerable long-term.</p>
<p>Other domains have typically held this singular focus, as well. The engineering-led approaches sought to defy nature itself rather than allow the system to flux with nature. For example, the single equilibrium in flood risk management is to have dry conditions in floodplains so that people can continue living in them. Dikes, storm-surge barriers, and dams are all attempts to withstand a flood, and reflect engineering resilience approaches. Their goal is to keep the same artificial equilibrium, in spite of the water system’s natural behavior.[9]</p>
<p>An engineering-only focus leads to the current challenge of companies needing to constantly stay up to date on patches, but facing many hurdles in doing so — and having this be their primary line of defense. The model that must be embraced is one in which the system can survive even if patches aren’t immediately applied, or users still click on phishing links. Your systems must survive even if users download a pdf.zip.exe.</p>
<p>As we saw with coral, without palpable vulnerability through exposure to risk, it is unlikely that resilience will develop.[11] You need to assume that attackers will gain access to a system, and figure out how to reduce the impact. You need to actually practice and embrace disaster recovery, rather than just having a plan.</p>
<p>With all that said, robustness is absolutely important to resilience. But robustness needs to be approached correctly. Drawing from flood risk management, diversity is a cornerstone of robustness — there need to be layers of controls and diversity of solutions.[9] For example, there are storm surge barriers, dikes, and dams for flood prevention.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-06.jpg" alt="Picture of New York City">
	<figcaption>Picture by <a href="https://unsplash.com/@mdisc">Michael Discenza</a>.</figcaption>
</figure>
<p>New York City has published guidelines for climate change resiliency which also recommend a combination of controls. For example, for dealing with excess heat, they recommend measures ranging from backup generators and hybrid-power systems to equipment with higher heat tolerance, as well as passive cooling and ventilation through window shades or high-performance glazing.<a name="back-16"></a><a href="#cite-16">[16]</a></p>
<p>Diversity of controls helps provide redundancy in uncertain conditions. When complementary measures are in place, it’s less likely that there will be catastrophic damage through the failure of a single control. But the tradeoff is between efficiency and effectiveness. The easier route with lower upfront costs is to implement a single control. The effective route is to implement layered controls, which may cost more now, but will pay dividends in reduced consequences long-term.</p>
<p>I don’t believe this will be a new concept for many of you. For example, you could deploy a so-called APT-blocking appliance (aka the BlinkyBoxTM) on your network that purports to stop all attacks. However, what then happens when legitimate credentials are used to access a cloud-based service? Or, as we’ve seen recently with Kaspersky, what happens when the APT-blocking-box is hacked by the APT itself to gain access?</p>
<p>Diversity can also be seen through the lens of systems. While we generally think of fragmentation as undesirable, particularly in the context of asset management, there is an argument in its favor. Shared hosting providers can increase correlated risk. If there is a breach at one provider, or a vulnerability in a key component or library used across all applications, then your risk exposure is far greater than it might have been otherwise.</p>
<p>The financial crisis in 2008 serves as a pertinent example of the dangers of ignoring correlated risk. There is something to be said for ensuring you have some level of diversity in your architecture. I am by no means the first to suggest that heterogeneity is important — Dan Geer was fired from @stake in 2003 for making that suggestion, specifically in regards to Microsoft’s hegemony.<a name="back-17"></a><a href="#cite-17">[17]</a></p>
<p>This sort of diversity also plays into the efficiency vs. effectiveness tradeoff. However, efficiency can actually lead to a more limited space in which you can operate. Being able to function using fragmented technologies and controls will ensure you can adapt much better to uncertainty. Systems diversity, through this lens, can provide the instability that can ensure survival. I posit that it is up for debate whether it is better to have manageability through uniformity or limited impact of any one stressor through diversity.</p>
<p>Thinking in decision trees can help ensure robustness through proper diversity of controls. I’ve discussed applying decision trees to information security strategy in prior talks, <a href="/speaking/us-17-Shortridge-Big-Game-Theory-Hunting.pdf">most notably at Black Hat</a>. Briefly, the goal should be to <a href="/blog/posts/choice-architecture-infosec-blue-teams">walk through what steps</a> an attacker would take to reach their goal in your organization. Naturally, there is not just one path an attacker will take; you have to consider what path they will take if they encounter a mitigation as well. From there, you can begin determining what cascading controls are necessary in order to raise the cost to the attacker as much as possible.</p>
<p>Raising the cost to the attacker serves as a bridge between robustness and adaptability. As frequently referenced, Dino Dai Zovi said, “Attackers will take the least cost path through an attack graph from their start node to their goal node.”<a name="back-18"></a><a href="#cite-18">[18]</a> If you can raise attacker cost, you can begin deterring attackers. Attackers will need greater resources and a greater level of sophistication if you do so. One way to raise cost is through robustness with strong, diversified controls. Another way is through adaptability.</p>
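<p>To make the decision-tree framing concrete, here is a minimal sketch in Python of finding the least-cost path through an attack graph. The graph, the node names, and the costs are entirely hypothetical; in practice they would come from the decision trees and attacker research described above.</p>
<pre><code>import heapq

# Hypothetical attack graph: node mapped to a list of (next_node, cost_to_attacker).
# Costs are illustrative guesses, not measured values.
attack_graph = {
    "phish_employee": [("workstation", 1)],
    "exposed_api":    [("app_server", 4)],
    "workstation":    [("app_server", 2), ("vpn_creds", 3)],
    "vpn_creds":      [("app_server", 1)],
    "app_server":     [("customer_db", 5)],
    "customer_db":    [],
}

def least_cost_path(graph, start, goal):
    """Dijkstra's algorithm: the cheapest route an attacker could take."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, []):
            heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return None, []

print(least_cost_path(attack_graph, "phish_employee", "customer_db"))
</code></pre>
<p>Re-running the search after raising the cost of a given edge (say, by adding a mitigation between the workstation and the app server) shows whether a control actually makes the attacker’s cheapest route more expensive, or merely shifts it.</p>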
<hr>
<h2 id="adaptability">Adaptability</h2>
<p><img src="/blog/img/resilience-09.jpg" alt="Image of a chameleon"><em>Image by <a href="https://unsplash.com/@_cecilencieux">Cécile Brasseur</a></em></p>
<p>Adaptability concerns reducing the costs of damage incurred and keeping your options open to support transformability. The evolutionary approach assumes that conditions will naturally change over time, and thus the system itself needs to undergo long-term change. Reversion to the preexisting state is not necessarily desirable — and is often wholly undesirable.</p>
<p>The Intergovernmental Panel on Climate Change (IPCC) highlights the need for realism and warns about the dangers of incremental changes under the guise of adaptation.<a name="back-19"></a><a href="#cite-19">[19]</a> They specifically recommend questioning underlying assumptions and existing structures, acknowledging the inevitability of macro-level change, and making managed transformation the goal. Pretending you’re adapting while only undergoing incremental change creates a false sense of security — similar to the safe development paradox. You may alleviate symptoms in the short-term, but you can only cultivate resilience through meaningful change towards adapting to reality.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-07.jpg" alt="Picture of a panda">
	<figcaption>Picture by <a href="https://unsplash.com/@djmle29n">Debbie Molle</a>.</figcaption>
</figure>
<p>A macro-level example of adaptability is in the realm of climate change. While traditional protection strategies for wildlife at risk due to climate change have been focused on preserving their existing habitats, more recent research proposes alternative approaches. Protected areas are in static locations, and tend to become increasingly isolated, leaving wildlife with nowhere to go. Preserving a species in such an isolated, at-risk area results in “genetic ghettos.”<a name="back-20"></a><a href="#cite-20">[20]</a> The species becomes increasingly acclimated to this limited environment, which forecloses any potential for evolutionary adaptation.</p>
<p>Instead, wildlife has naturally shifted its ranges in response to previous instances of climate change, “tracking” preferable conditions. Recommendations now include connecting disparate ecosystems so that wildlife can more easily migrate. For example, in urban environments, a narrow strip of land can be preserved, or another sort of route created, so that populations can connect to a different climatic area.</p>
<p>One can think of existing territories like legacy systems. We try to “preserve” these habitats through patching and retroactive hardening. The adaptive model from nature is to move to new territories that fit preferred conditions — ones in which the species can survive — which is similar, in effect, to moving to new infrastructure or a new mode of operation that is more resilient to the new threat.</p>
<p>As a highly tangible example, consider the case of database queries. The organization’s status quo might be that they use inline PHP code within the HTML of their web apps to perform database queries. If an injection vulnerability is discovered in an instance of this inline PHP code, they’ll fix that instance, but likely not conduct a full review across all of their inline PHP code. In this case, they’d be improving robustness by patching the code, but they’d be returning to their so-called “stable equilibrium.”</p>
<p>In contrast, embracing adaptability would mean the organization removes inline queries entirely and uses one class that accesses the database. This one class would be completely responsible for all sanitization. The result is not only that you now have to fix issues in only one place, but also that developer turnover can be managed — rather than writing their own new inline code, new developers can use the library you’ve built instead.</p>
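<p>As a rough sketch of that single-class approach (shown in Python with sqlite3 purely for illustration, rather than the PHP of the example above), every query flows through one gateway that enforces parameterization, so there is exactly one place to review and to fix:</p>
<pre><code>import sqlite3

class UserStore:
    """Single data-access class: every query is parameterized here,
    so sanitization lives in one reviewable place instead of being
    scattered across inline queries."""

    def __init__(self, path="app.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
        )

    def add_user(self, email):
        # Placeholders only; never build queries by concatenating user input.
        self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        self.conn.commit()

    def find_by_email(self, email):
        cur = self.conn.execute(
            "SELECT id, email FROM users WHERE email = ?", (email,)
        )
        return cur.fetchone()

store = UserStore()
store.add_user("alice@example.com")
print(store.find_by_email("alice@example.com"))
</code></pre>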
<p>Preservation can also lead to misleading indicators of resilience. For example, static measurements such as high coral cover or fish abundance can be poor indicators of coral reef resilience.<a name="back-21"></a><a href="#cite-21">[21]</a> These measures can just reflect favorable conditions in the past and not accurately reflect when resilience is being eroded.</p>
<p>Likewise, in information security, organizations using a library with no known vulnerabilities may currently treat their security model as complete and not perform continuous revisions. The issue is that the release of new vulnerabilities or attacker methods is not always well-publicized. Instead, organizations should frequently review their security posture to ensure threat models are not based on past favorable conditions, even if the product does not change. As a recent example, you likely had to update your threat models after the release of <a href="https://en.wikipedia.org/wiki/EternalBlue">EternalBlue</a> — even though it had been privately operational well before its public disclosure.</p>
<p>In the realm of climate change, moving members of a species accustomed to warm areas to intermingle with their kind in colder locations can help the cold-adapted population survive long-term.[20] By contrast, applications built on legacy systems and libraries that have never been exposed to the outside world, but which suddenly need to be exposed via external APIs, tend to fare extremely poorly in security terms.</p>
<p>As in the coral example, a system’s lack of exposure to the threat over its lifespan leaves it in a weakened, unpatched state. Security-wise, you should intermingle your internally-facing systems with your externally-facing systems to ensure they meet the standards of the evolving “global” threat model.</p>
<p>The goal for cities in the face of natural disasters is to maintain a flexible approach in order to properly adapt their response to the changing nature of their risks. If cities do not cultivate a process which assumes uncertainty and surprise in their model, then it’s safe to say they are being wholly unrealistic about the ways of the world.</p>
<p>As defenders, you should test attacker playbooks against yourself to determine how quickly you can adapt to attacker methods. I’m sure many of you wish you could have in-house red teams. For those who do have them, use them to your advantage in this way. I mentioned decision trees earlier as a way to determine which diverse set of controls to use — have your red teams map out the decision trees they created during their course of action to add realistic data into your own trees.</p>
<p>You also must test your ability to absorb the impact of an attack, and minimize the damage. One such test is through failure injection. <a href="https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey">Chaos Monkey</a>, part of Netflix’s suite of tools called the “Simian Army,” is a service which randomly kills instances in order to test their ability to withstand failure. In fact, Chaos Monkey is described as a resiliency tool.</p>
<p>While it was designed with an availability use case in mind, it can be repurposed for security. If your infrastructure is continually fluctuating, with instances killed at random, it becomes exceptionally difficult for attackers to persist. Attackers would have to accomplish their objectives within an uncertain time frame. This is, of course, not impossible, but it absolutely raises the attacker’s cost and the level of skill required.</p>
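<p>A minimal sketch of what that repurposing could look like, using the Docker SDK for Python and assuming an orchestrator or restart policy replaces whatever gets killed. The label used to mark eligible containers is hypothetical, not a convention from any real chaos tool:</p>
<pre><code>import random
import docker  # Docker SDK for Python (pip install docker)

# Randomly killing instances shrinks the window in which an attacker can
# persist. Only containers explicitly opted in via a (hypothetical) label
# are eligible, and something else is assumed to replace what is killed.
client = docker.from_env()
candidates = client.containers.list(filters={"label": "chaos=allowed"})

if candidates:
    victim = random.choice(candidates)
    print(f"killing {victim.name} ({victim.short_id})")
    victim.kill()
</code></pre>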
<p>Netflix’s goal with Chaos Monkey is to “design a cloud architecture where individual components can fail without affecting the availability of the entire system.”<a name="back-22"></a><a href="#cite-22">[22]</a> Defenders should make it their goal to design a security architecture where individual controls can fail without affecting the security of the entire system. As I mentioned earlier, if your system becomes completely compromised because a user clicks on a malicious link, you must rethink your security architecture.</p>
<p>Rethinking security architecture is no easy feat. Defenders are considerably hindered in their ability to be adaptive and flexible. Most commonly, people think of organizational pressure as the key deterrent, but I would argue the infosec industry itself is the primary limiter. Defenders face an overwhelming level of complexity and uncertainty due to the sheer number of security vendors and the fragmentation of the solution space.</p>
<p>I believe some of the challenges can be solved by changing the types of infrastructure that are used to promote adaptability and support transformability. Deploying Chaos Monkey is one such example centered on adaptability, but a grander example that blends into transformability is using a container-based ecosystem.</p>
<p>Many of you have likely heard of the container revolution, though you may not have used containers yourselves. While I’m not a container expert, I’ll explain why containers are a natural fit for evolutionary resilience. Jess Frazelle — “the Keyser Söze of containers” — highlighted in her DevOpsDays talk that containers represent potential salvation from the tradeoff between usability and security.<a name="back-23"></a><a href="#cite-23">[23]</a> I believe she’s absolutely correct.</p>
<p>As per Microsoft, containers are “a way to wrap up an application in its own isolated box” and are “an isolated, resource-controlled, and portable runtime environment.”<a name="back-24"></a><a href="#cite-24">[24]</a> A container serves as a layer of abstraction between an application and the host server — which can be of any kind, whether virtualized or bare metal. Because of this, it allows for easier migration to and from underlying infrastructure without having to rebuild applications.</p>
<p>The most common buzzwords I hear for containers are flexibility, portability, and scalability, making them a natural fit for both the adaptability and transformability characteristics. Just as attackers need repeatability and scalability, so do defenders — as well as something that can adapt over time to changes in attacker methods. It cannot be overstated how much a container environment bolsters flexibility and flattens complexity.</p>
<p>When something goes wrong — whether security related or not — the legacy approach makes determining the root cause an effort in untangling and dependency management. With containers, verifiably working systems are available in one neat package, facilitating far less messy remediation. Even implementing them into existing legacy systems can help more easily manage dependencies and licenses.</p>
<p>In the vein of Chaos Monkey, if applications are attacked while running inside a container, all that must be done is kill the container and restart it. There is no need for vulnerability scanning, firewalls, anti-virus, and all the other fragments of the security solution space. You can instead isolate and shut down infected containers as the attack happens.</p>
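<p>A sketch of that kill-and-restart response, again using the Docker SDK for Python. The suspect container’s name, the forensic image name, and the known-good image tag are all hypothetical, and the detection signal that flags the container is assumed to exist elsewhere:</p>
<pre><code>import docker  # Docker SDK for Python

client = docker.from_env()

# Assume a detection system has flagged this container as compromised.
suspect = client.containers.get("web-frontend-7f3a")

# Preserve the container's current state as an image for later forensics,
# then remove it and launch a fresh copy from the known-good image.
suspect.commit(repository="forensics/web-frontend", tag="incident-001")
suspect.stop()
suspect.remove()
client.containers.run("registry.internal/web-frontend:known-good", detach=True)
</code></pre>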
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-11.jpg" alt="Picture of trash on a beach">
	<figcaption>Picture by <a href="https://unsplash.com/@dwoodhouse">Dustan Woodhouse</a>.</figcaption>
</figure>
<p>You have to ensure adaptability to manage resilience erosion as well. In the case of coral, there are “pulse-type stressors,” or acute stressors, which include tropical cyclones, coral bleaching events, and destructive fishing.[21] But there are also “press-type stressors,” which are stressors occurring over longer periods of time, such as pollution, sedimentation, overfishing, ocean warming, and acidification. With enough of the press-type stressors wearing it down, coral reef resilience is overwhelmed when a pulse-type stressor occurs.</p>
<p>Pulse-type stressors in information security can be thought of as new vulnerabilities or a new data breach. Press-type stressors can include large turnover of employees — particularly ones working on large projects — but I would say the most prevalent is complexity. As you add complexity to your applications and systems, it becomes more difficult to test every possible path to compromise, because the paths begin trending towards infinity. If you can no longer test every path because your system is too complex, you have eroded your resilience, a key part of which is flexibility — and will have neither adaptability nor transformability.</p>
<hr>
<h2 id="transformability">Transformability</h2>
<p><img src="/blog/img/resilience-10.jpg" alt="Image of a butterfly"><em>Image by <a href="https://unsplash.com/@erinw">Erin Wilson</a></em></p>
<p>Transformability can be thought of as challenging your existing assumptions and reorganizing your system.</p>
<p>Returning to our previous example of an organization removing inline PHP database queries in favor of a single class, the latter approach also bolsters transformability. Because it is just one library, it allows for easier migration as needed depending on how the company’s environment, or the threat environment, changes. You are not leaving your options open when your web app is riddled with inline code. You must be able to review and revise your previous choices — for example, by moving to new tools or libraries.</p>
<p>Research from other domains has explored the policy implications of transformability, and how to implement the concept on a practical level. Disaster recovery in urban areas is one of the most well-researched domains in this regard. Given urban areas are dynamic systems, evolutionary resilience suggests that policy should encourage recovery efforts that prioritize re-building the urban area into an improved — or even better, optimized — system.[8] For example, in flood-prone areas, the policy should be to relocate rather than rebuild in those areas, while also implementing flood-proof construction in peripheral areas.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-12.jpg" alt="Picture of Christchurch Cathedral">
	<figcaption>NZ Defence Force. (February 23, 2011). Christchurch Cathedral.</figcaption>
</figure>
<p>As a tangible example of transformability, let’s explore Christchurch. In 2011, a devastating magnitude 6.3 earthquake hit Christchurch, the second most populous city in New Zealand at the time. It killed 185 people and damaged over 100,000 houses, with a financial cost to rebuild estimated at over $40 billion.</p>
<p>After the quake, the Canterbury Earthquake Recovery Authority (CERA) designated a new “red zone” throughout the area. This red zone includes damaged or vulnerable land where they believe rebuilding would be “prolonged and uneconomic.”<a name="back-25"></a><a href="#cite-25">[25]</a> The assessment embraces transformability, rejecting the need to return to the status quo, and instead challenging the assumption that there should be buildings on the land at all.</p>
<p>As security professionals, you should work to identify what the red zones are within your IT systems. Organizations should identify which infrastructure or technologies present the most security challenges — whether through vulnerabilities or ongoing maintenance costs — and put them in the red zone for being phased out.</p>
<p>Defining the components of your own red zone calculation will be subjective, but I submit the following as potential criteria, with systems that are directly exposed to external attack or entirely public-facing being of particular concern (a sketch of scoring assets against these criteria follows the list):</p>
<ul>
<li>Those which expose complex or critical functionality and are accessible publicly</li>
<li>Newly deployed systems or architectures, particularly those developed by inexperienced professionals</li>
<li>Legacy systems using outdated libraries, software, or languages</li>
<li>Systems with no backups, or which can’t easily be restored</li>
<li>Any system with critical personally identifiable information (PII) or immutable data — such as in Equifax’s case</li>
<li>Systems with privileged access to other systems or accounts</li>
<li>Any system that has known or “accepted” risk associated with it</li>
<li>Easily fingerprintable or overly verbose systems</li>
<li>Anything that could be deemed a single point of failure for your organization</li>
<li>Systems that are prohibitive to patch or update</li>
</ul>
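<p>As promised above, here is a minimal sketch of turning these criteria into a reviewable checklist. The asset inventory, the flags, and the threshold are hypothetical placeholders rather than a prescribed scoring model; the point is to score assets against the criteria instead of enumerating individual vulnerabilities:</p>
<pre><code># Hypothetical red-zone checklist derived from the criteria listed above.
RED_ZONE_CRITERIA = [
    "publicly_accessible",
    "newly_deployed",
    "legacy_stack",
    "no_restorable_backups",
    "holds_pii_or_immutable_data",
    "privileged_access_to_other_systems",
    "accepted_risk_on_record",
    "easily_fingerprintable",
    "single_point_of_failure",
    "prohibitive_to_patch",
]

def red_zone_score(asset):
    """Count how many red-zone criteria an asset meets."""
    return sum(1 for criterion in RED_ZONE_CRITERIA if asset.get(criterion))

# Hypothetical asset inventory.
assets = [
    {"name": "legacy-billing-api", "publicly_accessible": True, "legacy_stack": True,
     "holds_pii_or_immutable_data": True, "privileged_access_to_other_systems": True},
    {"name": "internal-wiki", "legacy_stack": True},
]

THRESHOLD = 3  # arbitrary cut-off; tune to your own risk appetite
for asset in assets:
    score = red_zone_score(asset)
    if score >= THRESHOLD:
        print(f"RED ZONE: {asset['name']} (score {score})")
</code></pre>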
<p>Defining your security red zones isn’t about examining potential vulnerabilities or paths to compromise in each of your systems. Instead, you want to identify any assets that fall under the red zone criteria, and attempt to move them out of the zone and into healthier systems.</p>
<p>For example, an organization might have an existing asset built using legacy technology, which now has to be publicly exposed via APIs. Furthermore, this asset consumes critical data and also has privileged access to backend APIs. This asset should likely be classified as being in the “red zone,” without actually assessing whether or not there are vulnerabilities. The goal is to move or rebuild the asset outside of the red zone and make it a safer system.</p>
<p>In this case, such measures could include:</p>
<ul>
<li>Locking down public exposure so that it’s only accessible via VPN</li>
<li>Rebuilding the asset using newer, non-legacy technologies (such as containers)</li>
<li>Avoiding storing critical data on this asset, instead proxying encrypted data further into the architecture’s core</li>
<li>Introducing security logging and monitoring</li>
<li>Locking down privileged access and enforcing the principle of least privilege</li>
</ul>
<p>By implementing all, or at least some, of these, the system would no longer be in the red zone. This would be similar to moving a power plant out of a flood plain, and instead building it in an elevated area with fortified materials and an early warning system.</p>
<p>Returning to the example of levees, researchers have proposed planned decommissioning of levees ahead of known maintenance hurdles. This way, levees can be used as a stop-gap as communities embrace transformability and relocate. It’s unrealistic to assume that a community could uproot overnight, but it’s important that the levee isn’t treated as a permanent Band-Aid.</p>
<p>In security, it’s similarly unrealistic to assume you can transform overnight — and your organization would probably not be pleased with you if you tried. But you need to be able to migrate. Like levees, you could plan the decommissioning of retroactive hardening and patching ahead of moving off of legacy systems. This ensures you don’t keep renewing software or hardware until it becomes too embedded in your organization or too costly to maintain.</p>
<p>In general, you should have plans in place to decommission technologies that will eventually be obsolete or replaced, even down to libraries, hash algorithms, and software versions. Continually consider how you can prepare in advance for migration.</p>
<p>Evolutionary resilience research has highlighted the need for more collaborative planning across stakeholders in a complex system. Rather than relying on inferred knowledge towards a pre-defined goal, local groups should work together and compile their own information towards optimizing relevant processes in their system.</p>
<p>Drawing from flood risk management, those in charge of risk management should be the ones to communicate what realistic level of protection each sort of approach provides.[9] Otherwise, it’s difficult for communities to cultivate knowledge on their own of what protection is in place and what limitations remain for risk reduction.</p>
<p>The security function within an organization is also not an isolated unit. Security should foster collaboration across the business towards optimizing security processes. You all should also be open in sharing knowledge of what protections are in place, what risks the protection realistically reduces, what risk remains, and any uncertainties around the approach.</p>
<p>The most obvious group with whom security should partner is engineering, which I discuss further in another keynote, <a href="/blog/posts/security-as-a-product">Security as Product</a>. As in the aforementioned case of containers, it’s possible that there are improvements engineering desires that might facilitate more adaptable security as well. The trend towards flexibility is perhaps the strongest in software development and engineering today, and security must embrace this trend as well. I highly recommend reading the recently released O’Reilly book on Agile Application Security for more in this vein.<a name="back-26"></a><a href="#cite-26">[26]</a></p>
<p>In this collaborative setting, the role of planners should be to manage transitions between states rather than create or mediate.<a name="back-27"></a><a href="#cite-27">[27]</a> I don’t think security professionals can fully remove the need to create or implement solutions. Focusing on managing the transition from the current state to a more secure state can, however, reduce some of the labor burden.</p>
<p>For example, drawing again on the transition to containers, the engineering group will conduct the majority of the labor towards this endeavor. Security should manage the project to ensure necessary controls are in place, such as detection of container compromise.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-13.jpg" alt="Picture of people talking about tech">
	<figcaption>Image by <a href="https://unsplash.com/@nesabymakers">NESA by Makers</a></figcaption>
</figure>
<p>As another example, security ideally should work with engineering to implement solutions like two-factor authentication. John “Four” Flynn gave a great talk a few years ago about implementing 2FA at Facebook. Their goal was to put 2FA on SSH to make it hard for attackers to pivot into Facebook’s production environment.<a name="back-28"></a><a href="#cite-28">[28]</a> During the process, they thought very carefully about the user experience, realizing they needed to support frequent use, allow for a flexible range of factors, and minimize help desk requests.</p>
<p>They decided to use DuoSecurity and YubiKey Nano — with the YubiKey, the developers only needed to touch the side of their laptop to SSH, and DuoSec’s cloud-based tokens ensured they’d still have access even if they lost the YubiKey. One of their key discoveries during this project was that:</p>
<blockquote>
<p>“You can actually implement security controls that affect every single thing people are doing and still make them love it in the process.”</p>
</blockquote>
<p>I recognize that while “software is eating the world,” not every company is yet a technology company. Potentially, you will have a limited IT group with whom to collaborate on technical efforts. This doesn’t mean you can’t collaborate, however. It’s well recognized that there is division even between security and risk or fraud groups, let alone between security and general counsel or financial functions. There is someone at your organization who wants their job to be easier. Your job, then, is to figure out how security can make that happen, or at least fit into their existing workflows.</p>
<p>For this sort of transformability to happen, responsive governance systems are needed. Defenders must implement decision-making processes that are quick to identify and respond to emerging threats. Part of this is ensuring that your organization is learning from prior experiences — such as through the decision tree process I mentioned before, in which you can update your models after a breach.</p>
<p>However, your organization’s entire community must be involved in this learning process and be prepared to continually evaluate strategy. Implementing a security culture in your organization is perhaps the best chance of doing so.</p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>This is my humble attempt at a definition of resilience in information security:</p>
<blockquote>
<p><strong>Resilience in security means a flexible system that can absorb an attack and reorganize around the threat.</strong></p>
</blockquote>
<p>The system, in this case, likely is your organization, although this can apply to its underlying systems as well. Crucially, resilience in security is not the ability to withstand or prevent an attack. That’s the blue pill.</p>
<p>The red pill is the reality that attacks will happen to you, and that you must architect your security around this fact. I showed you how deep the rabbit hole goes on what security strategy fits this reality. Robustness, adaptability, and transformability are the keys to survival in Wonderland.</p>
<p><strong>Robustness</strong>, while not the silver bullet, should be optimized through diversity of controls. <strong>Adaptability</strong> seeks to minimize the impact of an attack and keep your options open, and new types of infrastructure, such as containers, can enable it. <strong>Transformability</strong> demands you challenge your assumptions and reorganize your system around the reality — a reality that affects communities, which requires a collaborative effort.</p>
<p>My favorite fictional character growing up was Ian Malcolm, from Michael Crichton’s “Jurassic Park” novels. I believe the full quote of one of his most notable lines summarizes how you, as defenders, should think of your strategy <a name="back-29"></a><a href="#cite-29">[29]</a>:</p>
<blockquote>
<p>“Because the history of evolution is that life escapes all barriers. Life breaks free. Life expands to new territories. Painfully, perhaps even dangerously. But life finds a way.”</p>
</blockquote>
<p>Consider how you can escape barriers, consider how you can expand to new territories, consider how you can find a way to evolve — because attackers are doing all of these things. Doing so will likely be painful at first for your organization. Your job, as part of implementing transformability, is to manage these transitions and minimize the pain and danger. Much like life in Malcolm’s quote, the survival of your organization’s data is what is at stake.</p>
<figure style="float:right; max-width:40%; padding-left: 10px">
	<img src="/blog/img/resilience-14.jpg" alt="Picture of a cat wearing a firefighter hat">
</figure>
<p>Attacks will happen. Attackers will continue to evolve their methods. We can evolve our methods, too. We face a choice, as an industry: we can either continue to indulge ourselves in anger, bargaining, and depression, or strive towards acceptance.</p>
<p>If we take this red pill of resilience, we can defend ourselves effectively and realistically. If we take the blue pill, we will keep attempting to rebound to an artificial equilibrium — relegating us to the role of a firefighting cat who is drunk on snake oil.</p>
<p>I am certain most of you are fed up with these dynamics. Instead of accepting snake oil, I encourage you to take the red pill of resilience instead.</p>
<hr>
<h2 id="references">References</h2>
<p><a name="cite-1"></a><a href="#back-1">[1]</a> Alexander, D. E. (2013). Resilience and disaster risk reduction: an etymological journey. Natural hazards and earth system sciences, 13(11), 2707–2716.</p>
<p><a name="cite-2"></a><a href="#back-2">[2]</a> Holling, C. S. (1996). Engineering resilience versus ecological resilience. Engineering within ecological constraints, 31(1996), 32.</p>
<p><a name="cite-3"></a><a href="#back-3">[3]</a> The American Chestnut Foundation. How Chestnut Blight Devastated the American Chestnut. Retrieved from <a href="https://www.acf.org/the-american-chestnut/">https://www.acf.org/the-american-chestnut/</a> (accessed September 2017).</p>
<p><a name="cite-4"></a><a href="#back-4">[4]</a> Restemeyer, B., Woltjer, J., &amp; van den Brink, M. (2015). A strategy-based framework for assessing the flood resilience of cities–A Hamburg case study. Planning Theory &amp; Practice, 16(1), 45–62.</p>
<p><a name="cite-5"></a><a href="#back-5">[5]</a> Blake, E.S. &amp; Zelinsky, D.A. (2018). National Hurricane Center Tropical Cyclone Report: Hurricane Harvey. National Oceanic and Atmospheric Administration.</p>
<p><a name="cite-6"></a><a href="#back-6">[6]</a> Timmermann, P. (1981). Vulnerability, resilience and the collapse of society. Environmental Monograph, 1, 1–42.</p>
<p><a name="cite-7"></a><a href="#back-7">[7]</a> Hough, S. E. (2016). Predicting the unpredictable: the tumultuous science of earthquake prediction. Princeton University Press.</p>
<p><a name="cite-8"></a><a href="#back-8">[8]</a> Sanchez, A. X., Osmond, P., &amp; van der Heijden, J. (2017). Are some forms of resilience more sustainable than others?. Procedia engineering, 180, 881–889.</p>
<p><a name="cite-9"></a><a href="#back-9">[9]</a> Tempels, B. (2016). Flood resilience: a co-evolutionary approach. Residents, spatial developments and flood risk management in the Dender basin.</p>
<p><a name="cite-10"></a><a href="#back-10">[10]</a> Burby, R. J. (2006). Hurricane Katrina and the paradoxes of government disaster policy: bringing about wise governmental decisions for hazardous areas. The Annals of the American Academy of Political and Social Science, 604(1), 171–191.</p>
<p><a name="cite-11"></a><a href="#back-11">[11]</a> Wenger, C. (2017). The oak or the reed: how resilience theories are translated into disaster management policies. Ecology and Society 22(3):18.</p>
<p><a name="cite-12"></a><a href="#back-12">[12]</a> Gunderson, L. (2010). Ecological and Human Community Resilience in Response to Natural Disasters. Ecology and Society 15(2): 18.</p>
<p><a name="cite-13"></a><a href="#back-13">[13]</a> Martindale, B., &amp; Osman P. (2007) Why the concerns with levees? They’re safe, right?. IASFM Fall 2007 Newsletter.</p>
<p><a name="cite-14"></a><a href="#back-14">[14]</a> Liao, K. H. (2012). A theory on urban resilience to floods — a basis for alternative planning practices. Ecology and Society 17(4): 48.</p>
<p><a name="cite-15"></a><a href="#back-15">[15]</a> Côté, I. M., &amp; Darling, E. S. (2010). Rethinking ecosystem resilience in the face of climate change. PLoS biology, 8(7), e1000438.</p>
<p><a name="cite-16"></a><a href="#back-16">[16]</a> NYC Mayor’s Office of Recovery and Resiliency. (2018). Climate Resiliency Design Guidelines.</p>
<p><a name="cite-17"></a><a href="#back-17">[17]</a> Verton, D. (October 1, 2003). Former @stake CTO Dan Geer on Microsoft report, firing. Retrieved from <a href="https://www.computerworld.com/article/2572315/security0/former--stake-cto-dan-geer-on-microsoft-report--firing.html">https://www.computerworld.com/article/2572315/security0/former--stake-cto-dan-geer-on-microsoft-report--firing.html</a></p>
<p><a name="cite-18"></a><a href="#back-18">[18]</a> Dai Zovi, D. Attacker “Math” 101.</p>
<p><a name="cite-19"></a><a href="#back-19">[19]</a> Intergovernmental Panel on Climate Change. (2014). Climate Change 2014 Synthesis Report Summary for Policymakers.</p>
<p><a name="cite-20"></a><a href="#back-20">[20]</a> Sgro, C. M., Lowe, A. J., &amp; Hoffmann, A. A. (2011). Building evolutionary resilience for conserving biodiversity under climate change. Evolutionary Applications, 4(2), 326–337.</p>
<p><a name="cite-21"></a><a href="#back-21">[21]</a> Anthony, K. R., Marshall, P. A., Abdulla, A., Beeden, R., Bergh, C., Black, R., … &amp; Green, A. (2015). Operationalizing resilience for adaptive coral reef management under global environmental change. Global change biology, 21(1), 48–61.</p>
<p><a name="cite-22"></a><a href="#back-22">[22]</a> Izrailevsky, Y., &amp; Tseitlin A. (July 18, 2011). The Netflix Simian Army. Retrieved from <a href="https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116">https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116</a>.</p>
<p><a name="cite-23"></a><a href="#back-23">[23]</a> Frazelle, J. (July 27, 2017). A Rant on Usable Security. Retrieved from <a href="https://blog.jessfraz.com/post/a-rant-on-usable-security/">https://blog.jessfraz.com/post/a-rant-on-usable-security/</a></p>
<p><a name="cite-24"></a><a href="#back-24">[24]</a> Brown, T., et al. Windows Containers. Retrieved from <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/">https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/</a> (accessed October 2017).</p>
<p><a name="cite-25"></a><a href="#back-25">[25]</a> Blundell, S. (April 19, 2016). Christchurch’s Game of Zones. Retrieved from <a href="https://www.noted.co.nz/currently/social-issues/christchurchs-game-of-zones/">https://www.noted.co.nz/currently/social-issues/christchurchs-game-of-zones/</a></p>
<p><a name="cite-26"></a><a href="#back-26">[26]</a> Bell, L., Bird, J., Brunton-Spall, M., Smith, R. (2017). Agile Application Security. O’Reilly Media.</p>
<p><a name="cite-27"></a><a href="#back-27">[27]</a> Batty, M. (2013). Complexity and Planning: Systems, Assemblages and Simulations, edited by Gert de Roo, Jean Hillier, and Joris van Wezemael. 2012. Farnham, UK and Burlington, Vermont: Ashgate Publishing. 443+ xviii. Journal of Regional Science, 53(4), 724–727.</p>
<p><a name="cite-28"></a><a href="#back-28">[28]</a> Flynn, J. (February 6, 2014). 2FAC: Facebook’s Internal Multi-factor Auth Platform — Security @ Scale 2014. Retrieved from <a href="https://www.youtube.com/watch?v=pY4FBGI7bHM">https://www.youtube.com/watch?v=pY4FBGI7bHM</a></p>
<p><a name="cite-29"></a><a href="#back-29">[29]</a> Crichton, M. (1990). Jurassic Park. Random House.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Security as a Product</title>
            <link>https://kellyshortridge.com/blog/posts/security-as-a-product/</link>
            <pubDate>Fri, 18 May 2018 17:15:09 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/security-as-a-product/</guid>
            <description>Originally given as the keynote at BSides Knoxville.
Security is a product, but we treat it like a sacred, immutable grail to preserve, unblemished by the sublunary needs of users. And yet, we wonder why defense remains stagnant, why we fail so consistently in progressing towards the glorious ideal of a “secure organization.” We will continue to fail — unless we treat security as a product. Are we trying to respect the phantasmal Elder Deities of Infosec and their stringent doctrine, or are we trying to ensure our organization can still thrive while operating in a perilous digital world?
One definition of a product I prefer is “something created through a process that provides benefits to a market.” Security as product, therefore, is created through a process that provides benefits to a market — in this case, the organization in which it operates. The somewhat religious belief I hear espoused is that designing security to benefit your organization will result in a blasphemous mimicry of true security. That couldn’t be further from the truth. It’s a mimicry of your duty as a security professional to follow your personal beliefs rather than pursue strategies that benefit your organization.
But perhaps you don’t believe me. You think there’s some level of objective “truth” that is foolish to discard in the name of benefitting your organization. Whatever that truth is, that’s now your product, and if it doesn’t benefit your organization, you’re attempting to sell it into a market that doesn’t want it.
I’m often left perplexed at how some security professionals can see victory in forcing through a change that users viscerally dislike, as if their dissatisfaction represents a blood sacrifice. How is that possibly success? Success is solving a real problem in a way that delivers consistent value. Success is fostering consensus so that you are supported by the organization in effecting meaningful change — even if you implement something adding to your customer’s burden.
For example, when requiring multi-round hashing rather than storing credentials in plaintext, relevant stakeholders in your organization must be included and understand the need for a noble sacrifice. You will fail if the security of your organization rests on users adopting a strategy that neither provides them value, nor is one they support.
As Sarah Jamie Lewis insightfully tweeted:
&#34;our software is secure if you use it correctly&#34; means &#34;our software is not secure&#34;
— Sarah Jamie Lewis (@SarahJamieLewis) May 14, 2018
Similarly, if your organization is secure if users follow your security policies correctly, your organization is not secure. Maintaining the dogmatic view that it’s “the users that must be wrong,” rather than accepting the situation for what it is — the failure of your security program — is why we continue to fail. How many more years are we going to lament that our users are poor at security before we actually start working on pragmatic solutions?
Pragmatism doesn’t require security sacrilege. All products, including security, are shared problems within an organization. Each stakeholder must feel they have a personal stake in whatever course of action is taken — a process called building consensus.
As a product manager, there are times when we will release changes or new features that may be contentious. If I pursue a strategy without regard for how my colleagues who interact with customers or potential customers every day feel, the product won’t succeed in the market, because my colleagues won’t have the confidence to sell it. If I come to my colleagues with evidence for why it is necessary, describe how it works towards the broader product vision, and actively listen to their concerns, we can design a strategy collectively in which all parties are confident — even in the face of uncomfortable change.
Security as a product doesn’t require the wearing down of strategies through compromise until they are rendered ineffective. It requires a purposeful strategy through an overarching vision of how security can support the organization’s survival in light of the fact that computers are somewhat terrible, but necessary for success.
At this point, I’ve mastered a stolid expression for when security professionals nonchalantly explain the improvisational nature of their strategy-making. Most assume I’m asking whether they have a strategy for a specific project and seem surprised when I ask if they’ve defined their long-term vision for their overall security program. In the same conversation, the CISO or security engineer with whom I’m speaking will unleash a passionate rant — perhaps you have heard some of these grievances, or uttered them yourself:
“Things do sometimes sort of get accomplished… but slowly.” “We don’t ever actually make progress, we’re just running around.” “We keep making the same mistakes over and over — it doesn’t get better.” “I don’t have any time to do research, I’m constantly in meetings where we don’t actually get anything done.” “I just don’t even give a shit anymore, nothing changes.” I really do feel for your plight, but y’all can be tedious. There is clearly something amiss here that neither better nor more tech or people can fix — they will simply be likewise wasted. I hear these remonstrances nearly everywhere in infosec — from the smallest of teams to teams sprawling over multiple functional areas at Fortune 500 companies. There are countless passionate people working tirelessly, who consistently feel like they aren’t accomplishing anything that is meaningfully improving security.
What is perhaps even worse is hearing that security teams have adopted “agile” methodology, then discovering that their tasks are based on the whims of the individual, the epics are ill-defined and focused on functional areas, and no one is looking at a higher level to see how many resources are being dedicated towards each effort.
What’s even more jarring is every time I play surrogate therapist — asking probing questions to discern why their teams are so inefficient at a macro level — they unabashedly disclose that their teams don’t have any overarching goals defined, let alone metrics to track progress.
And we remain shocked that we aren’t progressing?
Of the “three pillars” of infosec — people, processes, and technology — I believe processes are most ignored and undervalued. I’ve grown exhausted by the number of articles about the “cyber skills shortage” as well as listening to — and speaking about — the pernicious complexity and misguidedness of the security technology space. Yet I don’t see nearly the same volume of fiery headlines and hot takes on Twitter about how our processes are failing us, despite the fact that processes are the underpinnings of how people work together and with technology.
As an example, I’ve been amazed at what our customers accomplished with Excel and two people prior to adopting our solution — managing vendor risk programs covering thousands of vendors across many lines of business. A trait in common with each of those customers succeeding despite their people and technology constraints is how easily they can articulate their process — and it’s because they’ve comprehensively defined it.
A process is “a series of actions or steps taken in order to achieve a particular end.” You can have the best people and the best technology, but if you cannot define to what end they can be used, and how they can be used, success is unlikely to manifest. You also must determine the “what” before the “how” — it’s prohibitive to determine the steps necessary until you define what the particular end should be. I believe there is insufficient attention paid in security programs to what particular ends should be. Is “making Organization, Inc. secure” really the pinnacle of defining goals for our security programs?
The foundation for any product is understanding your goal for the product. Fundamentally, what is the product’s purpose? What are you trying to help users accomplish? Viewing security as a product forces you to define your goals and come to terms with your team’s purpose. It also ensures you’re prioritizing actions appropriately — homing in on what will actually improve the product and your customer’s experience.
If your company is publicly traded, have you read their annual report? Can you summarize the Risk Factors they outline in their 10-K? The Risk Factors section is quite literally a cheat sheet, a ranking of your organization’s risks in order of their priority. If you do not understand the risks to the business’ ongoing operations from the organization’s perspective of priority, how could you possibly understand what is most essential to protect?
I assure you that you do not have to dive deeply into the mysterious waters of product management to improve your security program. The aforementioned rants by my blue team friends are painful primarily because they include examples of what you definitely shouldn’t do in product management if you want to create continuously successful products. Even not doing those things will help significantly — and doing the right things will empower you even more.
Because what lurks beneath the frustration expressed by so many in our industry is a sense of helplessness. We don’t feel empowered, we feel stifled and downtrodden. I would argue any profession in which you expend a lot of intellectual effort and time-capital into improving a problem, only to feel like you are running in place, will rapidly burn people out.
In infosec, despite a common understanding that reactive approaches to defense are misguided, we maintain reactive processes. Security teams are accustomed to receiving direction externally, feeling burdened with priorities that defy their beliefs of what is important — as if a secular organization should dictate the priorities of such a sacred order.
Once you adopt the mindset of security as a product, you can begin to take control. One of the “basics” of product management is that solely delivering exactly what customers demand, without understanding the motivation for their demands, will lead to poor outcomes and potentially monstrously disjointed user experiences. You have to proactively understand your customer’s perspective and look beneath the surface of what they are requesting to discern the underlying challenge or desire.
How many of you have worked in retail or other customer-facing service jobs? I have as well, at a department store and later at a frozen yogurt shop, and if security professionals believe they are treated poorly, I promise that you cannot fathom the depths of brutality customers can reach. I ask, because a cornerstone of many customer-facing service jobs is the notion of anticipating needs.
Anticipating needs means understanding your customer’s challenges, desires, and beliefs. For example, one method by which I sold higher-SKU merchandise in the department store was by efficiently learning about my individual customer. I asked questions about why they were shopping and what frustrates them sartorially.
Since I was in the contemporary dresses department, usually the woman was shopping in anticipation of an event, whether a date or party. I listened carefully to pick up on any clues indicating her challenges — for example, one with which I deeply relate is “I hate wearing dresses,” or “I’m going to be on my feet all night.”
Even a morsel of such data was sufficient for me to find additional options for her beyond the items she had chosen. Perhaps a dress with pockets and an elastic waist, that still looks chic while maximizing comfort. Many of the dress-wearing people reading can likely relate to the ecstasy of wearing a dress with pockets, which can both cache snacks or conceal fidgeting hands due to social anxiety. Or, I might offer a maxi dress, which conveniently veils one’s shoes, allowing the option of feet-sparing flats rather than heels.
I’m cognizant that I’m essentially describing a robust recommendation engine (more Netflix than Amazon) — but as it happens, humans can excel in this effort, too. I would imagine most would appreciate a professional faerie godparent constantly anticipating your needs and making your life easier, all without you having to request or nag.
Afforded the cover of not being, strictly speaking, a “security professional,” people are quite honest with me in how they perceive the security team. Security teams frequently are considered the opposite of the faerie godparent — more like a sulking demon that seems to relish an arduous professional life and decrees you are forbidden from doing the things you need to, without ever seeming to care about what those things are.
Security teams both rely primarily on direction and yet seem resentful of this dependence — but ironically also begrudge the notion of reaching out proactively to their organization’s stakeholders to discern what needs to be done.
This inconsistency of thinking leads me to somewhat believe that many security people fundamentally want to dictate what’s important to the company from a security perspective, based on their own opinions, so as to serve the Elder Infosec Deities. Frankly, it sometimes takes considerable effort not to adopt my best Regina George face after listening to a security person elaborately envisioning their Blue Team utopia (what I call a “Blutopia”) at great length and ask pointedly, “Have you ever considered that your opinions might be wrong?”
Part of the reason I don’t ask is that I truly don’t require additional help in amplifying social awkwardness. But the larger part is also that I don’t believe they would be deterred if their opinions are deemed “wrong” by someone else. My conclusion is this is because of the Steve Jobs Myth.
I don’t like Steve Jobs. My personal opinion is that he was a jerk and is a wretched role model for leadership. However, I recognize that he is idolized by the type of people who are prioritizing their personal opinion over what their organization actually needs, because of this Steve Jobs Myth. The myth is that through the spellbinding magic of Jobs’ gut instinct alone, and defying all evidence and user analysis, Apple forged ahead with the iPhone and consequently revolutionized the cellular device market.
That isn’t actually what happened.
What actually happened is there was an experimental project initiated without Jobs’ knowledge, which received a lukewarm reception from Jobs once presented to him because he believed cell phones “sucked.” However, he trusted the team to work through the technical details and even allowed the head of the project to hire Apple engineers from other projects. He insisted in return on seeing “an interface that might be intuitive and exciting to lay-users” before he’d be convinced.
The Steve Jobs Myth perpetuates the idea that Jobs gave minimal thought to user needs — which generally makes some people feel empowered to not care, either — and that it is the only way to conceive brilliance and truly leave users awestruck. Jobs’ concern was actually that you cannot simply ask people, “What’s the next big thing?” and that market research is insufficient to conceive a product that customers will love.
However, he viewed user research as essential — as seen in his requirement for the continued development of the iPhone. What he understood is that people won’t always say — or even know — what they want, but through user research, you can see which preferences they truly hold based on how they behave.
Within behavioral economics, there’s a clear hierarchy between stated vs. revealed preferences. Humans can be proficient in fooling themselves in what their preferences are, or if they’re being interviewed, in saving face. For example, if someone asked me, “are you more likely to prepare chicken and broccoli for dinner, or a can of tuna?” I am not necessarily inclined to reveal that I’m sometimes indistinguishable from a cat in behavior and will answer the chicken and broccoli with my ideal self in mind. But if you observed the dinners I made that week, you would see cans of tuna — a vastly firmer source of truth for answering that question.
If you ask your organization, “Do you find SSO easy to use?” you might discover a variety of answers. Maybe they answer “yes” because they don’t want to feel less intelligent by not finding it easy, or they use it so infrequently that they’ve forgotten the frustration of their last use. Maybe they answer “no” because in their customer meeting an hour ago they were unable to access a crucial piece of data because of an issue, which made them feel embarrassed in front of the customer. You might even find that people’s answers switch between one week and the next. None of this is particularly helpful.
You can examine revealed preferences instead, looking at the number of customer support tickets filed for SSO, the number of multiple push notifications in a row, the number of password reset requests, or how many people re-enter the URL of the service after being directed to SSO. These metrics more accurately tell the “truth” of the user’s experience, and how much it’s aiding or hindering their work.
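To make that concrete: below is a minimal Python sketch of tallying those revealed-preference signals, assuming you can export SSO events as (user, event, timestamp) rows. The event names are hypothetical stand-ins for whatever your identity provider actually logs.

from collections import Counter
from datetime import datetime, timedelta

# Hypothetical export of SSO-related events: (user, event, timestamp)
events = [
    ("ana", "push_prompt", datetime(2025, 4, 2, 9, 0)),
    ("ana", "push_prompt", datetime(2025, 4, 2, 9, 1)),   # repeated prompt in a row signals friction
    ("ana", "password_reset", datetime(2025, 4, 2, 9, 3)),
    ("bob", "support_ticket", datetime(2025, 4, 3, 14, 20)),
    ("bob", "direct_url_retry", datetime(2025, 4, 3, 14, 25)),
]

def friction_signals(events, repeat_window=timedelta(minutes=2)):
    """Tally revealed-preference signals that SSO is hindering work."""
    counts = Counter(event for _, event, _ in events)
    # Count push prompts repeated for the same user within the window
    prompts = sorted((user, ts) for user, event, ts in events if event == "push_prompt")
    repeated = sum(
        1 for (u1, t1), (u2, t2) in zip(prompts, prompts[1:])
        if u1 == u2 and repeat_window >= t2 - t1
    )
    return {
        "support_tickets": counts["support_ticket"],
        "password_resets": counts["password_reset"],
        "repeated_push_prompts": repeated,
        "direct_url_retries": counts["direct_url_retry"],
    }

print(friction_signals(events))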
Another issue arising from the Jobs Myth is that its believers use it to justify proceeding with their projects, generally with the assumption that “users will learn to love it,” because they believe Steve Jobs’ ideas were so provocative and progressive that even if users didn’t know they wanted it, they’d want it in time. That is also thoroughly inaccurate. As Jobs himself stated:
“And one of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.”
Simply because you personally believe something is valuable or important, does not mean it is. You have to understand the problems that are actually meaningful, and work backwards to how to solve them. This is not just an issue with blue teams, but also with infosec startup founders — the classic blunder of creating a hammer in search of a nail.
Your job is not to determine priorities in your sandboxed mindspace and convince the organization that securing something is of vital importance when it does not present material business risk. Your job is to determine priorities based on what veritably helps the organization and explain why your solution is the right one to help.
An extension of this fallacy is also the reverse — that security people can be presented with a valid solution to the organization’s problem but reject it because they personally don’t believe the problem is important.
As a real-world example, one security professional I know pushed for a specific product to be purchased in their organization. They presented the four-figure cost and offered a variety of use cases — such as simplifying the ability for engineers to implement early detection in the company’s infrastructure. They shopped around the idea to non-security groups on its usefulness and gained their buy-in as well. However, the person in charge of procurement held the personal opinion that this product type isn’t useful, and consequently pushed back on the request.
I see this so regularly I began calling it “Security Morals,” but I now think it should really be “Security Dogma” instead. What I mean by that specifically is that there are somewhat rigid “principles” common among security professionals that are treated as dogma. As aforementioned, there is a seemingly insatiable desire to please the Elder Infosec Deities by strictly adhering to their doctrine, even if it defies the organization’s needs.
In a SaaS product, if an engineer refuses to add a print button because they personally think it’s useless when you can “just” right click and select print, despite all user research indicating that users are at present confused how to print, their personal opinion will be demoted in favor of concrete evidence. If they did this regularly enough, they might be placed on a performance plan.
In security, similar behavior seems rewarded, as if performance is measured by how steadfast your belief is in Security Dogma. Such behavior would not be rewarded if viewing security as a product.
When speaking with defenders, I notice a non-trivial number bristle at the notion that they have customers — that they aren’t a neutral force above the fray, akin to the Federal Reserve. It was in thinking through why so many defenders hate the concept of having customers that my notion of Security Dogma solidified — that there are principles of security treated as incontrovertibly true and mandatory to implement regardless of the reality of the organization, what determines its fortunes, or what endangers its continuing operation most.
Security professionals may view themselves as a heroic knight, but to others in the organization, they might look like the Knights Templar. As in the trope, even minor security offenses are treated as critical, enforcing “justice” is considered paramount and non-negotiable, and an egotistical complex emerges of interpreting resistance to your “noble” intentions as evidence of principles in need of correction. While you are not, in fact, a knight, you do have the opportunity to be a hero — but not by rescuing someone who is not actually in distress because you believe they need rescuing.
Your customer is your organization. Imagine if you attempted to order food in an app and it told you “no” because the food was insufficiently healthy, while also never explaining how it defined “healthy” food. Would you enjoy the app? Being realistic, most of us would repeatedly rage at it, even when you begrudgingly conceded that it had a point about your midnight pizza endeavors. This is more than not being likable — which isn’t strictly necessary to be effective. You need to be respected. If you are perceived as dogmatic, I promise you that you will not engender the respect you need to be effective.
I’m usually astonished at how little security teams work on cultivating organizational buy-in, since that’s a core part of my job as a product manager. I personally don’t believe a security program can succeed without it. This doesn’t mean everything becomes watered down, worn meaningless by only acting on things which have perfect agreement.
It instead means ensuring that the organization feels as if it is a stakeholder in security, that it’s along for the journey, and that security is not their adversary, but a fellow team attempting to better the organization. You could actually never implement things that other teams specifically request and still foster a sense of consensus by presenting your point of view with a sense of empathy.
I’ve personally struggled to practice empathy consistently. Particularly for those of us who are on the spectrum, actively seeing the world from someone else’s vantage can feel unnatural. But I assure you it’s not impossible, and your job will become substantially easier when you begin listening to people and ensuring you understand their point of view, rather than trying to dismiss theirs and ram your own point of view down their throat. Active listening is one of the most useful life skills you can develop.
Cultivating customer empathy is the first step you should take in your transition to treating security as a product. An example method is through the 5 Whys. The goal is to dig deeper into why something is a problem and identify its root causes. For example:
“Why do you not want to implement 2FA for Salesforce?”
“Why do you not want to add a step for salespeople to login to Salesforce?”
“Why can’t salespeople afford to take the additional time?”
“Why do salespeople need to log their call notes immediately after a call?”
“Why do salespeople need to transfer notes from Google Docs to Salesforce?”
The root cause is arguably that there is friction between the notes salespeople take during a call and where they are meant to log the call. The solution might be to integrate Google Docs into Salesforce, meaning the user has to log into only one service during the course of their work — meaning implementing 2FA will be more palatable. As in this case, you may hear answers that do not seem to be pertinent to the security team, as they are squarely in the business domain — but your role is to connect the dots between business operations and security risks that threaten them.
I strongly believe your highest value as a security professional is, perhaps, in empathizing with the organization’s business risk and identifying where digital risks arise that amplify or solidify business risk. Your customer knows what endangers them — but they do not know how that danger manifests through digital means.
Once you feel you truly understand where customers are struggling, you can begin architecting your vision. Consider your vision for your security program as its story that will unfold over time. Themes serve as the heart of stories, the foundation for the central idea the author is attempting to convey. The plot — or the events that unfold within the story — supports the theme and carries the story towards its goal.
In a security as product model, you will also have themes. Those themes will also have plots — courses of work that drive towards the stated goal and the actions you need to take within those courses of work. Before defining any of the work, however, you have to envision the overarching story. Few people are naturally proficient storytellers, but you can practice by expressing your program’s story through a caricatured, fairy-tale lens.
At the dawn of the year, our band of heroes embarked on their quest in Engineersville. They heard the cries from the local farmers of meager yields and slow harvests due to bugs. It would not suffice simply for the heroes to kill all bugs as they appeared — after all, there are many quests elsewhere to complete. They knew their noble purpose was now to help the farmers ensure a bountiful, efficient harvest that they could sustain on their own.
Our heroes’ first goal was to reduce the amount of time it took to squash bugs spotted in the fields, as the bugs could hurt the harvest if they were left alive. Come spring’s first blossom, our heroes transitioned to their second goal — ensuring fewer bugs were being introduced to the crops. They helped the farmers map out how their field architecture would look ahead of planting to determine where bugs could spring up.
As summer began sizzling, they toiled to ensure that their tools could be used by the locals as well, beginning the work on crafting one master tool the locals could use that would automatically determine which specialized tool was best for reducing bugs in the type of fields being sown.
As the first leaves of autumn fell, our heroes tested this magical tool among a small group of farmers, carefully analyzing results and finally releasing it to all locals so that they could begin their next year empowered to have a bug-free harvest. This meant the heroes would have to do even less work of patching and helping locals tend to their fields, allowing them to focus on new quests.
(A wizard hat is optional in crafting stories, but recommended.)
Are there any security principles truly sacrificed in this story? The overarching goal is to reduce the number of vulnerabilities in production. As in this story, there may be multiple themes that are part of the same story — reducing the mean time to fix vulnerabilities, adding threat modelling in the design phase to introduce fewer bugs, and creating an automated tool that abstracts multiple security products away from the engineer so they can test their code easily and efficiently during development.
The goal is still fundamentally a security goal, but the themes show customer empathy. The engineers want minimal friction in their workflows. The closer you can get to “push button, get security,” the more productive they will be. Your team, as a stakeholder, is also not ignored. The two initial themes are enablers to the longer-term goal, taking workload off your team to support progress towards an even more efficient solution that will reduce workloads further.
It is essential to view it as a full story and not be disheartened that your end-goal cannot be accomplished immediately. Setting themes and dreaming up your vision can inspire you so fully that you find yourself with a cornucopia of ideas. Unless you are exceptionally fortunate, the vast majority of teams will not have the resources to pursue every theme and must prioritize them.
Prioritization is one of those tasks that’s very easily said — you “just” rank which themes are most important to you — but is formidable in practice. When I build roadmaps in my work, there is often an excruciating “this or that” decision that requires you to push back work on something which still would absolutely benefit customers… just not as much as the other theme.
My first word of caution to you is to avoid prioritizing themes based on what you feel is most important. A war of opinions is one in which everyone loses — and that’s ultimately what you will be waging, unless you prefer a dictatorship style, if you use your personal views as the basis for your prioritization.
Instead, you must again return to the perspective of your customer. While you personally may believe the theme of “reducing the volume of emails with malicious attachments” is the most important one, your organization may have their deployment frequency and lead time metrics hampered by an arduous appsec process, which more tangibly affects business performance.
How do you differentiate which themes to prioritize? You collect and analyze data — both qualitative and quantitative. A good engineering program will be tracking metrics such as availability, customer tickets, deployment frequency, error rates, lead time, mean time to detect (MTTD), and mean time to repair or recovery (MTTR). Ask engineering how those metrics are being impacted by security requirements. Ask engineers how they would explain some of the mutual challenges you face — you may be surprised at how aligned DevOps engineering teams are with security (but that is a topic for another time).
If you aren’t tracking metrics on your security program, you should be, as it’s essential for measuring progress in a product. This includes your own MTTD and MTTR — such as how quickly you remediate product security tickets. It should also include measuring the frequency of configuration management changes, such as firewall rule updates, patching, hardening — anything to measure the tempo of your program.
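As a minimal sketch of what tracking your own tempo could look like, assuming you can export remediated product security tickets with opened and closed timestamps (the data below is invented):

from datetime import datetime
from statistics import mean

# Hypothetical export of remediated product security tickets: (opened, closed)
tickets = [
    (datetime(2025, 4, 1, 9, 0), datetime(2025, 4, 3, 17, 0)),
    (datetime(2025, 4, 5, 10, 0), datetime(2025, 4, 6, 12, 0)),
    (datetime(2025, 4, 8, 8, 0), datetime(2025, 4, 20, 8, 0)),
]

def mttr_days(tickets):
    """Mean time to remediate, in days, across closed tickets."""
    return mean((closed - opened).total_seconds() for opened, closed in tickets) / 86400

print(f"MTTR: {mttr_days(tickets):.1f} days")

The same shape of calculation works for MTTD if you swap in detection timestamps instead.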
You can also measure how resources — specifically your security team’s time — are being used. Are they spending half of their time extinguishing fires? Is a third of their day dedicated to configuring your SIEM? Do they lose a week each month asking routine questions for threat modelling exercises? These represent opportunities for automation, as there is benefit in reducing the cost of your recurring security tasks and freeing up resources for more impactful streams of work. You should also poll how they want to spend their time, to ensure you retain your talent and avoid needing to worry about the “pipeline problem” in the first place.
Beyond this, you also need to quantitatively measure how your organization perceives the efficacy of your program. For example, conduct the equivalent of NPS surveys for the security organization, where teams with whom security interacts rate how satisfied they are with the security team. I’d recommend keeping the NPS anonymous with the option of entering a comment to give more detail. After all, security people can sometimes come across as a bit intimidating, and you want to find out the truth.
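The scoring itself is just arithmetic. A small sketch, assuming the standard 0 to 10 NPS scale:

def nps(scores):
    """Net Promoter Score: percent promoters (9 or 10) minus percent detractors (0 through 6)."""
    promoters = sum(score >= 9 for score in scores)
    detractors = len(scores) - sum(score >= 7 for score in scores)  # anything below 7
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical anonymous ratings of the security team from other teams
print(nps([10, 9, 8, 7, 6, 3, 9, 10, 5, 8]))  # 10.0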
Quantitative data won’t necessarily tell the entire picture, however. Qualitative data helps fill in detail and may even expose concerns that are difficult to discern from quantitative data. Talk with a selection of individuals across different roles and levels in your organization to hear their feedback on how security can better meet their needs and work with them. You should also ask people on your team, from junior to senior, to give their feedback as well. Again, anonymous surveys can be your friend here in order to promote honesty.
My security fairy tale above could be an example of hitting the nexus of what your data is telling you, thus rising in priority. Your engineers are dissatisfied with having to wrestle with security testing products themselves, and their lead time to deploy is suffering. Half of your product security team’s time is spent on patching and last-minute security testing before GA, because engineering finds it too onerous to currently conduct earlier in the process. If you have three product security people making $100,000 each, half of their combined $300,000 in annual salary works out to $12,500 per month spent on something your customer doesn’t like anyway. And perhaps as a last data point, your product security team has expressed the desire to do more research and build custom tools.
A project to build a custom tool that lets engineers self-serve security testing in the development process and to standardize a threat model for the design stage would tangibly improve the data points you have collected. It also happens to be straightforward to measure, which makes likelihood of success even greater, since you can more easily determine what more needs to be done to drive the story.
There are also a few economic angles to consider when prioritizing. First is opportunity cost. By supporting legacy tech with time and money, from what else are you taking away resources? Some of the CISOs I most admire share — coincidentally or not — the trait of thinking in terms of monetary costs of work. This importantly includes pricing in the “total cost” of a security product, which includes the amount of maintenance, tuning, tweaking, and troubleshooting that your team will have to perform on an ongoing basis. Any expenditure of effort by your security team on an action is directly taking away investment into another action.
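A sketch of pricing in that “total cost”, with entirely invented numbers:

def total_annual_cost(license_cost, maintenance_hours_per_month, hourly_rate):
    """License plus the ongoing maintenance, tuning, and troubleshooting labor."""
    return license_cost + maintenance_hours_per_month * 12 * hourly_rate

# Hypothetical comparison: a cheaper product that eats your team's time
# versus a pricier one that mostly runs itself.
legacy = total_annual_cost(license_cost=20_000, maintenance_hours_per_month=40, hourly_rate=75)
newer = total_annual_cost(license_cost=45_000, maintenance_hours_per_month=5, hourly_rate=75)
print(legacy, newer)  # 56000 49500

Whatever the real figures, the labor term can dwarf the sticker price, and that labor is exactly the opportunity cost at stake.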
Second is the sunk cost fallacy. Just because you’ve invested a lot of time and money into something already, doesn’t mean it’s still worth pursuing. Throwing strong resources at weak purposes will deteriorate your product. As in the aforementioned example of opportunity cost, if a legacy security product requires substantial ongoing maintenance to perform as you need, prioritizing a theme of moving to a newer, less burdensome product might be necessary. While this may add a short-term resource sink, it will allow the plot in your story to ultimately move forward.
You now feel confident in which themes you prioritized — you know what your story will tell, in what order. However, this story is a shared one, as any security initiative will inherently be shared, by virtue of affecting the organization. Your customer must be brought along in your journey, and feel like they have a stake in your story.
When you’re soliciting feedback from other people, it’s an opportunity to grow the working relationship — and ultimately engender trust. Rather than nixing their ideas on the spot if you don’t think they’re worthwhile, use language like, “I hadn’t considered that — my team will have to look into it.” You don’t want to promise that all suggestions will be implemented, or you’ll end up with a lot of disappointed people, but you do want to make people feel as if they’ve been heard. And, if you do end up implementing something they suggested, or a use case they emphasized, they’ll be delighted.
Be transparent with your story. To start, determine who the right stakeholders are in each organization and ask if you can bring by coffee and treats while you present the story to them. Ask them what they think of it — are there any assumptions with which they disagree? Are there any risks that haven’t been captured? How do they feel it will impact them? Ask open-ended questions so as not to guide them. Before trust is established, phrasing a question as “How will this help you or not?” may compel them to be supportive rather than expressing the full range of their impressions.
As someone working on a product with a third party risk management use case, I can attest that no matter your industry, some of your organization’s prospective customers are asking sales about your security practices. Presenting your vision and progress towards that vision gives them a differentiator to reference, even if far removed from the primary use case of whatever your organization is offering.
Connect with product managers or whoever is designing whatever your organization offers. Not only will you benefit from their feedback, whether on prioritization or on determining the “how,” but it will also inspire them to keep you abreast of their own roadmaps. Having security included earlier in the product process will only serve to benefit the entire organization.
As far as how you present your story, some sort of visual aid is generally advisable, rather than purely speaking to it. If you’ve seen my slide presentations before, you can likely guess that I expend substantial effort into how my ideas are visually presented. As a product manager, the slides I create describing my project are not nearly so sparse and beautiful — but I do always consider what I want the listener to take away and leave the rest to voiceover rather than text on the slide.
Bear in mind that while technical meat may sound delicious to you, it can be a repellent to colleagues elsewhere in the business. Your goal is to cultivate consensus around your themes — around the journey of your security program, not the intricate details of the plot. You need to express, in accessible terms, what the theme is, the value it brings to the organization, and any risks or considerations that will be shared challenges across the organization.
Returning to our fairy tale, a slide deck could be presented as follows:
Our vision is to reduce the number of vulnerabilities in production.
Our goals are to reduce the lead time to deployment, mean time to patch, and security team time spent on application testing.
The primary benefit to Organization, Inc. is less friction for engineers to test for security vulnerabilities, allowing for our products to be released more quickly.
The secondary benefit to Organization, Inc. is reducing the cost of security testing, helping with scalability as well as freeing up resources to accomplish other security goals.
We will need to partner with the Engineering team to understand workflows and ensure a security testing orchestrator is deployed appropriately into workflows.
We will need to partner with PM to introduce threat modelling during the design phase, which will require a near-term time tradeoff for longer-term cost reduction.
You begin by inspiring stakeholders, then end with what you need from them to accomplish the vision. This has another benefit of putting those requirements on their radar in advance of when they will need to execute upon them, resulting in quicker turnaround times for you.
By the end of the prioritization process, I hope you feel emboldened by the knowledge of which themes are most important to accomplish, and in which order. Now is the time of execution, defining which steps need to be taken towards your goal.
If you have a program manager, this is exactly where to loop them in. If you do not, or cannot borrow one from another team for advice, then please do not pretend to be one. That is, you should not assume you understand the abilities and constraints of your team members and assign tasks to them without checking with them first.
While you can lead the charge on the “what,” you must include others in figuring out the “how.” Look to the real Steve Jobs, not the fallacious Steve Jobs Myth, and recall how he trusted the project lead to determine the underlying technical detail so long as his requirements were met. Depending on the size of your team, there will perhaps only be one or two individuals able to take on tasks. Where I see security managers often fail is vacillating between the extremes of giving sparse direction and minimal feedback, and delving too far into the weeds.
Through the process I outlined, you already defined the requirements of the project through the need to present across the organization — so there isn’t necessarily much more work to be done on your end, save for clarifying requirements on request.
If you want to really make an impact, begin tackling your security debt. You may be familiar with technical debt, which is when quality is sacrificed for speed, typically with the false promise of “we’ll fix it later.” Ironically, by not treating security as a product, you are vastly more likely to accumulate security debt as part of your crusade to integrate your gospel.
Embracing security as a product involves treating it almost as a living thing, one which decays and requires nurturing to stay alive. For each “shortcut” you take, are you considering what challenges will be created later? Did you document why you can’t address it effectively today, for example because there will be a superior way to fix the problem if you wait? How frequently are you returning to those shortcuts and paying down your debt?
There is power in ownership. The endowment effect is a finding from behavioral economics that people ascribe more value to things they own, far more than they “rationally” should. Your security program being a product means you own that product — it is your vision and your story. In a fashion, you can consciously nudge yourself into a mindset that will inherently encourage you to take better care of your security program.
I stress this because a lamentable consequence of the nihilism I see in defenders is they cease to care about the security of their organization on a time horizon that surpasses their planned tenure. With the tumultuous turnover of security talent in most organizations, if the strategy is Security Dogma, devotion to it dies when the believer leaves, and a new messenger of the Elder Infosec Deities comes in to spread their own interpretation of the Dogma.
By creating your vision for the security program, you describe a map for the security program’s journey. An incoming hero unburdened by zealotry can see where they are in the journey and the end destination. It’s unlikely that every stop on the journey will be entirely discarded unless there is scant evidence for its value. What is more likely is that the “how” will change most drastically, while the overarching quest — your vision, and to some extent, your legacy — remains intact.
The product process even aids you when switching organizations. What is changing is the end customer — not the process. Even so, just as when I helped women pick out dresses, there will be customers with characteristics in common with each other, rendering your maps meaningful beyond the initial customer.
And if you want to rise through the ranks, the practice of articulating a clear vision and fostering consensus will only serve to demonstrate competency. It can demonstrate to your executives or your board of directors that you understand them as a customer and will nourish their trust in your ability to deftly manage risk in a way that supports their success.
This is the fairy-tale ending to my own vision I shared with you today — that you can ride into the sunset knowing you were a hero in the way that helped the realm prosper. The fanatics who sought to serve spurious justice will never reach their dream of security nirvana, wailing relentlessly into the wind about their persecution at the hands of locals wanting to prosper.
Inspire your organization with your story of how security can allow it to thrive and make them feel they have a part to play in it. Your band of heroes can and should include colleagues outside of security, who will be far more willing to aid you in your quest — however long or arduous — if you take the time to discuss its purpose with them from a position of empathy.
Security is a product, and reluctance to embrace that is like rejecting scientific evidence in deference to zealotry. There is a way forward that does not rely on worshiping the Elder Infosec Deities through enforcing Security Dogma, which is, in fact, the path of least resistance — despite being less dictatorial.
Quixotism in the name of security purity will crumble as a foundation for a “Blutopia,” but a pragmatic approach, the support of devoted followers within your organization, and a visionary quest just might be the right start to our collective journey.
</description>
            <atom:content type="html"><![CDATA[<p><em>Originally given as <a href="https://www.youtube.com/watch?v=Ia80fg7ivN4">the keynote</a> at BSides Knoxville.</em></p>
<p><img src="/blog/img/ball-of-lights.jpeg" alt="person clutching a ball of lights"></p>
<p>Security is a product, but we treat it like a sacred, immutable grail to preserve, unblemished by the sublunary needs of users. And yet, we wonder why defense remains stagnant, why we fail so consistently in progressing towards the glorious ideal of a “secure organization.” We will continue to fail — unless we treat security as a product. Are we trying to respect the phantasmal Elder Deities of Infosec and their stringent doctrine, or are we trying to ensure our organization can still thrive while operating in a perilous digital world?</p>
<p>One definition of a product I prefer is “something created through a process that provides benefits to a market.” Security as product, therefore, is created through a process that provides benefits to a market — in this case, the organization in which it operates. The somewhat religious belief I hear espoused is that designing security to benefit your organization will result in a blasphemous mimicry of true security. That couldn’t be further from the truth. It’s a mimicry of your duty as a security professional to follow your personal beliefs rather than pursue strategies that benefit your organization.</p>
<p>But perhaps you don’t believe me. You think there’s some level of objective “truth” that is foolish to discard in the name of benefitting your organization. Whatever that truth is, that’s now your product, and if it doesn’t benefit your organization, you’re attempting to sell it into a market that doesn’t want it.</p>
<p>I’m often left perplexed at how some security professionals can see victory in forcing through a change that users viscerally dislike, as if their dissatisfaction represents a blood sacrifice. How is that possibly success? Success is solving a real problem in a way that delivers consistent value. Success is fostering consensus so that you are supported by the organization in effecting meaningful change — even if you implement something adding to your customer’s burden.</p>
<p>For example, when requiring multi-round hashing rather than storing credentials in plaintext, relevant stakeholders in your organization must be included and understand the need for a noble sacrifice. You will fail if the security of your organization rests on users adopting a strategy that neither provides them value, nor is one they support.</p>
<p>As Sarah Jamie Lewis insightfully tweeted:</p>
<div class="center">
	<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">&quot;our software is secure if you use it correctly&quot; means &quot;our software is not secure&quot;</p>&mdash; Sarah Jamie Lewis (@SarahJamieLewis) <a href="https://twitter.com/SarahJamieLewis/status/996033014269296640?ref_src=twsrc%5Etfw">May 14, 2018</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div>
<p>Similarly, <em>if</em> your organization is secure if users follow your security policies correctly, your organization is not secure. Maintaining the dogmatic view that it’s “the users that must be wrong,” rather than accepting the situation for what it is — the failure of your security program — is why we continue to fail. How many more years are we going to lament that our users are poor at security before we actually start working on pragmatic solutions?</p>
<p>Pragmatism doesn’t require security sacrilege. All products, including security, are shared problems within an organization. Each stakeholder must feel they have a personal stake in whatever course of action is taken — a process called building consensus.</p>
<p>As a product manager, there are times when we will release changes or new features that may be contentious. If I pursue a strategy without regard for how my colleagues who interact with customers or potential customers every day feel, the product won’t succeed in the market, because my colleagues won’t have the confidence to sell it. If I come to my colleagues with evidence for why it is necessary, describe how it works towards the broader product vision, and actively listen to their concerns, we can design a strategy collectively in which all parties are confident — even in the face of uncomfortable change.</p>
<p>Security as a product doesn’t require the wearing down of strategies through compromise until they are rendered ineffective. It requires a purposeful strategy through an overarching vision of how security can support the organization’s survival in light of the fact that computers are somewhat terrible, but necessary for success.</p>
<p>At this point, I’ve mastered a stolid expression for when security professionals nonchalantly explain the improvisational nature of their strategy-making. Most assume I’m asking whether they have a strategy for a specific project and seem surprised when I ask if they’ve defined their long-term vision for their overall security program. In the same conversation, the CISO or security engineer with whom I’m speaking will unleash a passionate rant — perhaps you have heard some of these grievances, or uttered them yourself:</p>
<ul>
<li>“Things do sometimes sort of get accomplished… but slowly.”</li>
<li>“We don’t ever actually make progress, we’re just running around.”</li>
<li>“We keep making the same mistakes over and over — it doesn’t get better.”</li>
<li>“I don’t have any time to do research, I’m constantly in meetings where we don’t actually get anything done.”</li>
<li>“I just don’t even give a shit anymore, nothing changes.”</li>
</ul>
<p>I really do feel for your plight, but y’all can be tedious. There is clearly something amiss here that neither better tech nor more people can fix — they will simply be likewise wasted. I hear these remonstrances nearly everywhere in infosec — from the smallest of teams to teams sprawling over multiple functional areas at Fortune 500 companies. There are countless passionate people working tirelessly, who consistently feel like they aren’t accomplishing anything that is meaningfully improving security.</p>
<p>What is perhaps even worse is hearing that security teams have adopted “agile” methodology, then discovering that their tasks are based on the whims of the individual, the epics are ill-defined and focused on functional areas, and no one is looking at a higher level to see how many resources are being dedicated towards each effort.</p>
<p>What’s even more jarring is every time I play surrogate therapist — asking probing questions to discern why their teams are so inefficient at a macro level — they unabashedly disclose that their teams don’t have any overarching goals defined, let alone metrics to track progress.</p>
<p>And we remain shocked that we aren’t progressing?</p>
<hr>
<p>Of the “three pillars” of infosec — people, processes, and technology — I believe processes are most ignored and undervalued. I’ve grown exhausted by the number of articles about the “cyber skills shortage” as well as listening to — and speaking about — the pernicious complexity and misguidedness of the security technology space. Yet I don’t see nearly the same volume of fiery headlines and hot takes on Twitter about how our processes are failing us, despite the fact that processes are the underpinnings of how people work together and with technology.</p>
<p>As an example, I’ve been amazed at what our customers accomplished with Excel and two people prior to adopting our solution — managing vendor risk programs covering thousands of vendors across many lines of business. A trait in common with each of those customers succeeding despite their people and technology constraints is how easily they can articulate their process — and it’s because they’ve comprehensively defined it.</p>
<p>A process is “a series of actions or steps taken in order to achieve a particular end.” You can have the best people and the best technology, but if you cannot define to what end they can be used, and how they can be used, success is unlikely to manifest. You also must determine the “what” before the “how” — it’s prohibitive to determine the steps necessary until you define what the particular end should be. I believe there is insufficient attention paid in security programs to what particular ends should be. Is “making Organization, Inc. secure” really the pinnacle of defining goals for our security programs?</p>
<p>The foundation for any product is understanding your goal for the product. Fundamentally, what is the product’s purpose? What are you trying to help users accomplish? Viewing security as a product forces you to define your goals and come to terms with your team’s purpose. It also ensures you’re prioritizing actions appropriately — homing in on what will actually improve the product and your customer’s experience.</p>
<p>If your company is publicly traded, have you read their annual report? Can you summarize the Risk Factors they outline in their <a href="https://en.wikipedia.org/wiki/Form_10-K">10-K</a>? The <a href="https://www.sec.gov/fast-answers/answersreada10khtm.html">Risk Factors section</a> is quite literally a cheat sheet, a ranking of your organization’s risks in order of their priority. If you do not understand the risks to the business’ ongoing operations from the organization’s perspective of priority, how could you possibly understand what is most essential to protect?</p>
<p>I assure you that you do not have to dive deeply into the mysterious waters of product management to improve your security program. The aforementioned rants by my blue team friends are painful primarily because they include examples of what you definitely <em>shouldn’t</em> do in product management if you want to create continuously successful products. Even <em>not</em> doing those things will help significantly — and doing the <em>right</em> things will empower you even more.</p>
<p>Because what lurks beneath the frustration expressed by so many in our industry is a sense of helplessness. We don’t feel empowered, we feel stifled and downtrodden. I would argue any profession in which you expend a lot of intellectual effort and time-capital into improving a problem, only to feel like you are running in place, will rapidly burn people out.</p>
<p>In infosec, despite a common understanding that reactive approaches to defense are misguided, we maintain reactive processes. Security teams are accustomed to receiving direction externally, feeling burdened with priorities that defy their beliefs of what is important — as if a secular organization should dictate the priorities of such a sacred order.</p>
<p>Once you adopt the mindset of security as a product, you can begin to take control. One of the “basics” of product management is that solely delivering exactly what customers demand, without understanding the motivation for their demands, will lead to poor outcomes and potentially monstrously disjointed user experiences. You have to proactively understand your customer’s perspective and look beneath the surface of what they are requesting to discern the underlying challenge or desire.</p>
<p>How many of you have worked in retail or other customer-facing service jobs? I have as well, at a department store and later at a frozen yogurt shop, and if security professionals believe they are treated poorly, I promise that you cannot fathom the depths of brutality customers can reach. I ask, because a cornerstone of many customer-facing service jobs is the notion of anticipating needs.</p>
<p>Anticipating needs means understanding your customer’s challenges, desires, and beliefs. For example, one method by which I sold higher-SKU merchandise in the department store was by efficiently learning about my individual customer. I asked questions about why they were shopping and what frustrates them sartorially.</p>
<p>Since I was in the contemporary dresses department, usually the woman was shopping in anticipation of an event, whether a date or party. I listened carefully to pick up on any clues indicating her challenges — for example, one with which I deeply relate is “I hate wearing dresses,” or “I’m going to be on my feet all night.”</p>
<p>Even a morsel of such data was sufficient for me to find additional options for her beyond the items she had chosen. Perhaps a dress with pockets and an elastic waist, that still looks chic while maximizing comfort. Many of the dress-wearing people reading can likely relate to the ecstasy of wearing a dress with pockets, which can both cache snacks and conceal fidgeting hands due to social anxiety. Or, I might offer a maxi dress, which conveniently veils one’s shoes, allowing the option of feet-sparing flats rather than heels.</p>
<p>I’m cognizant that I’m essentially describing a robust recommendation engine (more Netflix than Amazon) — but as it happens, humans can excel in this effort, too. I would imagine most would appreciate a professional faerie godparent constantly anticipating your needs and making your life easier, all without you having to request or nag.</p>
<p>Afforded the cover of not being, strictly speaking, a “security professional,” people are quite honest with me in how they perceive the security team. Security teams frequently are considered the opposite of the faerie godparent — more like a sulking demon that seems to relish an arduous professional life and decrees you are forbidden from doing the things you need to, without ever seeming to care about what those things are.</p>
<p>Security teams both rely primarily on direction and yet seem resentful of this dependence — but ironically also begrudge the notion of reaching out proactively to their organization’s stakeholders to discern what needs to be done.</p>
<p>This inconsistency of thinking leads me to somewhat believe that many security people fundamentally want to dictate what’s important to the company from a security perspective, based on their own opinions, so as to serve the Elder Infosec Deities. Frankly, it sometimes takes considerable effort not to adopt my best Regina George face after listening to a security person elaborately envisioning their Blue Team utopia (what I call a “Blutopia”) at great length and ask pointedly, “Have you ever considered that your opinions might be wrong?”</p>
<p>Part of the reason I don’t ask is that I truly don’t require additional help in amplifying social awkwardness. But the larger part is also that I don’t believe they would be deterred if their opinions are deemed “wrong” by someone else. My conclusion is this is because of the Steve Jobs Myth.</p>
<hr>
<p>I don’t like Steve Jobs. My personal opinion is that he was a jerk and is a wretched role model for leadership. However, I recognize that he is idolized by the type of people who are prioritizing their personal opinion over what their organization actually needs, because of this Steve Jobs Myth. The myth is that through the spellbinding magic of Jobs’ gut instinct alone, and defying all evidence and user analysis, Apple forged ahead with the iPhone and consequently revolutionized the cellular device market.</p>
<p>That isn’t actually what happened.</p>
<p>What actually happened is there was an experimental project initiated without Jobs’ knowledge, which received a lukewarm reception from Jobs once presented to him because he believed cell phones “sucked.” However, he trusted the team to work through the technical details and even allowed the head of the project to hire Apple engineers from other projects. He insisted in return on seeing <a href="https://www.cnbc.com/2017/06/16/steve-jobs-iphone-creation-story-proves-even-the-smartest-executives-need-help-making-decisions.html">“an interface that might be intuitive and exciting to lay-users”</a> before he’d be convinced.</p>
<p>The Steve Jobs Myth perpetuates the idea that Jobs gave minimal thought to user needs — which generally makes some people feel empowered to not care, either — and that it is the only way to conceive brilliance and truly leave users awestruck. Jobs’ concern was actually that you cannot simply ask people, “What’s the next big thing?” and that <em>market</em> research is insufficient to conceive a product that customers will love.</p>
<p>However, he viewed <em>user</em> research as essential — as seen in his requirement for the continued development of the iPhone. What he understood is that people won’t always say — or even know — what they want, but through user research, you can see which preferences they truly hold based on how they behave.</p>
<p>Within behavioral economics, there’s a clear hierarchy between stated vs. revealed preferences. Humans can be proficient in fooling themselves in what their preferences are, or if they’re being interviewed, in saving face. For example, if someone asked me, “are you more likely to prepare chicken and broccoli for dinner, or a can of tuna?” I am not necessarily inclined to reveal that I’m sometimes indistinguishable from a cat in behavior and will answer the chicken and broccoli with my ideal self in mind. But if you observed the dinners I made that week, you would see cans of tuna — a vastly firmer source of truth for answering that question.</p>
<p>If you ask your organization, “Do you find SSO easy to use?” you might discover a variety of answers. Maybe they answer “yes” because they don’t want to feel less intelligent by not finding it easy, or they use it so infrequently that they’ve forgotten the frustration of their last use. Maybe they answer “no” because in their customer meeting an hour ago they were unable to access a crucial piece of data because of an issue, which made them feel embarrassed in front of the customer. You might even find that people’s answers switch between one week and the next. None of this is particularly helpful.</p>
<p>You can examine revealed preferences instead, looking at the number of customer support tickets filed for SSO, the number of multiple push notifications in a row, the number of password reset requests, or how many people re-enter the URL of the service after being directed to SSO. These metrics more accurately tell the “truth” of the user’s experience, and how much it’s aiding or hindering their work.</p>
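<p>To make that concrete: below is a minimal Python sketch of tallying those revealed-preference signals, assuming you can export SSO events as (user, event, timestamp) rows. The event names are hypothetical stand-ins for whatever your identity provider actually logs.</p>
<pre><code>from collections import Counter
from datetime import datetime, timedelta

# Hypothetical export of SSO-related events: (user, event, timestamp)
events = [
    ("ana", "push_prompt", datetime(2025, 4, 2, 9, 0)),
    ("ana", "push_prompt", datetime(2025, 4, 2, 9, 1)),   # repeated prompt in a row signals friction
    ("ana", "password_reset", datetime(2025, 4, 2, 9, 3)),
    ("bob", "support_ticket", datetime(2025, 4, 3, 14, 20)),
    ("bob", "direct_url_retry", datetime(2025, 4, 3, 14, 25)),
]

def friction_signals(events, repeat_window=timedelta(minutes=2)):
    """Tally revealed-preference signals that SSO is hindering work."""
    counts = Counter(event for _, event, _ in events)
    # Count push prompts repeated for the same user within the window
    prompts = sorted((user, ts) for user, event, ts in events if event == "push_prompt")
    repeated = sum(
        1 for (u1, t1), (u2, t2) in zip(prompts, prompts[1:])
        if u1 == u2 and repeat_window >= t2 - t1
    )
    return {
        "support_tickets": counts["support_ticket"],
        "password_resets": counts["password_reset"],
        "repeated_push_prompts": repeated,
        "direct_url_retries": counts["direct_url_retry"],
    }

print(friction_signals(events))
</code></pre>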
<p>Another issue arising from the Jobs Myth is that its believers use it to justify proceeding with their projects, generally with the assumption that “users will learn to love it,” because they believe Steve Jobs’ ideas were so provocative and progressive that even if users didn’t know they wanted it, they’d want it in time. That is also thoroughly inaccurate. As Jobs himself <a href="http://www.businessinsider.com/steve-jobs-response-to-an-insult-is-an-example-everyone-should-follow-2017-7">stated</a>:</p>
<blockquote>
<p>“And one of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.”</p>
</blockquote>
<p>Simply because you personally believe something is valuable or important, does not mean it is. You have to understand the problems that are actually meaningful, and work backwards to how to solve them. This is not just an issue with blue teams, but also with infosec startup founders — the classic blunder of creating a hammer in search of a nail.</p>
<p>Your job is not to determine priorities in your sandboxed mindspace and convince the organization that securing something is of vital importance when it does not present material business risk. Your job is to determine priorities based on what veritably helps the organization and explain why your solution is the right one to help.</p>
<p>An extension of this fallacy is also the reverse — that security people can be presented with a valid solution to the organization’s problem but reject it because they personally don’t believe the problem is important.</p>
<p>As a real-world example, one security professional I know pushed for a specific product to be purchased in their organization. They presented the four-figure cost and offered a variety of use cases — such as simplifying the ability for engineers to implement early detection in the company’s infrastructure. They shopped around the idea to non-security groups on its usefulness and gained their buy-in as well. However, the person in charge of procurement held the personal opinion that this product type isn’t useful, and consequently pushed back on the request.</p>
<p>I see this so regularly I began calling it “Security Morals,” but I now think it should really be “Security Dogma” instead. What I mean by that specifically is that there are somewhat rigid “principles” common among security professionals that are treated as dogma. As aforementioned, there is a seemingly insatiable desire to please the Elder Infosec Deities by strictly adhering to their doctrine, even if it defies the organization’s needs.</p>
<p>In a SaaS product, if an engineer refuses to add a print button because they personally think it’s useless when you can “just” right click and select print, despite all user research indicating that users are at present confused how to print, their personal opinion will be demoted in favor of concrete evidence. If they did this regularly enough, they might be placed on a performance plan.</p>
<p>In security, similar behavior seems rewarded, as if performance is measured by how steadfast your belief is in Security Dogma. Such behavior would not be rewarded if viewing security as a product.</p>
<p>When speaking with defenders, I notice a non-trivial number bristle at the notion that they have customers — that they aren’t a neutral force above the fray, akin to the Federal Reserve. It was in thinking through why so many defenders hate the concept of having customers that my notion of Security Dogma solidified — that there are principles of security treated as incontrovertibly true and mandatory to implement regardless of the reality of the organization, what determines its fortunes, or what endangers its continuing operation most.</p>
<p>Security professionals may view themselves as a heroic knight, but to others in the organization, they might look like the Knights Templar. As <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/KnightTemplar">in the trope</a>, even minor security offenses are treated as critical, enforcing “justice” is considered paramount and non-negotiable, and an egotistical complex emerges of interpreting resistance to your “noble” intentions as evidence of principles in need of correction. While you are not, in fact, a knight, you do have the opportunity to be a hero — but not by rescuing someone who is not actually in distress because you believe they need rescuing.</p>
<p><strong>Your customer is your organization.</strong> Imagine if you attempted to order food in an app and it told you “no” because the food was insufficiently healthy, while also never explaining how it defined “healthy” food. Would you enjoy the app? Being realistic, most of us would repeatedly rage at it, even when you begrudgingly conceded that it had a point about your midnight pizza endeavors. This is more than not being likable — which isn’t strictly necessary to be effective. You need to be respected. If you are perceived as dogmatic, I promise you that you will not engender the respect you need to be effective.</p>
<p>I’m usually astonished at how little security teams work on cultivating organizational buy-in, since that’s a core part of my job as a product manager. I personally don’t believe a security program can succeed without it. This doesn’t mean everything becomes watered down, worn meaningless by only acting on things which have perfect agreement.</p>
<p>It instead means ensuring that the organization feels as if it is a stakeholder in security, that it’s along for the journey, and that security is not their adversary, but a fellow team attempting to better the organization. You could actually never implement things that other teams specifically request and still foster a sense of consensus by presenting your point of view with a sense of empathy.</p>
<p>I’ve personally struggled to practice empathy consistently. Particularly for those of us who are on the spectrum, actively seeing the world from someone else’s vantage can feel unnatural. But I assure you it’s not impossible, and your job will become substantially easier when you begin listening to people and ensuring you understand their point of view, rather than trying to dismiss theirs and ram your own point of view down their throat. <a href="https://en.wikipedia.org/wiki/Active_listening">Active listening</a> is one of the most useful life skills you can develop.</p>
<p>Cultivating customer empathy is the first step you should take in your transition to treating security as a product. One example method is <a href="https://en.wikipedia.org/wiki/5_Whys">the 5 Whys</a>: dig deeper into why something is a problem and identify its root causes. For example:</p>
<ul>
<li>“Why do you not want to implement 2FA for Salesforce?”</li>
<li>“Why do you not want to add a step for salespeople to login to Salesforce?”</li>
<li>“Why can’t salespeople afford to take the additional time?”</li>
<li>“Why do salespeople need to log their call notes immediately after a call?”</li>
<li>“Why do salespeople need to transfer notes from Google Docs to Salesforce?”</li>
</ul>
<p>The root cause is arguably that there is friction between the notes salespeople take during a call and where they are meant to log the call. The solution might be to integrate Google Docs into Salesforce, so the user has to log into only one service during the course of their work — which makes implementing 2FA more palatable. As in this case, you may hear answers that do not seem pertinent to the security team, as they are squarely in the business domain — but your role is to connect the dots between business operations and the security risks that threaten them.</p>
<p>I strongly believe your highest value as a security professional lies in empathizing with the organization’s business risk and identifying where digital risks arise that amplify or solidify business risk. Your customer knows what endangers them — but they do not know how that danger manifests through digital means.</p>
<p>Once you feel you truly understand where customers are struggling, you can begin architecting your vision. Consider your vision for your security program as its story that will unfold over time. Themes serve as the heart of stories, the foundation for the central idea the author is attempting to convey. The plot — or the events that unfold within the story — supports the theme and carries the story towards its goal.</p>
<p>In a security as product model, you will also have themes. Those themes will also have plots — courses of work that drive towards the stated goal and the actions you need to take within those courses of work. Before defining any of the work, however, you have to envision the overarching story. Few people are naturally proficient storytellers, but you can practice by expressing your program’s story through a caricatured, fairy-tale lens.</p>
<hr>
<p>At the dawn of the year, our band of heroes embarked on their quest in Engineersville. They heard the cries from the local farmers of meager yields and slow harvests due to bugs. It would not suffice simply for the heroes to kill all bugs as they appeared — after all, there are many quests elsewhere to complete. They knew their noble purpose was now to help the farmers ensure a bountiful, efficient harvest that they could sustain on their own.</p>
<p>Our heroes’ first goal was to reduce the amount of time it took to squash bugs spotted in the fields, as the bugs could hurt the harvest if they were left alive. Come spring’s first blossom, our heroes transitioned to their second goal — ensuring fewer bugs were being introduced to the crops. They helped the farmers map out how their field architecture would look ahead of planting to determine where bugs could spring up.</p>
<p>As summer began sizzling, they toiled to ensure that their tools could be used by the locals as well, beginning the work on crafting one master tool the locals could use that would automatically determine which specialized tool was best for reducing bugs in the type of fields being sown.</p>
<p>As the first leaves of autumn fell, our heroes tested this magical tool among a small group of farmers, carefully analyzing results and finally releasing it to all locals so that they could begin their next year empowered to have a bug-free harvest. This meant the heroes would have to do even less work of patching and helping locals tend to their fields, allowing them to focus on new quests.</p>
<p>(A wizard hat is optional in crafting stories, but recommended.)</p>
<hr>
<p>Are there any security principles truly sacrificed in this story? The overarching goal is to reduce the number of vulnerabilities in production. As in this story, there may be multiple themes that are part of the same story — reducing the mean time to fix vulnerabilities, adding threat modelling in the design phase to introduce fewer bugs, and creating an automated tool that abstracts multiple security products away from the engineer so they can test their code easily and efficiently during development.</p>
<p>The goal is still fundamentally a security goal, but the themes show customer empathy. The engineers want minimal friction in their workflows. The closer you can get to “push button, get security,” the more productive they will be. Your team, as a stakeholder, is also not ignored. The two initial themes are enablers of the longer-term goal, taking workload off your team to support progress towards an even more efficient solution that will reduce workloads further.</p>
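<p>To make “push button, get security” slightly less abstract, here is a deliberately minimal sketch of what the dispatch logic of such an orchestrator might look like. The file markers and tool pairings below are illustrative assumptions, not a prescription — substitute whatever your organization actually uses:</p>
<pre><code>import subprocess
from pathlib import Path

# Illustrative mapping from a project "marker" file to the scanner we assume fits it.
SCANNERS = {
    "package.json": ["npm", "audit"],
    "requirements.txt": ["pip-audit", "-r", "requirements.txt"],
}

def run_security_checks(project_dir="."):
    """Detect the project type and run the matching scanners."""
    project = Path(project_dir)
    ran_any = False
    for marker, command in SCANNERS.items():
        if (project / marker).exists():
            print(f"Detected {marker}; running: {' '.join(command)}")
            subprocess.run(command, cwd=project, check=False)
            ran_any = True
    if not ran_any:
        print("No known project markers found; nothing to scan.")

if __name__ == "__main__":
    run_security_checks()
</code></pre>
<p>The value is less in the specific tools and more in the engineer only ever needing to run one command, while your team curates what sits behind it.</p>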
<p>It is essential to view your program as a full story and not be disheartened that your end-goal cannot be accomplished immediately. Setting themes and dreaming up your vision can inspire you so fully that you find yourself with a cornucopia of ideas. Unless you are exceptionally fortunate, your team will not have the resources to pursue every theme and must prioritize among them.</p>
<p>Prioritization is one of those tasks that’s very easily said — you “just” rank which themes are most important to you — but is formidable in practice. When I build roadmaps in my work, there is often an excruciating “this or that” decision that requires you to push back work on something which still would absolutely benefit customers… just not as much as the other theme.</p>
<p>My first word of caution to you is to avoid prioritizing themes based on what you <em>feel</em> is most important. A war of opinions is one in which everyone loses — and that’s ultimately what you will be waging if you use your personal views as the basis for your prioritization, unless you prefer a dictatorship style.</p>
<p>Instead, you must again return to the perspective of your customer. While you personally may believe the theme of “reducing the volume of emails with malicious attachments” is the most important one, your organization may have their deployment frequency and lead time metrics hampered by an arduous appsec process, which more tangibly affects business performance.</p>
<p>How do you differentiate which themes to prioritize? You collect and analyze data — both qualitative and quantitative. A good engineering program will be tracking metrics such as availability, customer tickets, deployment frequency, error rates, lead time, <a href="http://kpilibrary.com/kpis/mean-time-to-detect-mttd-2">mean time to detect (MTTD)</a>, and <a href="http://kpilibrary.com/kpis/mean-time-to-repair-mttr">mean time to repair or recovery (MTTR)</a>. Ask engineering how those metrics are being impacted by security requirements. Ask engineers how they would explain some of the mutual challenges you face — you may be surprised at how aligned DevOps engineering teams are with security (but that is a topic for another time).</p>
<p>If you aren’t tracking metrics on your security program, you should be, as it’s essential for measuring progress in a product. This includes your own MTTD and MTTR — such as how quickly you remediate product security tickets. It should also include measuring the frequency of configuration management changes, such as firewall rule updates, patching, hardening — anything to measure the tempo of your program.</p>
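<p>As a minimal sketch of what tracking your own tempo could look like — assuming you can export remediation tickets with opened and resolved timestamps; the ticket data and field names below are purely illustrative — MTTR is just the average of those open-to-resolve durations:</p>
<pre><code>from datetime import datetime
from statistics import mean

# Hypothetical export of security remediation tickets; in practice these
# would come from your ticketing system's API or a CSV export.
tickets = [
    {"opened": "2018-03-01T09:15", "resolved": "2018-03-02T11:00"},
    {"opened": "2018-03-04T14:30", "resolved": "2018-03-04T16:45"},
    {"opened": "2018-03-07T08:00", "resolved": "2018-03-09T10:30"},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

# Mean time to repair (MTTR) in hours: the average of open-to-resolve durations.
mttr_hours = mean(hours_between(t["opened"], t["resolved"]) for t in tickets)
print(f"MTTR: {mttr_hours:.1f} hours")
</code></pre>
<p>The same shape of calculation works for MTTD or for the cadence of firewall rule changes — what matters is that you measure the trend, not that you use any particular tooling.</p>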
<p>You can also measure how resources — specifically your security team’s time — are being used. Are they spending half of their time extinguishing fires? Is a third of their day dedicated to configuring your SIEM? Do they lose a week each month asking routine questions for threat modelling exercises? These represent opportunities for automation, as there is benefit in reducing the cost of your recurring security tasks and freeing up resources for more impactful streams of work. You should also poll how they want to spend their time, to ensure you retain your talent and avoid needing to worry about the “pipeline problem” in the first place.</p>
<p>Beyond this, you also need to quantitatively measure how your organization perceives the efficacy of your program. For example, conduct the equivalent of <a href="https://en.wikipedia.org/wiki/Net_Promoter">NPS surveys</a> for the security organization, where teams with whom security interacts rate how satisfied they are with the security team. I’d recommend keeping the NPS anonymous with the option of entering a comment to give more detail. After all, security people can sometimes come across as a bit intimidating, and you want to find out the truth.</p>
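<p>If it helps to make the survey arithmetic concrete, here is a tiny illustrative calculation (the scores are invented): NPS is simply the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6):</p>
<pre><code># Illustrative 0-10 satisfaction scores from an anonymous internal survey
scores = [9, 10, 7, 3, 8, 10, 6, 9, 2, 10]

promoters = sum(1 for s in scores if s >= 9)       # scores of 9-10
passives = sum(1 for s in scores if s in (7, 8))   # scores of 7-8
detractors = len(scores) - promoters - passives    # scores of 0-6

# Net Promoter Score: % promoters minus % detractors, ranging from -100 to +100
nps = 100 * (promoters - detractors) / len(scores)
print(f"Internal security NPS: {nps:+.0f}")
</code></pre>
<p>Tracked quarter over quarter, even a crude number like this gives you a trend line for whether the organization’s perception of your team is improving.</p>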
<p>Quantitative data won’t necessarily paint the entire picture, however. Qualitative data helps fill in detail and may even expose concerns that are difficult to discern from quantitative data. Talk with a selection of individuals across different roles and levels in your organization to hear their feedback on how security can better meet their needs and work with them. You should also ask people on your team, from junior to senior, to give their feedback as well. Again, anonymous surveys can be your friend here in order to promote honesty.</p>
<p>My security fairy tale above could be an example of hitting the nexus of what your data is telling you, and thus rising in priority. Your engineers are dissatisfied with having to wrestle with security testing products themselves, and their lead time to deploy is suffering. Half of your product security team’s time is spent on patching and last-minute security testing before GA, because engineering currently finds it too onerous to conduct earlier in the process. If you have three product security people making $100,000 each, you are spending $12,500 per month on something your customer doesn’t like anyway. And perhaps as a last data point, your product security team has expressed the desire to do more research and build custom tools.</p>
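<p>The back-of-the-envelope math behind that $12,500 figure is worth making explicit, since it is exactly the kind of number that gets attention outside the security team (the salary and time-split values are, of course, illustrative stand-ins for your own data):</p>
<pre><code>team_size = 3
annual_salary = 100_000        # salary per person, per the example above
fraction_on_toil = 0.5         # half of the team's time on patching and last-minute testing

monthly_team_cost = team_size * annual_salary / 12        # $25,000
monthly_toil_cost = monthly_team_cost * fraction_on_toil  # $12,500
print(f"Monthly spend on work your customer dislikes: ${monthly_toil_cost:,.0f}")
</code></pre>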
<p>A project to build a custom tool that lets engineers self-serve security testing in the development process and to standardize a threat model for the design stage would tangibly improve the data points you have collected. It also happens to be straightforward to measure, which makes the likelihood of success even greater, since you can more easily determine what more needs to be done to drive the story.</p>
<p>There are also a few economic angles to consider when prioritizing. First is opportunity cost. By supporting legacy tech with time and money, from what else are you taking away resources? Some of the CISOs I most admire share — coincidentally or not — the trait of thinking in terms of monetary costs of work. This importantly includes pricing in the “total cost” of a security product, which includes the amount of maintenance, tuning, tweaking, and troubleshooting that your team will have to perform on an ongoing basis. Any expenditure of effort by your security team on an action is directly taking away investment into another action.</p>
<p>Second is the sunk cost fallacy. Just because you’ve invested a lot of time and money into something already doesn’t mean it’s still worth pursuing. Throwing strong resources at weak purposes will degrade your product. As in the aforementioned example of opportunity cost, if a legacy security product requires substantial ongoing maintenance to perform as you need, prioritizing a theme of moving to a newer, less burdensome product might be necessary. While this may create a short-term resource sink, it will allow the plot in your story to ultimately move forward.</p>
<hr>
<p>You now feel confident in the themes you prioritized — you know what your story will tell, and in what order. However, this story is a shared one: any security initiative inherently affects the rest of the organization. Your customer must be brought along on your journey, and feel like they have a stake in your story.</p>
<p>When you’re soliciting feedback from other people, it’s an opportunity to grow the working relationship — and ultimately engender trust. Rather than nixing their ideas on the spot if you don’t think they’re worthwhile, use language like, “I hadn’t considered that — my team will have to look into it.” You don’t want to promise that all suggestions will be implemented, or you’ll end up with a lot of disappointed people, but you do want to make people feel as if they’ve been heard. And, if you do end up implementing something they suggested, or a use case they emphasized, they’ll be delighted.</p>
<p>Be transparent with your story. To start, determine who the right stakeholders are in each organization and ask if you can bring by coffee and treats while you present the story to them. Ask them what they think of it — are there any assumptions with which they disagree? Are there any risks that haven’t been captured? How do they feel it will impact them? Ask open-ended questions so as not to guide them. Before trust is established, phrasing a question as “How will this help you or not?” may compel them to be supportive rather than expressing the full range of their impressions.</p>
<p>As someone working on a product with a third party risk management use case, I can attest that no matter your industry, some of your organization’s prospective customers are asking sales about your security practices. Presenting your vision and progress towards that vision gives them a differentiator to reference, even if far removed from the primary use case of whatever your organization is offering.</p>
<p>Connect with product managers or whoever is designing whatever your organization offers. Not only will you benefit from their feedback — whether on prioritization or on determining the “how” — but it will inspire them to keep you abreast of their own roadmaps. Having security included earlier in the product process will only serve to benefit the entire organization.</p>
<p>As far as how you present your story, some sort of visual aid is generally advisable, rather than purely speaking to it. If you’ve seen <a href="/speaking/index.html">my slide presentations before</a>, you can likely guess that I expend substantial effort on how my ideas are visually presented. As a product manager, the slides I create describing my project are not nearly so sparse and beautiful — but I do always consider what I want the listener to take away and leave the rest to voiceover rather than text on the slide.</p>
<p>Bear in mind that while technical meat may sound delicious to you, it can be a repellent to colleagues elsewhere in the business. Your goal is to cultivate consensus around your themes — around the journey of your security program, not the intricate details of the plot. You need to express, in accessible terms, what the theme is, the value it brings to the organization, and any risks or considerations that will be shared challenges across the organization.</p>
<p>Returning to our fairy tale, a slide deck could be presented as follows:</p>
<ul>
<li>Our vision is to reduce the number of vulnerabilities in production</li>
<li>Our goals are to reduce the lead time to deployment, mean time to patch, and security team time spent on application testing</li>
<li>The primary benefit to Organization, Inc. is less friction for engineers to test for security vulnerabilities, allowing for our products to be released more quickly</li>
<li>The secondary benefit to Organization, Inc. is reducing the cost of security testing, helping with scalability as well as freeing up resources to accomplish other security goals</li>
<li>We will need to partner with the Engineering team to understand workflows and ensure a security testing orchestrator is deployed appropriately into workflows</li>
<li>We will need to partner with PM to introduce threat modelling during the design phase, which will require a near-term time tradeoff for longer-term cost reduction</li>
</ul>
<p>You begin by inspiring stakeholders, then end with what you need from them to accomplish the vision. This has the added benefit of putting those requirements on their radar in advance of when they will need to execute upon them, resulting in quicker turnaround times for you.</p>
<hr>
<p>By the end of the prioritization process, I hope you feel emboldened by the knowledge of which themes are most important to accomplish, and in which order. Now is the time for execution: defining which steps need to be taken towards your goal.</p>
<p>If you have a program manager, this is exactly where to loop them in. If you do not, or cannot borrow one from another team for advice, then please do not pretend to be one. That is, you should not assume you understand the abilities and constraints of your team members and assign tasks to them without checking with them first.</p>
<p>While you can lead the charge on the “what,” you must include others in figuring out the “how.” Look to the real Steve Jobs, not the fallacious Steve Jobs Myth, and recall how he trusted the project lead to determine the underlying technical detail so long as his requirements were met. Depending on the size of your team, there will perhaps only be one or two individuals able to take on tasks. Where I often see security managers fail is in vacillating between the extremes of giving sparse direction and minimal feedback, and delving too far into the weeds.</p>
<p>Through the process I outlined, you already defined the requirements of the project through the need to present across the organization — so there isn’t necessarily much more work to be done on your end, save for clarifying requirements on request.</p>
<p>If you want to really make an impact, begin tackling your security debt. You may be familiar with technical debt, which is when quality is sacrificed for speed, typically with the false promise of “we’ll fix it later.” Ironically, by not treating security as a product, you are vastly more likely to accumulate security debt as part of your crusade to integrate your gospel.</p>
<p>Embracing security as a product involves treating it almost as a living thing, one which decays and requires nurturing to stay alive. For each “shortcut” you take, are you considering what challenges will be created later? Did you document why you can’t address it effectively today, for example because there will be a superior way to fix the problem if you wait? How frequently are you returning to those shortcuts and paying down your debt?</p>
<p>There is power in ownership. The <a href="https://en.wikipedia.org/wiki/Endowment_effect">endowment effect</a> is a finding from behavioral economics that people ascribe more value to things they own, far more than they “rationally” should. Your security program being a product means you own that product — it is <em>your</em> vision and <em>your</em> story. In a fashion, you can consciously nudge yourself into a mindset that will inherently encourage you to take better care of your security program.</p>
<p>I stress this because a lamentable consequence of the nihilism I see in defenders is they cease to care about the security of their organization on a time horizon that surpasses their planned tenure. With the tumultuous turnover of security talent in most organizations, if the strategy is Security Dogma, devotion to it dies when the believer leaves, and a new messenger of the Elder Infosec Deities comes in to spread their own interpretation of the Dogma.</p>
<p>By creating your vision for the security program, you describe a map for the security program’s journey. An incoming hero unburdened by zealotry can see where they are in the journey and the end destination. It’s unlikely that every stop on the journey will be entirely discarded unless there is scant evidence for its value. What is more likely is that the “how” will change most drastically, while the overarching quest — your vision, and to some extent, your legacy — remains intact.</p>
<p>The product process even aids you when switching organizations. What is changing is the end customer — not the process. Even so, just as when I helped women pick out dresses, there will be customers with characteristics in common to each other, rendering your maps meaningful beyond the initial customer.</p>
<p>And if you want to rise through the ranks, the practice of articulating a clear vision and fostering consensus will only serve to demonstrate competency. It can demonstrate to your executives or your board of directors that you understand them as a customer and will nourish their trust in your ability to deftly manage risk in a way that supports their success.</p>
<hr>
<p>This is the fairy-tale ending to my own vision I shared with you today — that you can ride into the sunset knowing you were a hero in the way that helped the realm prosper. The fanatics who sought to serve spurious justice will never reach their dream of security nirvana, wailing relentlessly into the wind about their persecution at the hands of locals wanting to prosper.</p>
<p>Inspire your organization with your story of how security can allow it to thrive and make them feel they have a part to play in it. Your band of heroes can and should include colleagues outside of security, who will be far more willing to aid you in your quest — however long or arduous — if you take the time to discuss its purpose with them from a position of empathy.</p>
<p>Security is a product, and reluctance to embrace that is like rejecting scientific evidence in deference to zealotry. There is a way forward that does not rely on worshiping the Elder Infosec Deities through enforcing Security Dogma, which is, in fact, the path of least resistance — despite being less dictatorial.</p>
<p>Quixotism in the name of security purity will crumble as a foundation for a “Blutopia,” but a pragmatic approach, the support of devoted followers within your organization, and a visionary quest just might be the right start to our collective journey.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>2018 Cyber Security Predictions</title>
            <link>https://kellyshortridge.com/blog/posts/2018-cybersecurity-predictions/</link>
            <pubDate>Thu, 21 Dec 2017 16:52:25 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2018-cybersecurity-predictions/</guid>
            <description>Fed up with ridiculous infosec predictions for the upcoming year, I decided to aggregate them all and use the power of Markov Chains to generate my own list. What follows is the result, lightly edited solely for readability. I hope to be pioneering the next-gen AI-powered thought leadering market segment.
In 2018, security. Cyber security people will die. We’ve long debated where security people will die. We expect this lucrative trend to continue through 2018.
2017 predictions were fake, but we received the word. Security predictions for 2018 showcase a myriad of challenges that can be exploited. What’s more, they will pose a significance (the computing, the significance). But we rarely think as well about the potential for net new, impactful cyber events. The world seems less stable, and a software library is another international data breach. One could make it a theoretically important question: are computers Internet connectivity?
Companies can’t count on the internet. We knew full well that this was the near future. It’s simply a “good” business environment of valuable data, data that allows them to move into 2018. Any of their data is one thing to blame, and security will be front and center. Data breaches are from human error, yet traditional hacking is on critical data.
We are at the rising edge of a return to securing applications instead of building complex, expensive and defensive strategies for APT attacks. These breaches that plague organizations today are primarily the information security community’s ability to script, automate, scale, and more efficiently analyze the mass quantities of data involved in cyberattacks for more than a decade.
Organizations will continue to be a popular hacking method. Our children face an amazing future of gadgets, services, and experiences, but they also face tremendous growth of the marketspace and a necessity for organizations. Software will help overcome cultural resistance and arm organizations. The growing awareness is due to significant monetary gains and because problems are always easier to solve when security.
Reality is only automation.
Prediction #1: THE DARK BUT LUCRATIVE TREND IN RANSOMWARE WILL CONTINUE TO EXPLODE IN THE CLOUD The dark but lucrative trend in ransomware will emerge from the shadows and escalate, directly impacting the legal challenge of IT professionals, which will deepen. With this rise in ransomware solutions, businesses will exploit models that will ignite a bit of fun! While we predicted increases in ransomware last year, companies scrambled to update vision and strategy against each other.Ransomware protects expensive and often inefficient perimeter defenses. FAKEAV and ransomware — like peanut butter and jelly or Thelma and Louise, the two go together. The integration has been to encourage the use of human behavior-directed attacks in the war on cybercriminal technology and help them find a better way for vulnerabilities to require security prediction.
While hackers are already heavily sanctioned, with the rise of populism, 44% of organizations will escalate to a very scary pitch, with each side threatening to go public — exposing you to the risk of huge fines — unless you pay the ransom. For hackers, ransomware will attack each other for years. The Equifax hackers will demand $2.6 million USD — even for a target whose network of seemingly unlimited endpoints contains a massive Equifax breach.
Prediction #2: BITCOIN WALLET EXPLOITS WILL RESULT IN ANOTHER MAJOR DDOS ATTACK AGAINST CRITICAL INFRASTRUCTURE In 2018, the cryptocurrency escalates. The value of cryptocurrency exchanges and the age of them becomes a top priority for organizations to get the basics of cyber security prediction. They’ve become the payment method of choice for cyberattacks with security experts. Blockchain technology makes them attractive to hackers, as opposed to PCs.The industry will ultimately find a cheap, dirty, and effective way to monitor sugar levels, and blockchain technologies will increasingly come under mounting pressure to better combat the new threat that will emerge in 2018. Vendor-agnostic implemented blockchain technology underpins the transaction ledgers used by most cryptocurrencies and will increase, driven by third-party security policies that will still lack teeth.
Our prediction of what many deem to be past abuses that came to light with the blockchain technology has started making serious financial impact. 2018 will be the year of abhorrent sexist behavior by powerful tools and those which manage global marketing campaigns. Next year’s newfound love will be forced to only be not-authorized.
Automation will let BTC wallets be hacked and remotely controlled. As with any political drama of the past year, Gartner forecasts 8.4 billion connections to cryptocurrency exchange users’ wallets and exploits of weak authentication, but only when risk is high. For as little as US$ 5, you can actually pay someone to do the attack for you! This is just one issue the GDPR aims to resolve for European citizens.
Prediction #3: ENFORCEMENT ON SMART DEVICES OR SUPPLIERS WILL FAIL Following the trend in 2018, the IoT world will continue to grow. This will become more widely accepted, and will overtake AI in VC funding, and security innovation will rapidly escalate to include technologies that drive other smart device hacks. Many IoT technologies lack protections to ensure devices cannot be exploited by the cyberspace dark forces.
The IoT space gets even messier before it adopts a common framework. Given the difficulty of managing IoT sensors in the absence of standards, most solutions remain proprietary and geared toward solving very purpose-driven functions. Expect 2018 to be the year that your device is about to be confiscated.
Hackers who want to gain control over devices are to materialize in 2018 and organizations, including freelance groups hired by the government, will administer DDoS attacks and cyber warfare. Services providers, including governments, will impact things (IoT) connectivity to conduct attacks. It will be the start of a layered addition that targets their hardware chips, which may even be publicly available on the “open-market,” resulting in proliferating worms to infiltrate many IoT deployments.
Vigilante hacking smart meters and installing fileless malware attempts have begun. Major car manufacturers are not yet routinely building security into their target. Will we see self-driving cars seriously hacked? Amazon Echo devices submitted into our crystal ball to manage realization tasks will continue to grow through unpatched new vectors. Drones are used to create serious disruption of things, to say, open a garage door to legitimate organizations. The boardroom needs access to these malicious devices, so as not to have to fend off cyber security gaps using pirated social media spamming.
Prediction #4: TECH VS GOVERNMENT — ROUND II We predict increases in the United States launching cyber attacks against other nations. This offers very little incentive towards limiting the Cold War. If they can find a weak link in a system which already established that cyber-risk is now a prominent red exclamation mark in a triangle, we expect to see supply chain issues.
Fake news comes into play when GDPR gets imposed. It’s hard to argue that fake news may or may not have influenced the 2016 presidential election. When it comes to grips, the US elections are building secure fraudsters. The fake news triangle consists of: motivations of proper mobile devices, freelance groups hired by governments, and stealing information projects. A reminder is just around the corner with the US mid-term elections in the aforementioned battle between authentic and fake. Expect lobbyists, foreign and domestic, to push fake news to further their agenda.
International governments and vulnerability of data is embedded into business requirements, and overall levels of social information will accelerate. Singapore has recently been tasked with protecting people, data, intellectual property, stockholder loyalty, and brand protection. In 2018, Africa will emerge to help enterprises, which when left unsecured, can become slave nodes. British security evolves in areas such as China and its role in a free society. Each area alone could make 2018 an interesting year.
Malaysia has also recently analyzed this data as quickly as possible. Malaysia and Indonesia are already looking for alternatives to SSNs, including machine learning that lets computers emulate this to meet the ground up. Alternatives to SSNs could include the defense-in-depth strategy that address the systemic vulnerabilities in the user, coming from devices built on blockchain-related cyber security numbers. Action: Volunteer your time to fully eradicate SSNs from the credit process.
Prediction #5: PREDICTION IS… GDPR Prediction: the European Union (EU) will become untenable. The goal of GDPR is to harmonize data so privacy watchdogs can interfere with businesses worldwide. A group known as the ‘Cutting Sword of Justice’ took credit for GDPR compliance, so companies outside the European Union (EU) will face fines of up to 20,000,000 EUR or up to 4% of their total security. They need to assess whether they will ignite discussions on a politicized role beyond our wildest dreams.
Legislation will mean artificial intelligence in the first regulation (GDPR) becomes enforced. This rule would disable biometrics or a company’s data via the “troll farm” behind Twitter. Ransomware will still be outnumbered by the regulation’s impact on their operations, and in turn, lead to an increase in automated toolsets to drive success. Data regulations in developing markets on the Dark Web offer a sophisticated nature of the user’s physical location, all contacts, or access to their data.
Again, don’t take GDPR seriously or experience it by using machine learning engines.
Prediction #6: CYBER RECYCLING AI is a tool that can and will be exploited much more than just a convenient way to learn about today’s weather or get the latest sports scores. AI is a tool that can show genuine concern for protecting the privacy debate. AI will also open the way to new vulnerabilities. AI will permit attacks to scale far beyond the techniques that are frequently used. Insurance companies will continue to target holes in machine learning, AI, analyzing our smart devices, and even multi-factor solutions.Machine learning may also be a powerful tool, and those wielding it will believe that it should not completely take over security mechanisms. It should be considered an additional security layer incorporated into an in-depth defense strategy, and not a silver bullet.
Most still have not seen widespread advertising to deceive machine learning. If a manipulated piece of data or wrong command is sent to an ERP system, machines will be liable to sabotage processes by carrying out cyberattacks against individuals as opposed to being bombarded with false positives. 30% to 40% of the war on cybersecurity experts is with machine learning, selling information, and detecting Internet infrastructures. Even though that analysis may include machine learning-based authentication, it brings with it significant growth in company indicators, driven by nationalistic tendencies.
Machine learning and managed security will move away from detect-and-respond alerts and data. We can spot patterns or those who have superficial attack components. Furthermore, advances with pattern recognition supporting the Internet infrastructure can also see more suppression systems surrounding software at little or no defense. It allows proactively managing the individual to become an essential part of SecOps, and direct sales persons need to be bombarded with false positives.
Prediction #7: CORPORATE SECURITY BUDGETS DON’T PRODUCE INCOME We will see increased adoption of cyber security frameworks. Cloud Access Service Brokers (CASB) and other cloud security frameworks have been acquiring certificates that make UEFI an attractive target for cyberattacks. A prime example is Windows 10.
Next-generation security incident response exercise projects will face lawsuits. Sadly, it’s simply to notify them, and report to the fire department. As such, it is not about embedding cybersecurity practices, and large companies are not secure by design. Users and enterprises are advised to routinely check for software.Managed security processes will deploy a defense-in-depth strategy. Unfortunately, GDPR will provide accurate detection and Response (MDR) services, including techniques such as advanced phishing and social media to help stories spread rapidly. Action: Try shopping at the reconnaissance phase before it’s too late.
Previous attacks are the gift that continues to become an entry point to the central networks. We predict that these networks (which base their success on quantified metrics like ‘daily active users’ and cyber behaviors at the human point) are growing. Numerous readily available fortresses are not sufficient. A hoard of locusts will control systems daily.
In 2018, we will be protected by HTTPS. Those not using HTTPS inspection/decryption are at risk. TLS 1.2 is widely available to anyone who feels the risk level oversight and the human-centric root of risk. Once red teams incorporate into an in-depth defense strategy, not a silver bullet, they should be disabled on all website traffic using HTTPS by default. We discovered that the customers’ red teams were conducting penetration testing, which has repercussions for the industry marketing hype.
Thanks to Andrew Ruef
</description>
            <atom:content type="html"><![CDATA[<p><em>Fed up with ridiculous infosec predictions for the upcoming year, I decided to aggregate them all and use the power of Markov Chains to generate my own list. What follows is the result, lightly edited solely for readability. I hope to be pioneering the next-gen AI-powered thought leadering market segment.</em></p>
<p><img src="/blog/img/cyber-dynomite.png" alt="cyber dynomite"></p>
<p>In 2018, security. Cyber security people will die. We’ve long debated where security people will die. We expect this lucrative trend to continue through 2018.</p>
<p>2017 predictions were fake, but we received the word. Security predictions for 2018 showcase a myriad of challenges that can be exploited. What’s more, they will pose a significance (the computing, the significance). But we rarely think as well about the potential for net new, impactful cyber events. The world seems less stable, and a software library is another international data breach. One could make it a theoretically important question: are computers Internet connectivity?</p>
<p>Companies can’t count on the internet. We knew full well that this was the near future. It’s simply a “good” business environment of valuable data, data that allows them to move into 2018. Any of their data is one thing to blame, and security will be front and center. Data breaches are from human error, yet traditional hacking is on critical data.</p>
<p>We are at the rising edge of a return to securing applications instead of building complex, expensive and defensive strategies for APT attacks. These breaches that plague organizations today are primarily the information security community’s ability to script, automate, scale, and more efficiently analyze the mass quantities of data involved in cyberattacks for more than a decade.</p>
<p>Organizations will continue to be a popular hacking method. Our children face an amazing future of gadgets, services, and experiences, but they also face tremendous growth of the marketspace and a necessity for organizations. Software will help overcome cultural resistance and arm organizations. The growing awareness is due to significant monetary gains and because problems are always easier to solve when security.</p>
<p>Reality is only automation.</p>
<hr>
<h2 id="prediction-1-the-dark-but-lucrative-trend-in-ransomware-will-continue-to-explode-in-the-cloud">Prediction #1: THE DARK BUT LUCRATIVE TREND IN RANSOMWARE WILL CONTINUE TO EXPLODE IN THE CLOUD</h2>
<img style="float: right; max-width:50%; padding: 5px" src="/blog/img/trinity-0day.gif" alt="Trinity's 0day in the Matrix">
The dark but lucrative trend in ransomware will emerge from the shadows and escalate, directly impacting the legal challenge of IT professionals, which will deepen. With this rise in ransomware solutions, businesses will exploit models that will ignite a bit of fun! While we predicted increases in ransomware last year, companies scrambled to update vision and strategy against each other.
<p>Ransomware protects expensive and often inefficient perimeter defenses. FAKEAV and ransomware — like peanut butter and jelly or Thelma and Louise, the two go together. The integration has been to encourage the use of human behavior-directed attacks in the war on cybercriminal technology and help them find a better way for vulnerabilities to require security prediction.</p>
<p>While hackers are already heavily sanctioned, with the rise of populism, 44% of organizations will escalate to a very scary pitch, with each side threatening to go public — exposing you to the risk of huge fines — unless you pay the ransom. For hackers, ransomware will attack each other for years. The Equifax hackers will demand $2.6 million USD — even for a target whose network of seemingly unlimited endpoints contains a massive Equifax breach.</p>
<hr>
<h2 id="prediction-2-bitcoin-wallet-exploits-will-result-in-another-major-ddos-attack-against-critical-infrastructure">Prediction #2: BITCOIN WALLET EXPLOITS WILL RESULT IN ANOTHER MAJOR DDOS ATTACK AGAINST CRITICAL INFRASTRUCTURE</h2>
<img style="float: right; max-width:50%; padding: 5px" src="/blog/img/crypto-mining.gif" alt="Doge mining dogecoin">
In 2018, the cryptocurrency escalates. The value of cryptocurrency exchanges and the age of them becomes a top priority for organizations to get the basics of cyber security prediction. They’ve become the payment method of choice for cyberattacks with security experts. Blockchain technology makes them attractive to hackers, as opposed to PCs.
<p>The industry will ultimately find a cheap, dirty, and effective way to monitor sugar levels, and blockchain technologies will increasingly come under mounting pressure to better combat the new threat that will emerge in 2018. Vendor-agnostic implemented blockchain technology underpins the transaction ledgers used by most cryptocurrencies and will increase, driven by third-party security policies that will still lack teeth.</p>
<p>Our prediction of what many deem to be past abuses that came to light with the blockchain technology has started making serious financial impact. 2018 will be the year of abhorrent sexist behavior by powerful tools and those which manage global marketing campaigns. Next year’s newfound love will be forced to only be not-authorized.</p>
<p>Automation will let BTC wallets be hacked and remotely controlled. As with any political drama of the past year, Gartner forecasts 8.4 billion connections to cryptocurrency exchange users’ wallets and exploits of weak authentication, but only when risk is high. For as little as US$ 5, you can actually pay someone to do the attack for you! This is just one issue the GDPR aims to resolve for European citizens.</p>
<hr>
<h2 id="prediction-3-enforcement-on-smart-devices-or-suppliers-will-fail">Prediction #3: ENFORCEMENT ON SMART DEVICES OR SUPPLIERS WILL FAIL</h2>
<p>Following the trend in 2018, the IoT world will continue to grow. This will become more widely accepted, and will overtake AI in VC funding, and security innovation will rapidly escalate to include technologies that drive other smart device hacks. Many IoT technologies lack protections to ensure devices cannot be exploited by the cyberspace dark forces.</p>
<p>The IoT space gets even messier before it adopts a common framework. Given the difficulty of managing IoT sensors in the absence of standards, most solutions remain proprietary and geared toward solving very purpose-driven functions. Expect 2018 to be the year that your device is about to be confiscated.</p>
<p>Hackers who want to gain control over devices are to materialize in 2018 and organizations, including freelance groups hired by the government, will administer DDoS attacks and cyber warfare. Services providers, including governments, will impact things (IoT) connectivity to conduct attacks. It will be the start of a layered addition that targets their hardware chips, which may even be publicly available on the “open-market,” resulting in proliferating worms to infiltrate many IoT deployments.</p>
<p>Vigilante hacking smart meters and installing fileless malware attempts have begun. Major car manufacturers are not yet routinely building security into their target. Will we see self-driving cars seriously hacked? Amazon Echo devices submitted into our crystal ball to manage realization tasks will continue to grow through unpatched new vectors. Drones are used to create serious disruption of things, to say, open a garage door to legitimate organizations. The boardroom needs access to these malicious devices, so as not to have to fend off cyber security gaps using pirated social media spamming.</p>
<hr>
<h2 id="prediction-4-tech-vs-governmentround-ii">Prediction #4: TECH VS GOVERNMENT — ROUND II</h2>
<p>We predict increases in the United States launching cyber attacks against other nations. This offers very little incentive towards limiting the Cold War. If they can find a weak link in a system which already established that cyber-risk is now a prominent red exclamation mark in a triangle, we expect to see supply chain issues.</p>
<p>Fake news comes into play when GDPR gets imposed. It’s hard to argue that fake news may or may not have influenced the 2016 presidential election. When it comes to grips, the US elections are building secure fraudsters. The fake news triangle consists of: motivations of proper mobile devices, freelance groups hired by governments, and stealing information projects. A reminder is just around the corner with the US mid-term elections in the aforementioned battle between authentic and fake. Expect lobbyists, foreign and domestic, to push fake news to further their agenda.</p>
<p>International governments and vulnerability of data is embedded into business requirements, and overall levels of social information will accelerate. Singapore has recently been tasked with protecting people, data, intellectual property, stockholder loyalty, and brand protection. In 2018, Africa will emerge to help enterprises, which when left unsecured, can become slave nodes. British security evolves in areas such as China and its role in a free society. Each area alone could make 2018 an interesting year.</p>
<p>Malaysia has also recently analyzed this data as quickly as possible. Malaysia and Indonesia are already looking for alternatives to SSNs, including machine learning that lets computers emulate this to meet the ground up. Alternatives to SSNs could include the defense-in-depth strategy that address the systemic vulnerabilities in the user, coming from devices built on blockchain-related cyber security numbers. Action: Volunteer your time to fully eradicate SSNs from the credit process.</p>
<hr>
<h2 id="prediction-5-prediction-is-gdpr">Prediction #5: PREDICTION IS… GDPR</h2>
<p>Prediction: the European Union (EU) will become untenable. The goal of GDPR is to harmonize data so privacy watchdogs can interfere with businesses worldwide. A group known as the ‘Cutting Sword of Justice’ took credit for GDPR compliance, so companies outside the European Union (EU) will face fines of up to 20,000,000 EUR or up to 4% of their total security. They need to assess whether they will ignite discussions on a politicized role beyond our wildest dreams.</p>
<p>Legislation will mean artificial intelligence in the first regulation (GDPR) becomes enforced. This rule would disable biometrics or a company’s data via the “troll farm” behind Twitter. Ransomware will still be outnumbered by the regulation’s impact on their operations, and in turn, lead to an increase in automated toolsets to drive success. Data regulations in developing markets on the Dark Web offer a sophisticated nature of the user’s physical location, all contacts, or access to their data.</p>
<p>Again, don’t take GDPR seriously or experience it by using machine learning engines.</p>
<hr>
<h2 id="prediction-6-cyber-recycling">Prediction #6: CYBER RECYCLING</h2>
<img style="float: right; max-width:50%; padding: 5px" src="/blog/img/machine-learning-oprah.gif" alt="Oprah saying 'And you get a machine learning'">
AI is a tool that can and will be exploited much more than just a convenient way to learn about today’s weather or get the latest sports scores. AI is a tool that can show genuine concern for protecting the privacy debate. AI will also open the way to new vulnerabilities. AI will permit attacks to scale far beyond the techniques that are frequently used. Insurance companies will continue to target holes in machine learning, AI, analyzing our smart devices, and even multi-factor solutions.
<p>Machine learning may also be a powerful tool, and those wielding it will believe that it should not completely take over security mechanisms. It should be considered an additional security layer incorporated into an in-depth defense strategy, and not a silver bullet.</p>
<p>Most still have not seen widespread advertising to deceive machine learning. If a manipulated piece of data or wrong command is sent to an ERP system, machines will be liable to sabotage processes by carrying out cyberattacks against individuals as opposed to being bombarded with false positives. 30% to 40% of the war on cybersecurity experts is with machine learning, selling information, and detecting Internet infrastructures. Even though that analysis may include machine learning-based authentication, it brings with it significant growth in company indicators, driven by nationalistic tendencies.</p>
<p>Machine learning and managed security will move away from detect-and-respond alerts and data. We can spot patterns or those who have superficial attack components. Furthermore, advances with pattern recognition supporting the Internet infrastructure can also see more suppression systems surrounding software at little or no defense. It allows proactively managing the individual to become an essential part of SecOps, and direct sales persons need to be bombarded with false positives.</p>
<hr>
<h2 id="prediction-7-corporate-security-budgets-dont-produce-income">Prediction #7: CORPORATE SECURITY BUDGETS DON’T PRODUCE INCOME</h2>
<p>We will see increased adoption of cyber security frameworks. Cloud Access Service Brokers (CASB) and other cloud security frameworks have been acquiring certificates that make UEFI an attractive target for cyberattacks. A prime example is Windows 10.</p>
<img style="float: right; max-width:50%; padding: 5px" src="/blog/img/oh-no-hackers.gif" alt="Oh no, hackers in the mainframe">
Next-generation security incident response exercise projects will face lawsuits. Sadly, it’s simply to notify them, and report to the fire department. As such, it is not about embedding cybersecurity practices, and large companies are not secure by design. Users and enterprises are advised to routinely check for software.
<p>Managed security processes will deploy a defense-in-depth strategy. Unfortunately, GDPR will provide accurate detection and Response (MDR) services, including techniques such as advanced phishing and social media to help stories spread rapidly. Action: Try shopping at the reconnaissance phase before it’s too late.</p>
<p>Previous attacks are the gift that continues to become an entry point to the central networks. We predict that these networks (which base their success on quantified metrics like ‘daily active users’ and cyber behaviors at the human point) are growing. Numerous readily available fortresses are not sufficient. A hoard of locusts will control systems daily.</p>
<p>In 2018, we will be protected by HTTPS. Those not using HTTPS inspection/decryption are at risk. TLS 1.2 is widely available to anyone who feels the risk level oversight and the human-centric root of risk. Once red teams incorporate into an in-depth defense strategy, not a silver bullet, they should be disabled on all website traffic using HTTPS by default. We discovered that the customers’ red teams were conducting penetration testing, which has repercussions for the industry marketing hype.</p>
<hr>
<p><em>Thanks to Andrew Ruef</em></p>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2017 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2017-reading-list/</link>
            <pubDate>Thu, 07 Dec 2017 16:44:58 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2017-reading-list/</guid>
            <description>As I wrote about last year, my ongoing New Year’s resolution is to try to read one non-fiction and one fiction book per month. I was unable to fully accomplish this goal for 2017 (I tend to gravitate more towards fiction in tumultuous times), but I loved the books I did manage to read along the way.
In the vein of last year’s post, I sought out non-fiction and science / speculative fiction books by a diverse set of authors. While last year I aimed to discover as many new female authors as possible, this year I specifically strove to experience sci-fi from other cultural perspectives and underrepresented voices, as well as learn more about the history of non-Western civilizations.
My reading list is below, including links to each book’s Amazon page if you’d like to check them out. There were no “thumbs down” books I read this year, and I’m bad at book reviews anyway, so I’ll leave judgement up to y’all.
Non-Fiction 1491: New Revelations of the Americas Before Columbus by Charles C. Mann
The Plague of War: Athens, Sparta, and the Struggle for Ancient Greece by Jennifer T. Roberts
Predicting the Unpredictable: The Tumultuous Science of Earthquake Prediction by Susan Elizabeth Hough
The Secret History of the Mongol Queens: How the Daughters of Genghis Khan Rescued His Empire by Jack Weatherford
What Works: Gender Equality by Design by Iris Bohnet
Fiction Autonomous: A Novel by Annalee Newitz
Binti by Nnedi Okorafor
The Fifth Season by N.K. Jemisin
Infomocracy: A Novel by Malka Older
Kalpa Imperial: The Greatest Empire That Never Was by Angelica Gorodischer
The Mountains of Mourning by Lois McMaster Bujold
Ninefox Gambit by Yoon Ha Lee
The Queue by Basma Abdel Aziz
The Three-Body Problem by Cixin Liu
The Three Stigmata of Palmer Eldritch by Philip K. Dick
Warcross by Marie Lu
</description>
            <atom:content type="html"><![CDATA[<p>As I <a href="/blog/posts/2016-reading-list">wrote about last year</a>, my ongoing New Year’s resolution is to try to read one non-fiction and one fiction book per month. I was unable to fully accomplish this goal for 2017 (I tend to gravitate more towards fiction in tumultuous times), but I loved the books I did manage to read along the way.</p>
<p>In the vein of last year’s post, I sought out non-fiction and science / speculative fiction books by a diverse set of authors. While last year I aimed to discover as many new female authors as possible, this year I specifically strove to experience sci-fi from other cultural perspectives and underrepresented voices, as well as learn more about the history of non-Western civilizations.</p>
<p>My reading list is below, including links to each book’s Amazon page if you’d like to check them out. There were no “thumbs down” books I read this year, and I’m bad at book reviews anyway, so I’ll leave judgement up to y’all.</p>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://www.amazon.com/1491-Revelations-Americas-Before-Columbus/dp/1400032059">1491: New Revelations of the Americas Before Columbus</a> by Charles C. Mann</p>
<p><a href="https://www.amazon.com/1491-Revelations-Americas-Before-Columbus/dp/1400032059">The Plague of War: Athens, Sparta, and the Struggle for Ancient Greece</a> by Jennifer T. Roberts</p>
<p><a href="https://www.amazon.com/Predicting-Unpredictable-Tumultuous-Earthquake-Prediction/dp/0691138168">Predicting the Unpredictable: The Tumultuous Science of Earthquake Prediction</a> by Susan Elizabeth Hough</p>
<p><a href="https://www.amazon.com/Secret-History-Mongol-Queens-Daughters/dp/0307407160">The Secret History of the Mongol Queens: How the Daughters of Genghis Khan Rescued His Empire</a> by Jack Weatherford</p>
<p><a href="https://www.amazon.com/What-Works-Gender-Equality-Design/dp/0674089030">What Works: Gender Equality by Design</a> by Iris Bohnet</p>
<h2 id="fiction">Fiction</h2>
<p><a href="https://www.amazon.com/Autonomous-Novel-Annalee-Newitz/dp/0765392070/">Autonomous: A Novel</a> by Annalee Newitz</p>
<p><a href="https://www.amazon.com/Binti-Nnedi-Okorafor/dp/0765385252/">Binti</a> by Nnedi Okorafor</p>
<p><a href="https://www.amazon.com/Fifth-Season-Broken-Earth/dp/0316229296/">The Fifth Season</a> by N.K. Jemisin</p>
<p><a href="https://www.amazon.com/Fifth-Season-Broken-Earth/dp/0316229296/">Infomocracy: A Novel</a> by Malka Older</p>
<p><a href="https://www.amazon.com/Kalpa-Imperial-Greatest-Empire-Never/dp/1931520054/">Kalpa Imperial: The Greatest Empire That Never Was</a> by Angelica Gorodischer</p>
<p><a href="https://www.amazon.com/Mountains-Mourning-Vorkosigan-Saga-ebook/dp/B004O4C13W/">The Mountains of Mourning</a> by Lois McMaster Bujold</p>
<p><a href="https://www.amazon.com/Ninefox-Gambit-Machineries-Empire-Yoon/dp/1781084491">Ninefox Gambit</a> by Yoon Ha Lee</p>
<p><a href="https://www.amazon.com/Queue-Basma-Abdel-Aziz/dp/1612195164/">The Queue</a> by Basma Abdel Aziz</p>
<p><a href="https://www.amazon.com/Three-Body-Problem-Cixin-Liu/dp/0765382032/">The Three-Body Problem</a> by Cixin Liu</p>
<p><a href="https://www.amazon.com/Three-Body-Problem-Cixin-Liu/dp/0765382032/">The Three Stigmata of Palmer Eldritch</a> by Philip K. Dick</p>
<p><a href="https://www.amazon.com/Warcross-Marie-Lu/dp/0399547967/">Warcross</a> by Marie Lu</p>
]]></atom:content>
        </item>
        
        <item>
            <title>First Principles of Building Security Products</title>
            <link>https://kellyshortridge.com/blog/posts/first-principles-building-security-products/</link>
            <pubDate>Mon, 12 Jun 2017 16:32:41 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/first-principles-building-security-products/</guid>
            <description>Using Shamir’s 10 Commandments of Commercial Security to build better security products
A printout of Adi Shamir’s 10 Commandments of Commercial Security has been my #1 office essential since I first stumbled upon them, and I argue it should be yours, too. Shamir outlined these commandments in his talk at the Crypto ‘95 conference (yes, way back in 1995), and they not only spell out the first principles of enterprise security, but serve as a poignant reminder that although it feels as if the industry is evolving around us at a feverish pace, the fundamentals of building security products are true even two decades later.
Despite these tenets being evergreen, I rarely see them referenced — hence, I want to re-post and draw attention to them, and discuss why they’re still pertinent even today. While it’s absolutely beneficial reading for blue teams, the examination after the list is specifically about their importance from the perspective of people building commercial information security products.
10 Commandments of Commercial Security Directly quoted from Adi Shamir, Crypto ‘95
1. Don’t aim for perfect security So, be realistic, and do the best you can within your limits. Roughly, you should double security expenditure to halve risk.
2. Don’t solve the wrong problem For example, note that US banks lose 10 billion dollars a year in check fraud but only 5 million in online fraud. [naturally, these are 1995 figures and no longer accurate]
3. Don’t sell security bottom-up (in terms of the personnel hierarchy).
4. Don’t use cryptographic overkill Even bad crypto is usually the strong part of the system.
5. Don’t make it complicated This yields more places to attack the system, and it encourages users to find ways to bypass security.
6. Don’t make it expensive
7. Don’t use a single line of defense Have several layers so security can be maintained without expensive replacement of the primary line.
8. Don’t forget the “mystery attack” Be able to regenerate security even when you have no idea what’s going wrong. For example, smart cards are attackable but are great for quick cheap recovery.
9. Don’t trust systems
10. Don’t trust people
How can these commandments help us build better security products? Let’s review these commandments from the product perspective one by one.
1. Don’t aim for perfect security
No matter what sort of security product you’re building, you must be realistic that you will never block, detect, monitor, “thwart,” mitigate, or remediate all attacks. While for blue teams the rule of thumb is you should double your security budget to halve risk, for product teams, it should be that you should double your R&amp;D budget to halve your customer’s risk. Don’t invest too much time or money on addressing niche threats just because doing so sounds sexy — and don’t just consider what would meaningfully decrease your customer’s risk today, but also 1–2 years from today. Nor should you prioritize trendy attacks that will be passé to attackers next year.
2. Don’t solve the wrong problem
I believe this is the primary failing of most security product teams. The problem is never about stopping X attack. Let’s explore this with the “5 Whys” method (in which you may not need all 5 whys). Why does the customer need to stop X attack? X attack can cause a data breach, which blue teams want to prevent. Why? Data breaches lead to regulatory fines, reputational impact, etc. that blue teams want to prevent. Why? Damage to the organization makes the blue team look incompetent, which blue teams don’t want. Why? Blue teams looking incompetent can lead to them losing their jobs.
Ultimately, if your product is not helping blue teams look competent — meeting compliance, demonstrating mastery of your organization’s security posture to executives, being able to demonstrate why a breach was not the result of negligence — then you aren’t solving the right problem. This is why dashboards, reporting, visualizations, scoring, compliance modules, and all the other “boring” things matter.
3. Don’t sell security bottom-up
This is why buyer vs. user personas matter. This is also why including the aforementioned “boring” things like dashboards, reporting, and other high-level elements matter. You must be able to demonstrate value to CISOs, SOC directors, AppSec managers, and any other relevant team leads. This doesn’t mean your UX should be geared towards the buyer — your PoC/PoV will fall flat if so. It means you should consider the value to the buyer, and how you can articulate and demonstrate that in your product. Design UX for drill-downs, but market with dashboards.
4. Don’t use cryptographic overkill
The classic advice is “don’t roll your own crypto,” but what this really means is don’t rely on crypto to check the box for deeming your product “secure.” While Shamir may not have envisioned a future so dismal that AV products don’t even deliver updates over SSL, the point stands that with shameful frequency, security product vendors don’t rigorously audit their own product’s security.
5. Don’t make it complicated
One of my paramount frustrations with the infosec industry is the earnest shock and sneering contempt infosec professionals seem to have regarding the fact that users often bypass security protections. While a good handful of the booths at RSA this year finally demonstrated apt attention towards UX, too often UX is ignored in security product design — and specifically, UX to end users.
Yes, infosec is hard, but design is hard as well, and security product builders should respect it far more. Mediocre security that users love — or even simply tolerate in their workflows — will always be superior to fine security that users will spend vast amounts of effort looking to circumvent. And guess which will win in the market, too?
The other point here is that introducing complexity — more features, more parsers, more attempts at “all in one” security — degrades the security of the systems the product is meant to protect. Complexity means more limited ability for vendors to verify security, and even now, big vendors often get away with providing assurances of security standards without actually meeting them.
6. Don’t make it expensive
Security vendors violate this all the time, catering to only the largest of enterprises, while the medium enterprise market remains one of the juiciest yet most neglected. For example, Okta, who had an incredibly successful IPO and is adored by the public market, explicitly developed their product in a mid-size enterprise-friendly manner, but one from which large enterprises could still benefit. Their pricing is transparent with a menu of potential products, but as a reference, their bread and butter SSO offering would run an organization of 1,000 employees about $24,000 per year.
Compare this to a solution like the original FireEye box priced at $250,000 per year. In fact, FireEye has somewhat turned their previously lagging fortunes around in large part by releasing FireEye Helix, a far simpler — and far less expensive — platform available in an as-a-service subscription model. Selling to the large banks or generating massive services fees may allow for some initial success, but will set security products up for failure in the broader market.
7. Don’t use a single line of defense
For product builders, this commandment means you should assume that your product will not serve as your customer’s single line of defense. Do not try to make your product the solution to all problems. This doesn’t mean you can’t offer multiple products under one “platform,” because, frankly, most of the time what people mean by platform is just that all the vendor’s products have API rails to talk to each other, and that there may be a central console from which you can click buttons to access each of the products.
This does mean that each product should aim to tackle one particular challenge, and tackle it well. To quote the illustrious Ron Swanson, “don’t half-ass two things, whole ass one thing.”
8. Don’t forget the “mystery attack”
Consider how your product helps your customers deal with unknown unknowns. This can take many forms, such as presenting valuable context or creating resilient environments. As a simple example, offering detection only with signatures creates a binary of “bad” and “not bad,” while looking for behaviors can allow for risk scores whose traits can be used to help narrow down attacker activity when a breach is discovered. Or, by exposing context around events to security analysts, you can present relevant information to assist in threat hunting and incident response, rather than only showing the direct cause of an alert.
Further, with the rise of VMs and containers, it’s become easier to tear infrastructure up and down should compromise occur. For example, Slack rebuilt each component of their cloud infrastructure from scratch after a major breach in March 2015. Infosec vendors should consider leveraging these technologies and strategies in their own products and systems, too.
9. Don’t trust systems
Design your product in a way that assumes the customer’s estate is compromised and that your product can be compromised. The former should absolutely be taken into consideration for any machine learning or baselining approach, but it also means that compromised customers can lead to attackers learning how your product works — and thus developing countermeasures. The latter assumption is related to #4 and #5, in that you should consider how to minimize impact to your customer should an attacker exploit your product.
This naturally includes being careful about the third parties on which you rely, such as by auditing any proprietary or open source libraries you incorporate in your product — but also ensuring that your product minimizes privileges and permissions. This is, in part, why customers are wary of adopting host agents, as evidenced by the many highly embarrassing bugs Tavis Ormandy has found in endpoint protection products, bugs made all the more dangerous by those products running at a high level of privilege.
10. Don’t trust people
Don’t assume end users will operate like Homo securitas, perfectly adhering to proper security hygiene and willing to bear some inconvenience because they understand how important security is. For whatever reason, the industry doesn’t yet seem to accept that users will click things they shouldn’t, bypass things that get in the way of their work, and not understand things that you think are obvious. From the customer angle, although it’s more of a basic principle of UX, assume that your customers will also often use your product in a way that you didn’t intend — whether benign or malicious.
Consider a particularly iniquitous example: if you’ve built a user monitoring product for finding insider threats, bear in mind that it could be used as a tool for LOVEINT and harassment depending on how it’s designed, allowing someone to effectively spy on the activity of another employee.
Or, DLP solutions that give granular visibility into documents going in and out of the organization’s estate could expose potentially revealing titles — like M&amp;A-Agreement-With-Acquiror.pdf or Reputationally-Sensitive-Deal-Draft.docx — that could violate confidentiality agreements or cause other damage should an employee of either the customer or the vendor see — or worse, leak — this information.
Conclusion I hope I’ve adequately convinced you of these commandments’ relevancy today (and going forward), and that they can serve as a constructive framework for building security products. For product managers, challenge your roadmap against these commandments. For engineers, consider if your plan for how to architect product features and underlying systems adheres to these tenets. For marketers, leverage these principles in messaging value to the customer and demonstrating alignment with their priorities.
The reality of the infosec industry is that few products adhere to these commandments, which means this framework offers opportunities for differentiation. Keep it simple, stupid, and design your product to help customers return to these first principles, even if they don’t know yet that they need them — build the car, not faster horses.
Many thanks to Leigh Honeywell for reviewing.
</description>
            <atom:content type="html"><![CDATA[<p><em>Using Shamir’s 10 Commandments of Commercial Security to build better security products</em></p>
<p>A printout of Adi Shamir’s <a href="http://www.ieee-security.org/Cipher/ConfReports/conf-rep-Crypto95.html">10 Commandments of Commercial Security</a> has been my #1 office essential since I first stumbled upon them, and I argue it should be yours, too. Shamir outlined these commandments in his talk at the Crypto ‘95 conference (yes, way back in 1995), and they not only spell out the first principles of enterprise security, but serve as a poignant reminder that although it feels as if the industry is evolving around us at a feverish pace, the fundamentals of building security products are true even two decades later.</p>
<p>Despite these tenets being evergreen, I rarely see them referenced — hence, I want to re-post and draw attention to them, and discuss why they’re still pertinent even today. While it’s absolutely beneficial reading for blue teams, the examination after the list is specifically about their importance from the perspective of people building commercial information security products.</p>
<hr>
<h2 id="10-commandments-of-commercial-security">10 Commandments of Commercial Security</h2>
<p>Directly quoted from <a href="http://www.ieee-security.org/Cipher/ConfReports/conf-rep-Crypto95.html">Adi Shamir, Crypto ‘95</a></p>
<blockquote>
<p><strong>1. Don’t aim for perfect security</strong>
So, be realistic, and do the best you can within your limits. Roughly, you should double security expenditure to halve risk.</p>
</blockquote>
<blockquote>
<p><strong>2. Don’t solve the wrong problem</strong>
For example, note that US banks lose 10 billion dollars a year in check fraud but only 5 million in online fraud. [naturally, these are 1995 figures and no longer accurate]</p>
</blockquote>
<blockquote>
<p><strong>3. Don’t sell security bottom-up</strong>
(in terms of the personnel hierarchy).</p>
</blockquote>
<blockquote>
<p><strong>4. Don’t use cryptographic overkill</strong>
Even bad crypto is usually the strong part of the system.</p>
</blockquote>
<blockquote>
<p><strong>5. Don’t make it complicated</strong>
This yields more places to attack the system, and it encourages users to find ways to bypass security.</p>
</blockquote>
<blockquote>
<p><strong>6. Don’t make it expensive</strong></p>
</blockquote>
<blockquote>
<p><strong>7. Don’t use a single line of defense</strong>
Have several layers so security can be maintained without expensive replacement of the primary line.</p>
</blockquote>
<blockquote>
<p><strong>8. Don’t forget the “mystery attack”</strong>
Be able to regenerate security even when you have no idea what’s going wrong. For example, smart cards are attackable but are great for quick cheap recovery.</p>
</blockquote>
<blockquote>
<p><strong>9. Don’t trust systems</strong></p>
</blockquote>
<blockquote>
<p><strong>10. Don’t trust people</strong></p>
</blockquote>
<hr>
<h2 id="how-can-these-commandments-help-us-build-better-security-products">How can these commandments help us build better security products?</h2>
<p>Let’s review these commandments from the product perspective one by one.</p>
<p><strong>1. Don’t aim for perfect security</strong></p>
<p>No matter what sort of security product you’re building, you must be realistic that you will never block, detect, monitor, “thwart,” mitigate, or remediate all attacks. While for blue teams the rule of thumb is you should double your security budget to halve risk, for product teams, it should be that you should double your R&amp;D budget to halve your customer’s risk. Don’t invest too much time or money on addressing niche threats just because doing so sounds sexy — and don’t just consider what would meaningfully decrease your customer’s risk today, but also 1–2 years from today. Nor should you prioritize trendy attacks that will be passé to attackers next year.</p>
<p><strong>2. Don’t solve the wrong problem</strong></p>
<p>I believe this is the primary failing of most security product teams. The problem is never about stopping X attack. Let’s explore this with the “5 Whys” method (in which you may not need all 5 whys). Why does the customer need to stop X attack? X attack can cause a data breach, which blue teams want to prevent. Why? Data breaches lead to regulatory fines, reputational impact, etc. that blue teams want to prevent. Why? Damage to the organization makes the blue team look incompetent, which blue teams don’t want. Why? Blue teams looking incompetent can lead to them losing their jobs.</p>
<p>Ultimately, if your product is not helping blue teams look competent — meeting compliance, demonstrating mastery of your organization’s security posture to executives, being able to demonstrate why a breach was not the result of negligence — then you aren’t solving the right problem. This is why dashboards, reporting, visualizations, scoring, compliance modules, and all the other “boring” things matter.</p>
<p><strong>3. Don’t sell security bottom-up</strong></p>
<p>This is why buyer vs. user personas matter. This is also why including the aforementioned “boring” things like dashboards, reporting, and other high-level elements matter. You must be able to demonstrate value to CISOs, SOC directors, AppSec managers, and any other relevant team leads. This doesn’t mean your UX should be geared towards the buyer — your PoC/PoV will fall flat if so. It means you should consider the value to the buyer, and how you can articulate and demonstrate that in your product. Design UX for drill-downs, but market with dashboards.</p>
<p><strong>4. Don’t use cryptographic overkill</strong></p>
<p>The classic advice is “don’t roll your own crypto,” but what this really means is don’t rely on crypto to check the box for deeming your product “secure.” While Shamir may not have envisioned a future so dismal that <a href="https://www.av-test.org/en/news/news-single-view/32-products-put-to-the-test-how-good-is-antivirus-software-at-protecting-itself/">AV products don’t even deliver updates over SSL</a>, the point stands that with shameful frequency, security product vendors don’t rigorously audit their own product’s security.</p>
<p><strong>5. Don’t make it complicated</strong></p>
<p>One of my paramount frustrations with the infosec industry is the earnest shock and sneering contempt infosec professionals seem to have regarding the fact that users often bypass security protections. While a good handful of the booths at RSA this year finally demonstrated apt attention towards UX, too often UX is ignored in security product design — and specifically, UX to end users.</p>
<p>Yes, infosec is hard, but design is hard as well, and security product builders should respect it far more. Mediocre security that users love — or even simply tolerate in their workflows — will always be superior to fine security that users will spend vast amounts of effort looking to circumvent. And guess which will win in the market, too?</p>
<p>The other point here is that introducing complexity — more features, more parsers, more attempts at “all in one” security — degrades the security of the systems the product is meant to protect. Complexity means more limited ability for vendors to verify security, and even now, big vendors often get away with providing assurances of security standards without actually meeting them.</p>
<p><strong>6. Don’t make it expensive</strong></p>
<p>Security vendors violate this all the time, catering to only the largest of enterprises, while the medium enterprise market remains one of the juiciest yet most neglected. For example, Okta, who had an incredibly successful IPO and is adored by the public market, explicitly developed their product in a mid-size enterprise-friendly manner, but one from which large enterprises could still benefit. Their pricing is transparent with a menu of potential products, but as a reference, their bread and butter SSO offering would run an organization of 1,000 employees about $24,000 per year.</p>
<p>Compare this to a solution like the original FireEye box priced at $250,000 per year. In fact, FireEye has somewhat turned their previously lagging fortunes around in large part by releasing FireEye Helix, a far simpler — and far less expensive — platform available in an as-a-service subscription model. Selling to the large banks or generating massive services fees may allow for some initial success, but will set security products up for failure in the broader market.</p>
<p><strong>7. Don’t use a single line of defense</strong></p>
<p>For product builders, this commandment means you should assume that your product will not serve as your customer’s single line of defense. Do not try to make your product the solution to all problems. This doesn’t mean you can’t offer multiple products under one “platform,” because, frankly, most of the time what people mean by platform is just that all the vendor’s products have API rails to talk to each other, and that there may be a central console from which you can click buttons to access each of the products.</p>
<p>This does mean that each product should aim to tackle one particular challenge, and tackle it well. To quote the illustrious Ron Swanson, <a href="https://www.youtube.com/watch?v=zl-HalherjQ">“don’t half-ass two things, whole ass one thing.”</a></p>
<p><strong>8. Don’t forget the “mystery attack”</strong></p>
<p>Consider how your product helps your customers deal with unknown unknowns. This can take many forms, such as presenting valuable context or creating resilient environments. As a simple example, offering detection only with signatures creates a binary of “bad” and “not bad,” while looking for behaviors can allow for risk scores whose traits can be used to help narrow down attacker activity when a breach is discovered. Or, by exposing context around events to security analysts, you can present relevant information to assist in threat hunting and incident response, rather than only showing the direct cause of an alert.</p>
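<p>To make the signature-versus-behavior contrast concrete, here is a minimal sketch (in Python) of what a behavior-based risk score could look like next to a binary signature verdict. Every name, weight, and value below is a made-up placeholder for illustration, not a recommendation:</p>
<pre><code># Illustrative only: a signature match is binary, while a behavior-based
# score accumulates evidence an analyst can pivot on later.

SIGNATURES = {"evil_dropper.exe", "known_bad_v2.dll"}  # hypothetical entries

# Hypothetical behaviors with guesstimated weights; tune to your own telemetry.
BEHAVIOR_WEIGHTS = {
    "office_app_spawned_shell": 0.4,
    "lsass_memory_read": 0.5,
    "outbound_to_rare_domain": 0.3,
    "new_scheduled_task": 0.2,
}

def signature_verdict(file_name):
    # Binary "bad" / "not bad" -- nothing left over for the analyst to pivot on.
    return file_name in SIGNATURES

def behavior_risk(observed_behaviors):
    # Sum the weights of observed behaviors (capped at 1.0) and keep the
    # contributing traits so responders can drill into them during an incident.
    score = min(1.0, sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in observed_behaviors))
    traits = [b for b in observed_behaviors if b in BEHAVIOR_WEIGHTS]
    return score, traits

print(signature_verdict("innocuous.exe"))  # False, and that is all you learn
score, traits = behavior_risk(["office_app_spawned_shell", "outbound_to_rare_domain"])
print(round(score, 2), traits)  # 0.7, plus the traits that produced it
</code></pre>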
<p>Further, with the rise of VMs and containers, it’s become easier to tear infrastructure up and down should compromise occur. For example, Slack rebuilt each component of their cloud infrastructure from scratch after a major breach in March 2015. Infosec vendors should consider leveraging these technologies and strategies in their own products and systems, too.</p>
<p><strong>9. Don&rsquo;t trust systems</strong></p>
<p>Design your product in a way that assumes the customer’s estate is compromised and that your product can be compromised. The former should absolutely be taken into consideration for any machine learning or baselining approach, but it also means that compromised customers can lead to attackers learning how your product works — and thus developing countermeasures. The latter assumption is related to #4 and #5, in that you should consider how to minimize impact to your customer should an attacker exploit your product.</p>
<p>This naturally includes being careful about the third parties on which you rely, such as by auditing any proprietary or open source libraries you incorporate in your product — but also ensuring that your product minimizes privileges and permissions. This is, in part, why customers are wary of adopting host agents, as evidenced by the <a href="https://www.wired.com/2016/06/symantecs-woes-expose-antivirus-software-security-gaps/">many highly embarrassing bugs</a> Tavis Ormandy has found in endpoint protection products, bugs made all the more dangerous by those products running at a high level of privilege.</p>
<p><strong>10. Don&rsquo;t trust people</strong></p>
<p>Don’t assume end users will operate like <em>Homo securitas</em>, perfectly adhering to proper security hygiene and willing to bear some inconvenience because they understand how important security is. For whatever reason, the industry doesn’t yet seem to accept that users will click things they shouldn’t, bypass things that get in the way of their work, and not understand things that you think are obvious. From the customer angle, although it’s more of a basic principle of UX, assume that your customers will also often use your product in a way that you didn’t intend — whether benign or malicious.</p>
<p>Consider a particularly iniquitous example: if you’ve built a user monitoring product for finding insider threats, bear in mind that it could be used as a tool for <a href="https://en.m.wikipedia.org/wiki/LOVEINT">LOVEINT</a> and harassment depending on how it’s designed, allowing someone to effectively spy on the activity of another employee.</p>
<p>Or, DLP solutions that give granular visibility into documents going in and out of the organization’s estate could expose potentially revealing titles — like M&amp;A-Agreement-With-Acquiror.pdf or Reputationally-Sensitive-Deal-Draft.docx — that could violate confidentiality agreements or cause other damage should an employee of either the customer or the vendor see — or worse, leak — this information.</p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>I hope I’ve adequately convinced you of these commandments’ relevancy today (and going forward), and that they can serve as a constructive framework for building security products. For product managers, challenge your roadmap against these commandments. For engineers, consider if your plan for how to architect product features and underlying systems adheres to these tenets. For marketers, leverage these principles in messaging value to the customer and demonstrating alignment with their priorities.</p>
<p>The reality of the infosec industry is that few products adhere to these commandments, which means this framework offers opportunities for differentiation. <a href="https://en.wikipedia.org/wiki/KISS_principle">Keep it simple, stupid</a>, and design your product to help customers return to these first principles, even if they don’t know yet that they need them — build the car, not faster horses.</p>
<hr>
<p>Many thanks to <a href="https://twitter.com/snare?lang=en">Leigh Honeywell</a> for reviewing.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Choice Architecture for InfoSec Blue Teams</title>
            <link>https://kellyshortridge.com/blog/posts/choice-architecture-infosec-blue-teams/</link>
            <pubDate>Wed, 08 Feb 2017 16:17:34 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/choice-architecture-infosec-blue-teams/</guid>
            <description>
I recently spoke at Art into Science: A Conference for Defense, which was an intellectually-stimulating (and delightfully quirky) conference focused on moving towards a professional discipline for defensive infosec. Sadly, I had to rush through the last part of my presentation, so I wanted to do it justice by fleshing out my thoughts here. I’m going to skip through the first two sections of my talk— an introduction to cognitive biases and how they manifest in infosec, then challenges that arise due to the group nature of blue teams — but feel free to check out the slides here.
The last part of the presentation focused on the “what to do about it” — I developed a choice architecture for blue teams in information security. Choice architecture is a term coined by Richard Thaler and Cass Sunstein in their famous book “Nudge,” and means the design of how choices can be presented to people.
The implication is that it can be designed in a way that impacts decision making, and more specifically in a way that minimizes errors due to cognitive biases. It’s basically a “how do we fix it?” response to the flaws in thinking that behavioral economics exposes (see my post on Prospect Theory &amp; Information Security for a primer on some of these flaws as they appear in infosec).
I’ll walk you through my proposed choice architecture for how blue teams can develop a decision-making process that is resilient to cognitive biases, or you can just skip to the conclusion for the 6-step guide.
Belief Prompting Decision Trees Social Tactics Conclusion Belief Prompting Get your thinky thinky face on
People have beliefs about their opponents in any confrontation. People also tend to believe their opponents are less rational than they actually are — and often make imprudent decisions as if their opponent is randomly choosing their decisions. A counter to this blunder is asking players for their explicit beliefs about what their opponents will do, known as belief prompting. Think of it as increasing one’s thinking by an additional step — how will your opponent respond to your move?
What beliefs about adversaries need to be evaluated in information security? I believe the answer is capital, time, equipment and risk aversion. You can also use the kill chain as a guide for the timeline of moves you need to consider. It’s critical to keep in mind that attackers aren’t profligate; as Dino Dai Zovi said, “attackers will take the least cost path through an attack graph from their start node to their goal node.”
Thus, theorizing probabilities of each type of move is necessary in order to consider weighted risk — for example, your adversary using iOS 0day on one of your employees may have a 1% chance (probably even less) of successfully occurring, so it most likely shouldn’t be the top influence in your decision making.
Some example questions to ask yourself, or when discussing with team members, are:
Which of our assets will attackers want? How does our adversary choose and craft their delivery method? What countermeasures does our adversary anticipate? How would an attacker bypass our [insert security product / solution / strategy here]? How would an attacker respond to our [insert security product / solution / strategy here]? What are the cost / resources required for an attacker to make [insert type of offensive move]? What is the probability that an attacker will conduct [insert type of offensive move here]? As an example of where these questions might lead you, here’s a belief prompting process for exfiltration, with defensive moves in blue and offensive moves in orange:
Decision Trees I’m basically Jared for the infosec industry
Creating a decision tree allows for a feedback loop that is invaluable in aiding the decision-making process, particularly in prioritizing strategies. I believe a decision tree model is the most efficient means of solving a few challenges in decision-making, as it:
Forces you to belief-prompt and increase your thinking by many additional steps Provides an auditable risk model so you can identify where your assumptions broke down (and thus mitigate the “doubling down” effect and self-justification) in the event of a breach — an attempt to take politics out of security strategy (e.g. favoring a product because you implemented it) Allows for easy refinement as data is generated from incidents Lets you see commonalities between attack trees where certain solutions might counter multiple moves Helps you visualize the hardest path for attackers so you can tune your strategy to force them down that path In creating the decision tree, you need to map out how attackers will respond to each of your countermeasures and to assign probabilities to the likelihood that they will pursue a certain option, as well as what options defenders have and the probability that these countermeasures will successfully prevent the offensive move.
The decision tree below from my presentation is for illustrative purposes— it covers a criminal group gaining access to a company’s server. It’s based on a Defender-Attacker-Defender model (see my upcoming talk at Troopers to go deeper!), with potential countermeasures by defense in blue and potential moves by attackers in gold.
The left branches in each set of moves represent the lower-cost moves, while the right branches represent the more expensive moves (cost here meaning monetary, human and time capital). For example, “#YOLO” means doing absolutely nothing, while “Privilege Separation” requires doing more than nothing, thus making it the more expensive option.
For those who need a walkthrough — if you start with the “Criminal Group” adversary that has just arrived on one of your servers, you can consider a preemptive defensive move being either: implementing privilege separation, which has a guesstimated 60% chance of deterring or thwarting the attacker; or “implementing” the #YOLO strategy of doing nothing, which has a 0% chance of deterring the attacker.
If you decide to implement Privilege Separation, the attacker’s response will either be the lower cost option — to scan for reachable data from their lower-privilege vantage with a guesstimated 50% likelihood of leading the attacker to a valuable box — or they could pursue the more resource-intensive option of throwing a known exploit, which has a guesstimated 50% chance of successfully working.
Following the “hard path,” whereby the attacker chooses to use the known exploit, the defender then contemplates what countermeasures they can put in place to challenge the attacker further. For example, they could implement seccomp, which filters system calls and thwarts known exploits a guesstimated 50% of the time, or they could implement GRSec, which bears the higher cost of being barely usable but blocks basically all known exploits.
If you go down the GRSec path, the attacker’s only option becomes “elite” 0day (meaning 0day that requires a certain level of finesse and reliability), which takes significantly more time to craft — and the probability of it being deployed successfully is very low.
I know how y’all can be, so let me emphasize: the goal is not to quibble over exact probabilities until you can prove who is the better pedant. It’s meant to be a framework to aid in decision-making by visualizing your belief-prompting and is a starting point from which you can tweak your assumptions as you ingest real-world data.
Social Tactics omg it’s kittens having a meeting
I won’t cover the social dynamics of teams here (check out the presentation to read about it if interested), but belief-prompting via probability-labeled decision trees ameliorates team-based biases as well. However, blue team leaders need to consider a few additional tactics to round out their decision-making model — most of which are in the vein of framing, i.e. how choices are presented (see framing effects for more).
First and foremost, leaders shouldn’t state their own views before soliciting feedback, lest it anchor the rest of the team’s opinions. It could lead to the team then guessing, “Ok, what do I think the boss believes?” but hopefully there’s a sufficient culture of respect that there isn’t that sort of paranoia. In that vein, using a decision tree can also serve as a starting point to solicit dissenting feedback — it can be easier to disagree with a probability or label on a tree rather than words coming out of someone’s mouth.
A key challenge for blue teams is how to deter short-termism and overly risky decisions. Because the costs and benefits of pursuing certain strategies or implementing solutions are nebulous, blue team members are often uncertain about how their performance will be evaluated. Soliciting longer-term views on these decisions, clearly articulating what constitutes success and failure for them, and agreeing on these figures with whoever is the “doer” for the project would alleviate much of this uncertainty.
The goal is to not pressure team members into agreeing to something just to show off their skill level, or because they fear they will look incompetent if they refuse, but to foster a sense of buy-in to the plan and maintain explicit expectations.
Finally, blue teams can leverage the decision trees when weighing different types of strategies or solution options, by estimating how much of a difference it would make in decreasing the probability of an attack succeeding vs. its cost — including monetary cost as well as personnel cost (having to hire a new person just to manage a product would require a hefty reduction in risk to justify).
Conclusion Ultimately, the ideal bias-resilient decision making process for blue teams in information security looks something like:
State beliefs about your adversaries Model decision trees Create a spectrum of success / failure for each decision Develop a probability / payoff matrix for different decision options (leveraging the decision tree) Prioritize rationality and overall benefit over risk-taking Revisit and refine your decision trees after each incident I don’t claim to have all the answers, and welcome any and all feedback on how to refine this decision-making model. In the spirit of the Art into Science con, we need to collaborate in order to move the ball forward meaningfully in the philosophy of defense and ultimately give defenders a more auspicious foundation upon which to architect their security strategy.
Many thanks to snare for reviewing.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/frederic-audet-161331.jpg" alt="Image of Darth Jar Jar"></p>
<p>I recently spoke at Art into Science: A Conference for Defense, which was an intellectually-stimulating (and delightfully quirky) conference focused on moving towards a professional discipline for defensive infosec. Sadly, I had to rush through the last part of my presentation, so I wanted to do it justice by fleshing out my thoughts here. I’m going to skip through the first two sections of my talk— an introduction to cognitive biases and how they manifest in infosec, then challenges that arise due to the group nature of blue teams — but feel free to <a href="/speaking/Know-Thyself-Kelly-Shortridge-ACoD-2017.pdf">check out the slides here</a>.</p>
<p>The last part of the presentation focused on the “what to do about it” — I developed a choice architecture for blue teams in information security. Choice architecture is a term coined by Richard Thaler and Cass Sunstein in their famous book “Nudge,” and means the design of how choices can be presented to people.</p>
<p>The implication is that it can be designed in a way that impacts decision making, and more specifically in a way that minimizes errors due to cognitive biases. It’s basically a “how do we fix it?” response to the flaws in thinking that behavioral economics exposes (see <a href="/blog/posts/behavioral-models-infosec-prospect-theory/">my post on Prospect Theory &amp; Information Security</a> for a primer on some of these flaws as they appear in infosec).</p>
<p>I’ll walk you through my proposed choice architecture for how blue teams can develop a decision-making process that is resilient to cognitive biases, or you can just skip to the conclusion for the 6-step guide.</p>
<ol>
<li><a href="#belief-prompting">Belief Prompting</a></li>
<li><a href="#decision-trees">Decision Trees</a></li>
<li><a href="#social-tactics">Social Tactics</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<hr>
<h2 id="a-namebelief-promptingabelief-prompting"><a name="belief-prompting"></a>Belief Prompting</h2>
<p><img src="/blog/img/thinking-clueless.gif" alt="Gif of Cher from Clueless thinking"><em>Get your thinky thinky face on</em></p>
<p>People have beliefs about their opponents in any confrontation. People also tend to believe their opponents are less rational than they actually are — and often make imprudent decisions as if their opponent is randomly choosing their decisions. A counter to this blunder is asking players for their explicit beliefs about what their opponents will do, known as <a href="https://books.google.com/books?id=bMWHDAAAQBAJ&amp;pg=PA132&amp;lpg=PA132&amp;dq=%22belief-prompting%22&amp;source=bl&amp;ots=QsfBOYXBlM&amp;sig=EDB2DND3JdfbnXJG7CJycLQOMjk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwjlupiu7fTRAhXq5YMKHfzxClgQ6AEIKjAD"><strong>belief prompting</strong></a>. Think of it as increasing one’s thinking by an additional step — how will your opponent respond to your move?</p>
<p>What beliefs about adversaries need to be evaluated in information security? I believe the answer is capital, time, equipment and risk aversion. You can also use the kill chain as a guide for the timeline of moves you need to consider. It’s critical to keep in mind that attackers aren’t profligate; as Dino Dai Zovi said, “attackers will take the least cost path through an attack graph from their start node to their goal node.”</p>
<p>Thus, theorizing probabilities of each type of move is necessary in order to consider weighted risk — for example, your adversary using iOS 0day on one of your employees may have a 1% chance (probably even less) of successfully occurring, so it most likely shouldn’t be the top influence in your decision making.</p>
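<p>To make “weighted risk” concrete, here is a tiny, purely illustrative sketch (in Python): the probability of a move times its impact, so a flashy-but-rare move like iOS 0day stops dominating the conversation. All figures are placeholders to replace with your own estimates:</p>
<pre><code># Toy weighted-risk comparison; every number below is a guesstimate.
moves = [
    # (attacker move, guesstimated annual probability, guesstimated impact in dollars)
    ("ios_0day_against_employee", 0.01, 5_000_000),
    ("phishing_credential_theft", 0.60, 400_000),
    ("commodity_ransomware", 0.30, 900_000),
]

# Rank moves by probability * impact, highest weighted risk first.
for name, probability, impact in sorted(moves, key=lambda m: m[1] * m[2], reverse=True):
    print(name, "weighted risk ~", round(probability * impact))
# commodity_ransomware weighted risk ~ 270000
# phishing_credential_theft weighted risk ~ 240000
# ios_0day_against_employee weighted risk ~ 50000
</code></pre>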
<p>Some example questions to ask yourself, or when discussing with team members, are:</p>
<ul>
<li>Which of our assets will attackers want?</li>
<li>How does our adversary choose and craft their delivery method?</li>
<li>What countermeasures does our adversary anticipate?</li>
<li>How would an attacker bypass our [insert security product / solution / strategy here]?</li>
<li>How would an attacker respond to our [insert security product / solution / strategy here]?</li>
<li>What are the cost / resources required for an attacker to make [insert type of offensive move]?</li>
<li>What is the probability that an attacker will conduct [insert type of offensive move here]?</li>
</ul>
<p>As an example of where these questions might lead you, here’s a belief prompting process for exfiltration, with defensive moves in blue and offensive moves in orange:</p>
<p><img src="/blog/img/belief-prompting-example.png" alt="An example of belief prompting for information security, by Kelly Shortridge"></p>
<hr>
<h2 id="a-namedecision-treesadecision-trees"><a name="decision-trees"></a>Decision Trees</h2>
<p><img src="/blog/img/jared-swot.gif" alt="Jared from the show Silicon Valley presenting a SWOT analysis"><em>I’m basically Jared for the infosec industry</em></p>
<p>Creating a decision tree allows for a feedback loop that is invaluable in aiding the decision-making process, particularly in prioritizing strategies. I believe a decision tree model is the most efficient means of solving a few challenges in decision-making, as it:</p>
<ol>
<li>Forces you to belief-prompt and increase your thinking by many additional steps</li>
<li>Provides an auditable risk model so you can identify where your assumptions broke down (and thus mitigate the “doubling down” effect and self-justification) in the event of a breach — an attempt to take politics out of security strategy (e.g. favoring a product because you implemented it)</li>
<li>Allows for easy refinement as data is generated from incidents</li>
<li>Lets you see commonalities between attack trees where certain solutions might counter multiple moves</li>
<li>Helps you visualize the hardest path for attackers so you can tune your strategy to force them down that path</li>
</ol>
<p>In creating the decision tree, you need to map out how attackers will respond to each of your countermeasures and to assign probabilities to the likelihood that they will pursue a certain option, as well as what options defenders have and the probability that these countermeasures will successfully prevent the offensive move.</p>
<p>The decision tree below from my presentation is for illustrative purposes— it covers a criminal group gaining access to a company’s server. It’s based on a Defender-Attacker-Defender model (see <a href="/speaking/Volatile-Memory-Kelly-Shortridge-Troopers-2017.pdf">my upcoming talk at Troopers</a> to go deeper!), with potential countermeasures by defense in blue and potential moves by attackers in gold.</p>
<p>The left branches in each set of moves represent the lower-cost moves, while the right branches represent the more expensive moves (cost here meaning monetary, human and time capital). For example, “#YOLO” means doing absolutely nothing, while “Privilege Separation” requires doing more than nothing, thus making it the more expensive option.</p>
<p><img src="/blog/img/decision-tree-example.png" alt="An example of a decision tree for information security, by Kelly Shortridge"></p>
<p>For those who need a walkthrough — if you start with the “Criminal Group” adversary that has just arrived on one of your servers, you can consider a preemptive defensive move being either: implementing privilege separation, which has a guesstimated 60% chance of deterring or thwarting the attacker; or “implementing” the #YOLO strategy of doing nothing, which has a 0% chance of deterring the attacker.</p>
<p>If you decide to implement Privilege Separation, the attacker’s response will either be the lower cost option — to scan for reachable data from their lower-privilege vantage with a guesstimated 50% likelihood of leading the attacker to a valuable box — or they could pursue the more resource-intensive option of throwing a known exploit, which has a guesstimated 50% chance of successfully working.</p>
<p>Following the “hard path,” whereby the attacker chooses to use the known exploit, the defender then contemplates what countermeasures they can put in place to challenge the attacker further. For example, they could implement seccomp, which filters system calls and thwarts known exploits a guesstimated 50% of the time, or they could implement GRSec, which bears the higher cost of being barely usable but blocks basically all known exploits.</p>
<p>If you go down the GRSec path, the attacker’s only option becomes “elite” 0day (meaning 0day that requires a certain level of finesse and reliability), which takes significantly more time to craft — and the probability of it being deployed successfully is very low.</p>
<p>I know how y’all can be, so let me emphasize: <strong>the goal is not to quibble over exact probabilities until you can prove who is the better pedant</strong>. It’s meant to be a framework to aid in decision-making by visualizing your belief-prompting and is a starting point from which you can tweak your assumptions as you ingest real-world data.</p>
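<p>If you would rather tweak the assumptions than argue about them, it can help to encode the tree in a few lines of code. Here is a minimal sketch (in Python) of the illustrative tree above, using the same guesstimated probabilities; the GRSec figure is my own stand-in for “blocks basically all known exploits,” and every name and number is meant to be revised as real-world data comes in:</p>
<pre><code># Illustrative sketch of the decision tree above; every probability is a guesstimate.
TREE = {
    "privilege_separation": {
        "p_stop": 0.60,  # guesstimated chance this alone deters or thwarts the attacker
        "attacker_options": {
            "scan_for_reachable_data": {"p_success": 0.50},
            "throw_known_exploit": {
                "p_success": 0.50,
                "defender_options": {
                    "seccomp": {"p_stop": 0.50},
                    "grsec": {"p_stop": 0.95},  # stand-in for "blocks basically all known exploits"
                },
            },
        },
    },
    "yolo": {"p_stop": 0.0, "attacker_options": {}},  # do nothing
}

def p_attacker_succeeds(defense):
    # Rough chance the attacker gets through this defensive choice, assuming they
    # pick their most promising follow-up move (least-cost-path thinking).
    p_through = 1.0 - defense.get("p_stop", 0.0)
    options = defense.get("attacker_options", {})
    if not options:
        return p_through
    best = 0.0
    for move in options.values():
        p = move.get("p_success", 1.0)
        counters = move.get("defender_options")
        if counters:
            # Assume the defender then deploys whichever countermeasure stops the most attacks.
            p = p * min(1.0 - c["p_stop"] for c in counters.values())
        best = max(best, p)
    return p_through * best

for name, node in TREE.items():
    print(name, round(p_attacker_succeeds(node), 2))
# privilege_separation 0.2
# yolo 1.0
</code></pre>
<p>Even a toy model like this makes the feedback loop tangible: change one probability after an incident and watch how the “hard path” shifts.</p>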
<hr>
<h2 id="social-tactics">Social Tactics</h2>
<p><img src="/blog/img/kittens-meeting.gif" alt="A gif of kittens having a &amp;ldquo;meeting&amp;rdquo;"><em>omg it&rsquo;s kittens having a meeting</em></p>
<p>I won’t cover the social dynamics of teams here (<a href="/speaking/Know-Thyself-Kelly-Shortridge-ACoD-2017.pdf">check out the presentation</a> to read about it if interested), but belief-prompting via probability-labeled decision trees ameliorates team-based biases as well. However, blue team leaders need to consider a few additional tactics to round out their decision-making model — most of which are in the vein of framing, i.e. how choices are presented (see <a href="https://en.wikipedia.org/wiki/Framing_effect_%28psychology%29">framing effects</a> for more).</p>
<p>First and foremost, leaders shouldn’t state their own views before soliciting feedback, lest it anchor the rest of the team’s opinions. It could lead to the team then guessing, “Ok, what do I think the boss believes?” but hopefully there’s a sufficient culture of respect that there isn’t that sort of paranoia. In that vein, using a decision tree can also serve as a starting point to solicit dissenting feedback — it can be easier to disagree with a probability or label on a tree rather than words coming out of someone’s mouth.</p>
<p>A key challenge for blue teams is how to deter short-termism and overly risky decisions. Because the costs and benefits of pursuing certain strategies or implementing solutions are nebulous, blue team members are often uncertain about how their performance will be evaluated. Soliciting longer-term views on these decisions, clearly articulating what constitutes success and failure for them, and agreeing on these figures with whoever is the “doer” for the project would alleviate much of this uncertainty.</p>
<p>The goal is to not pressure team members into agreeing to something just to show off their skill level, or because they fear they will look incompetent if they refuse, but to foster a sense of buy-in to the plan and maintain explicit expectations.</p>
<p>Finally, blue teams can leverage the decision trees when weighing different types of strategies or solution options, by estimating how much of a difference it would make in decreasing the probability of an attack succeeding vs. its cost — including monetary cost as well as personnel cost (having to hire a new person just to manage a product would require a hefty reduction in risk to justify).</p>
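<p>As a final, purely hypothetical sketch (in Python), the same exercise makes it easy to compare options by risk reduction per unit of cost, which tends to be more persuasive in a budget conversation than probabilities alone. The baseline, loss figure, options, and costs below are all invented for illustration:</p>
<pre><code># Hypothetical comparison: which option buys the most risk reduction per dollar spent?
# "Cost" should include licenses, people, and the time it takes to run the thing.

BASELINE_P_BREACH = 0.30   # guesstimated annual probability of a damaging breach today
EXPECTED_LOSS = 2_000_000  # guesstimated cost of that breach in dollars

options = [
    # (option, guesstimated new probability of breach if adopted, total annual cost)
    ("new_edr_product", 0.22, 150_000),
    ("privilege_separation_project", 0.20, 60_000),
    ("hire_dedicated_analyst", 0.25, 120_000),
]

for name, p_new, cost in options:
    risk_reduced = (BASELINE_P_BREACH - p_new) * EXPECTED_LOSS  # expected dollars saved per year
    print(name, "saves roughly", round(risk_reduced), "for", cost, "spent; ratio", round(risk_reduced / cost, 2))
# privilege_separation_project comes out ahead despite being the cheapest option
</code></pre>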
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>Ultimately, the ideal bias-resilient decision making process for blue teams in information security looks something like:</p>
<ol>
<li>State beliefs about your adversaries</li>
<li>Model decision trees</li>
<li>Create a spectrum of success / failure for each decision</li>
<li>Develop a probability / payoff matrix for different decision options (leveraging the decision tree)</li>
<li>Prioritize rationality and overall benefit over risk-taking</li>
<li>Revisit and refine your decision trees after each incident</li>
</ol>
<p>I don’t claim to have all the answers, and welcome any and all feedback on how to refine this decision-making model. In the spirit of the Art into Science con, we need to collaborate in order to move the ball forward meaningfully in the philosophy of defense and ultimately give defenders a more auspicious foundation upon which to architect their security strategy.</p>
<hr>
<p>Many thanks to <a href="https://twitter.com/snare?lang=en">snare</a> for reviewing.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Russia used the U.S. Influence Ops Playbook</title>
            <link>https://kellyshortridge.com/blog/posts/russia-used-us-influence-ops-playbook/</link>
            <pubDate>Wed, 18 Jan 2017 16:05:49 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/russia-used-us-influence-ops-playbook/</guid>
            <description>
Roger Trinquier, the French counterinsurgency theorist, said, “The sine qua non of victory in [insurgent/counterinsurgent] warfare is the unconditional support of the people.” In Influence Operations, success is ultimately about the ability to overcome one’s status as an outsider.
The past few months have seen cyberwar intersect with Influence Operations, in Russia using chicanery along these lines to influence the U.S. presidential election. In wanting to learn more about Influence Ops, I stumbled across a paper from the Naval War College entitled “Influence Operations &amp; the Human Domain.” They helpfully give an example of their offensive playbook, from their Influence Ops campaign in the Philippines. But as I read it, I felt my expression of horror intensifying.
The similarities in the U.S.’s Influence Ops strategy in the Philippines and Russia’s Influence Ops strategy for the U.S. presidential election are stark. The primary difference is the Philippines campaign mentions leveraging social media, but it occurred at the dawn of its ubiquity, whereas Russia had access to social media’s fuller potency. I’ve pulled quotes on the U.S. strategy from the paper, and written my thoughts on how it maps to the Russian ops below — judge for yourself.
The first step is determining the worthwhile targets of your Influence Ops campaign. These targets seem to be referred to as your “mobilizable population,” which falls into three distinct categories:
Core supporters of the state (think: offense’s side — supporters of the Philippines government in the USG case; Trump’s supporter base in the Russian case) Core supporters of the insurgency (think: offense’s opposition — terrorists in the USG case; the “establishment,” and more specifically the Democratic party in the Russian case) Large middle group who are prepared to support one side or the other depending on circumstances (think: swing voters) [The third group’s members] are the fence sitters weighing the cost and benefit of aligning with one side or the other. This group is the focal point of the influence struggle. The first two groups are generally ideologically driven and are highly unlikely to change sides.
The enmity between groups of individuals in the United States who are party-first is well known — they’re intransigent and will only ever vote for their party, no matter the candidate’s actual qualifications or policy positions. However, it’s also known that winning swing voters is the only way to achieve victory — winning only the party’s base is not enough to guarantee success.
For the core supporters of the state, a specialized U.S. task force conducting Influence Operations and working with host-nation forces generally provides the host government with the resources, training, and/or support that is most appropriate for the operating environment
If we equate “core supporters of the state” with “our side,” this translates to: “A specialized Russian task force conducting Influence Operations and working with U.S. forces generally provides the Trump campaign with the resources, training, and/or support that is most appropriate for the operating environment.”
That large middle group, the impressionable majority of the population, becomes the focal point in a struggle between the insurgents and counterinsurgents for decisive influence. Many in this group will have an initial preference toward one side, but the side they choose to support depends on the expected costs and benefits of their alternatives.
Commonly, you hear that swing voters went to Trump for economic reasons. The goal is also to minimize the perceived cost, however — making it seem like the cost of a Trump Presidency was minimal. And indeed, the cost to cisgender, heterosexual, middle class and up white people is most likely minimal.
The primary target audience for JSOTF-P’s Influence Operations was the diverse Philippine population within the joint operations area. Secondary audiences included local Philippine government officials, Philippine security forces, and the Philippine population not directly affected or targeted by the insurgents.
I’d argue the “operations area” here is the electorate with the highest chance of turning towards Trump — so the Minivan Majority plus disaffected swing voters and Democrats. Secondary audiences included local American government officials, American security forces (likely Comey &amp; the FBI, given their seeming fealty to Trump), and the American population not directly affected or targeted by the “establishment” (upper middle class, educated white voters).
The phrase “as they create a secure and stable environment” was particularly significant. It remained critical for the Filipino population to see their own government in the lead, which made enhancing the Philippine Security Forces’ capacity to operate autonomously and more effectively a primary JSOTF-P mission.
It was crucial that American voters saw Trump as a candidate with his own agency, rather than as a Russian puppet.
The method of application began with a targeting process to identify which communities were most vulnerable to a particular threat or hostile influence.
Russia thus identified the communities, primarily white and non-urban, that felt most vulnerable to a “particular threat” — in this case, the multicultural globalization movement that they feel has left them behind socially and economically.
1. The first PSYOP LOE supported JSOTF-P’s civil-military engagement by personalizing AFP and JSOTF-P support to local communities
Russia’s Influence Ops and the Trump campaign did an excellent job of tailoring the message to each rally and to propaganda spread throughout social media channels.
2. The second PSYOP LOE was focused on disrupting insurgent operations by creating dissent among the insurgents as well as between the insurgents and the communities that traditionally supported or tolerated them.
The Russian Influence Ops campaign published the DNC emails via Wikileaks specifically for this scurrilous purpose — by making it seem that Bernie Sanders had been robbed of the nomination, they drew some of his supporters to Trump. The leaks in general were the foundation for their polemic against “the establishment,” and implicating Clinton in it.
3. The third major PSYOP LOE was the Rewards for Justice Campaign. This LOE identified the most heinous insurgent leaders, offered rewards leading to their arrest, and, more importantly, made personal connections between the atrocities committed and the insurgent leaders responsible for them.
In general, the Russian ops were adroit in making Clinton the boogeyman — painting her as the putative zenith of the USG’s and the “global elite’s” corruption and haughty ways. In particular, I think the “Lock Her Up!” chant is an exemplary use of this sort of seditious tactic. Not to mention playing up Clinton’s “Wall Street ties” and global focus — both seen as enemies of Main Street.
4. JSOTF-P’s fourth PSYOP LOE — the Mass Media Campaign — provided operational-level influence support to the task force as a whole and galvanized all three previous PSYOP LOEs together through an extensive and overt commercial multimedia campaign.
While they received some help from Fox News, for the most part the Russian Influence Ops campaign disseminated their calumny against existing U.S. power structures and Clinton via alt-right and white nationalist sites like Breitbart, or set up new domains for propaganda dissemination, such as americanlookout.com or endoftheamericandream.com. These became “mass media” through rampant social media sharing, particularly through Facebook.
Creating dissent within and between the two insurgent groups and the populace was dependent on fostering trust and developing favorable options for the affected people, thereby providing a viable and desirable alternative to living with an insurgent presence.
Trump decisively labeled Clinton as being part of the “establishment” and thus not for the “people.” However, what most helped him appear contumacious was breaking away from the GOP, labeling them the establishment as well, and significantly departing from their policy positions — for example, by being against free trade and pro-Putin.
A unique aspect of JSOTF-P’s influence messaging was the primacy of using CME and face-to-face engagements to validate the influence messages instead of employing reactive messages to address events after they occurred.
Trump held countless rallies, personalizing apoplectic messaging for each local community.
Our PSYOP messaging mediums capitalized upon these seams by amplifying the population’s silent-majority concerns and grievances with the ASG.
Examples of amplifying the population’s “silent-majority” concerns and grievances were Trump’s anti-immigration and anti-Muslim rhetoric, as well as dog-whistling white nationalism — tying black communities to inner cities, linking Mexican immigrants to rapists, and even leveraging anti-semitic global-banking-conspiracy / New World Order themes.
The PSYOP detachment paid careful attention not to show carnage but to encapsulate the fear and anguish of the witnesses, as well as the grim determination of the AFP and U.S. forces that were often the first to arrive on the scene with medical aid and security.
Ironically, it tends to be Trump’s base that talks about “feels over reals,” but ultimately Trump’s campaign was about inflaming their feels — the white middle class is afraid of becoming less relevant in a globalized world, afraid of the erosion of privilege that greater equality — on both a national and global scale — brings. Trump offered few tangible policies, let alone execution strategies, but preyed on the “anguish” of his base over jobs lost to automation and globalization.
The U.S. supported the Philippine government and security forces with access to information, intelligence, and modern technology to assist their efforts to build and maintain situational awareness, provide predictive analysis, and react to insurgent threats.
Replace with “Russia supported the Trump Campaign (and Wikileaks) with access to information, intelligence, and modern technology to assist their efforts to build and maintain situational awareness, provide predictive analysis, and react to threats from the Clinton campaign.”
Key to JSOTF-P’s Intelligence LOE success was the ability of U.S. intelligence personnel to “export” the processing, exploitation, and dissemination of the collected intelligence to the partner or host in order to build their capacity and give them ownership of the decision-making cycle.
A good chunk of this was giving hacked data from the DNC over to Wikileaks, but assuredly also to the Trump campaign.
The desired effect for terror groups is dissent within their ranks, discord from the populace, and their surrender, dissolution, and demonstrated defeat
Well, Russia certainly succeeded in sowing dissent within liberal ranks and discord from the populace…but they haven’t achieved the “demonstrated defeat” part yet.
Each village, community, province, and hostile group was unique within the concept of population-centric warfare, but they all shared cultural and personal commonalities.
Trump’s base shared a few key cultural and personal commonalities, such as being white as well as typically being less educated and living in more rural areas.
In the affected nation, the relevant population will generally choose the side that provides them with the greatest stability.
While many recognize the instability that a Russian puppet brings to our democracy, you can see why voters — primarily those who are white, in the Midwest, and lacking the skills to compete in an increasingly globalized, technological society — would view Trump as the candidate who would provide them with the greatest stability by theoretically rolling the clock back to the days of yore.
This, of course, is just one case study, but what I like about it is that it shows the U.S. IC was well aware of how a nation state conducts an Influence Ops campaign in the social media era. However, I couldn’t find any papers on countering a nation state’s Influence Ops campaign. Perhaps it’s classified, or perhaps we never assumed that a nation state would use our playbook against us.
That’s not to say that creating such a strategy would be trivial — I personally don’t have a great notion of what a playbook to counter nation-state Influence Ops would look like. I’m all too aware of the realities a decentralized media ecosystem brings and how puissant social media has become — it is prohibitively difficult for the government to quash damaging propaganda distributed through those channels.
The Grugq recently wrote about the challenges in countering such a campaign, and I’m inclined to believe he’s correct in his analysis. I also agree with McCain’s recent, strong suggestion of starting a new USIA, though it will require a radically different approach than the one employed before.
Finally, if you buy into my pattern-matching above, it’s challenging to conclude that the Trump campaign did not have a Russian retinue coordinating their efforts with Russia’s influence ops campaign. It’s either an exceptionally convenient coincidence that they were so in sync in pulling off this strategy, or something is rotten in the state of Denmark.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/spy-vs-spy.png" alt="An image of the two spys from the comic &amp;ldquo;Spy vs. Spy&amp;rdquo; shaking hands"></p>
<p>Roger Trinquier, the French counterinsurgency theorist, said, “The <a href="https://en.wikipedia.org/wiki/Sine_qua_non">sine qua non</a> of victory in [insurgent/counterinsurgent] warfare is the unconditional support of the people.” In Influence Operations, success is ultimately about the ability to overcome one’s status as an outsider.</p>
<p>The past few months have seen cyberwar intersect with Influence Operations, in Russia using chicanery along these lines to influence the U.S. presidential election. In wanting to learn more about Influence Ops, I stumbled across a paper from the Naval War College entitled <a href="https://www.usnwc.edu/getattachment/Departments---Colleges/Center-on-Irregular-Warfare---Armed-Groups/Publications/Scanzillo-and-Lopacienski---Influence-Operations-and-the-Human-Domain.pdf.aspx">“Influence Operations &amp; the Human Domain.”</a> They helpfully give an example of their offensive playbook, from their Influence Ops campaign in the Philippines. But as I read it, I felt my expression of horror intensifying.</p>
<p><img src="/blog/img/britney-shock.gif" alt="A gif of Britney Spears reacting with shock, of the negative kind"></p>
<p>The similarities in the U.S.’s Influence Ops strategy in the Philippines and Russia’s Influence Ops strategy for the U.S. presidential election are stark. The primary difference is the Philippines campaign mentions leveraging social media, but it occurred at the dawn of its ubiquity, whereas Russia had access to social media’s fuller potency. I’ve pulled quotes on the U.S. strategy from the paper, and written my thoughts on how it maps to the Russian ops below — judge for yourself.</p>
<hr>
<p>The first step is determining the worthwhile targets of your Influence Ops campaign. These targets seem to be referred to as your “mobilizable population,” which falls into three distinct categories:</p>
<ol>
<li>Core supporters of the state (think: offense’s side — supporters of the Philippines government in the USG case; Trump’s supporter base in the Russian case)</li>
<li>Core supporters of the insurgency (think: offense’s opposition — terrorists in the USG case; the “establishment,” and more specifically the Democratic party in the Russian case)</li>
<li>Large middle group who are prepared to support one side or the other depending on circumstances (think: swing voters)</li>
</ol>
<blockquote>
<p>[The third group’s members] are the fence sitters weighing the cost and benefit of aligning with one side or the other. This group is the focal point of the influence struggle. The first two groups are generally ideologically driven and are highly unlikely to change sides.</p>
</blockquote>
<p>The enmity between groups of individuals in the United States that are party-first is well known — they’re intransigent and will only ever vote for their party, no matter the candidate’s actual qualifications or policy positions. However, it’s also known that winning swing voters is the only way to achieve victory — only winning the party’s base is not enough to guarantee success.</p>
<blockquote>
<p>For the core supporters of the state, a specialized U.S. task force conducting Influence Operations and working with host-nation forces generally provides the host government with the resources, training, and/or support that is most appropriate for the operating environment</p>
</blockquote>
<p>If we equate “core supporters of the state” with “our side,” this translates to: “A specialized Russian task force conducting Influence Operations and working with U.S. forces generally provides the Trump campaign with the resources, training, and/or support that is most appropriate for the operating environment.”</p>
<blockquote>
<p>That large middle group, the impressionable majority of the population, becomes the focal point in a struggle between the insurgents and counterinsurgents for decisive influence. Many in this group will have an initial preference toward one side, but the side they choose to support depends on the expected costs and benefits of their alternatives.</p>
</blockquote>
<p>Commonly, you hear that swing voters went to Trump for economic reasons — the perceived benefit. The other half of the equation is minimizing the perceived cost — making it seem like the cost of a Trump Presidency was minimal. And indeed, the cost to cisgender, heterosexual, middle-class-and-up white people is most likely minimal.</p>
<blockquote>
<p>The primary target audience for JSOTF-P’s Influence Operations was the diverse Philippine population within the joint operations area. Secondary audiences included local Philippine government officials, Philippine security forces, and the Philippine population not directly affected or targeted by the insurgents.</p>
</blockquote>
<p>I’d argue the “operations area” here is the electorate with the highest chance of turning towards Trump — so the Minivan Majority plus disaffected swing voters and Democrats. Secondary audiences included local American government officials, American security forces (likely Comey &amp; the FBI, given their seeming fealty to Trump), and the American population not directly affected or targeted by the “establishment” (upper middle class, educated white voters).</p>
<blockquote>
<p>The phrase “as they create a secure and stable environment” was particularly significant. It remained critical for the Filipino population to see their own government in the lead, which made enhancing the Philippine Security Forces’ capacity to operate autonomously and more effectively a primary JSOTF-P mission.</p>
</blockquote>
<p>It was crucial that American voters saw Trump as a candidate with his own agency, rather than as a Russian puppet.</p>
<blockquote>
<p>The method of application began with a targeting process to identify which communities were most vulnerable to a particular threat or hostile influence.</p>
</blockquote>
<p>Russia thus identified the communities, primarily white and non-urban, that felt most vulnerable to a “particular threat” — in this case, the multicultural globalization movement that they feel has left them behind socially and economically.</p>
<blockquote>
<p>1. The first PSYOP LOE supported JSOTF-P’s civil-military engagement by personalizing AFP and JSOTF-P support to local communities</p>
</blockquote>
<p>Russia’s Influence Ops and the Trump campaign did an excellent job of tailoring the message to each rally and to propaganda spread throughout social media channels.</p>
<blockquote>
<p>2. The second PSYOP LOE was focused on disrupting insurgent operations by creating dissent among the insurgents as well as between the insurgents and the communities that traditionally supported or tolerated them.</p>
</blockquote>
<p>The Russian Influence Ops campaign published the DNC emails via Wikileaks specifically for this scurrilous purpose — by making it seem that Bernie Sanders had been robbed of the nomination, they drew some of his supporters to Trump. The leaks in general were the foundation for their polemic against “the establishment,” and implicating Clinton in it.</p>
<blockquote>
<p>3. The third major PSYOP LOE was the Rewards for Justice Campaign. This LOE identified the most heinous insurgent leaders, offered rewards leading to their arrest, and, more importantly, made personal connections between the atrocities committed and the insurgent leaders responsible for them.</p>
</blockquote>
<p>In general, the Russian ops were adroit in making Clinton the boogeyman — painting her as the putative zenith of the USG’s and the “global elite’s” corruption and haughty ways. In particular, I think the <a href="https://www.washingtonpost.com/news/the-fix/wp/2016/11/22/a-brief-history-of-the-lock-her-up-chant-as-it-looks-like-trump-might-not-even-try/">“Lock Her Up!” chant</a> is an exemplary use of this sort of seditious tactic. Not to mention playing up Clinton’s “Wall Street ties” and global focus — both seen as enemies of Main Street.</p>
<blockquote>
<p>4. JSOTF-P’s fourth PSYOP LOE — the Mass Media Campaign — provided operational-level influence support to the task force as a whole and galvanized all three previous PSYOP LOEs together through an extensive and overt commercial multimedia campaign.</p>
</blockquote>
<p>While they received some help from Fox News, for the most part the Russian Influence Ops campaign disseminated their calumny against existing U.S. power structures and Clinton via alt-right and white nationalist sites like Breitbart, or <a href="http://www.propornot.com/p/the-list.html">set up new domains for propaganda dissemination</a>, such as americanlookout.com or endoftheamericandream.com. These became “mass media” through rampant social media sharing, particularly through Facebook.</p>
<blockquote>
<p>Creating dissent within and between the two insurgent groups and the populace was dependent on fostering trust and developing favorable options for the affected people, thereby providing a viable and desirable alternative to living with an insurgent presence.</p>
</blockquote>
<p>Trump decisively labeled Clinton as being part of the “establishment” and thus not for the “people.” However, what most helped him appear contumacious was breaking away from the GOP, labeling them the establishment as well, and significantly departing from their policy positions — for example, by being against free trade and pro-Putin.</p>
<blockquote>
<p>A unique aspect of JSOTF-P’s influence messaging was the primacy of using CME and face-to-face engagements to validate the influence messages instead of employing reactive messages to address events after they occurred.</p>
</blockquote>
<p>Trump held countless rallies, personalizing apoplectic messaging for each local community.</p>
<blockquote>
<p>Our PSYOP messaging mediums capitalized upon these seams by amplifying the population’s silent-majority concerns and grievances with the ASG.</p>
</blockquote>
<p>Examples of amplifying the population’s “silent-majority” concerns and grievances were Trump’s anti-immigration and anti-Muslim rhetoric, as well as dog-whistling white nationalism — <a href="https://www.theatlantic.com/business/archive/2016/10/trump-african-american-inner-city/503744/">tying black communities to inner cities</a>, linking <a href="https://www.washingtonpost.com/news/fact-checker/wp/2015/07/08/donald-trumps-false-comments-connecting-mexican-immigrants-and-crime/">Mexican immigrants to rapists</a>, and even leveraging <a href="https://www.washingtonpost.com/opinions/anti-semitism-is-no-longer-an-undertone-of-trumps-campaign-its-the-melody/2016/11/07/b1ad6e22-a50a-11e6-8042-f4d111c862d1_story.html">anti-semitic global-banking-conspiracy</a> / <a href="https://en.wikipedia.org/wiki/New_World_Order_%28conspiracy_theory%29">New World Order</a> themes.</p>
<blockquote>
<p>The PSYOP detachment paid careful attention not to show carnage but to encapsulate the fear and anguish of the witnesses, as well as the grim determination of the AFP and U.S. forces that were often the first to arrive on the scene with medical aid and security.</p>
</blockquote>
<p>Ironically, it tends to be Trump’s base that talks about <a href="https://en.wiktionary.org/wiki/feels_over_reals">“feels over reals,”</a> but ultimately Trump’s campaign was about inflaming their feels — the white middle class is afraid of becoming less relevant in a globalized world, afraid of the erosion of privilege that greater equality — on both a national and global scale — brings. Trump offered few tangible policies, let alone execution strategies, but preyed on the “anguish” of his base over jobs lost to automation and globalization.</p>
<blockquote>
<p>The U.S. supported the Philippine government and security forces with access to information, intelligence, and modern technology to assist their efforts to build and maintain situational awareness, provide predictive analysis, and react to insurgent threats.</p>
</blockquote>
<p>Replace with “Russia supported the Trump Campaign (and Wikileaks) with access to information, intelligence, and modern technology to assist their efforts to build and maintain situational awareness, provide predictive analysis, and react to threats from the Clinton campaign.”</p>
<blockquote>
<p>Key to JSOTF-P’s Intelligence LOE success was the ability of U.S. intelligence personnel to “export” the processing, exploitation, and dissemination of the collected intelligence to the partner or host in order to build their capacity and give them ownership of the decision-making cycle.</p>
</blockquote>
<p>A good chunk of this was giving hacked data from the DNC over to Wikileaks, but assuredly also to the Trump campaign.</p>
<blockquote>
<p>The desired effect for terror groups is dissent within their ranks, discord from the populace, and their surrender, dissolution, and demonstrated defeat</p>
</blockquote>
<p>Well, Russia certainly succeeded in sowing dissent within liberal ranks and discord from the populace…but they haven’t achieved the “demonstrated defeat” part yet.</p>
<blockquote>
<p>Each village, community, province, and hostile group was unique within the concept of population-centric warfare, but they all shared cultural and personal commonalities.</p>
</blockquote>
<p>Trump’s base shared a few key cultural and personal commonalities, such as being white as well as typically being less educated and living in more rural areas.</p>
<blockquote>
<p>In the affected nation, the relevant population will generally choose the side that provides them with the greatest stability.</p>
</blockquote>
<p>While many recognize the instability that a Russian puppet brings to our democracy, you can see why voters — primarily those who are white, in the Midwest, and lacking the skills to compete in an increasingly globalized, technological society — would view Trump as the candidate who would provide them with the greatest stability by theoretically rolling the clock back to the days of yore.</p>
<hr>
<p>This, of course, is just one case study, but what I like about it is that it shows the U.S. IC was well aware of how a nation state conducts an Influence Ops campaign in the social media era. However, I couldn’t find any papers on countering a nation state’s Influence Ops campaign. Perhaps it’s classified, or perhaps we never assumed that a nation state would use our playbook against us.</p>
<p>That’s not to say that creating such a strategy would be trivial — I personally don’t have a great notion of what a playbook to counter nation-state Influence Ops would look like. I’m all too aware of the realities a decentralized media ecosystem brings and how puissant social media has become — it is prohibitively difficult for the government to quash damaging propaganda distributed through those channels.</p>
<p>The Grugq recently wrote about <a href="https://medium.com/@thegrugq/security-cyber-and-elections-part-4-e327e527132a#.ab5xfs6za">the challenges in countering such a campaign</a>, and I’m inclined to believe he’s correct in his analysis. I also agree with McCain’s recent, strong suggestion of starting a new <a href="https://en.wikipedia.org/wiki/United_States_Information_Agency">USIA</a>, though it will require a radically different approach than the one employed before.</p>
<p>Finally, if you buy into my pattern-matching above, it’s challenging to conclude that the Trump campaign did not have a Russian retinue coordinating their efforts with Russia’s influence ops campaign. It’s either an exceptionally convenient coincidence that they were so in sync in pulling off this strategy, or something is rotten in the state of Denmark.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Revisiting 2016 Security Predictions</title>
            <link>https://kellyshortridge.com/blog/posts/revisiting-2016-security-predictions/</link>
            <pubDate>Fri, 30 Dec 2016 21:08:55 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/revisiting-2016-security-predictions/</guid>
            <description>The flaming word cloud of the cyberpocalypse, from 16 of the larger security vendors’ 2016 predictions
I totally get it — it’d be boring to say year after year, “Yep, phishing definitely still works,” so security vendors instead pour creative thinking and marketing pizzazz into their annual security predictions. Most seem to aim to match the majority of their predictions to other vendors’, with generally one or two unique predictions thrown in to show off their innovative thought leadering. Sometimes those unique ones are spot-on and glorify their authors, and other times we can look back a year later and share a good chuckle.
I chose 16 prediction reports for my analysis, published between September 2015 and February 2016 (later than that would give an unfair advantage), mostly by the larger security vendors and also Wired, since they’re one of the few publications that publish their own predictions rather than crowdsourcing.
I’ll delve into:
What actually happened in 2016? Which predictions were right? (naming names) Which predictions were off? (not naming names) What actually happened in 2016? Looking back on infosec news in 2016, these seem to have been the biggest stories:
Grizzly Steppe / Fancy Bear Unless you’ve been living under a rock, Russia waged a (successful) campaign to stoke chaos and influence the U.S. election — hacking the DNC (and RNC), spreading fake news, etc. FBI / DHS just released a joint report on the campaign, deeming it “GRIZZLY STEPPE.” Crowdstrike named the group “Fancy Bear,” and has written about its intrusion into the DNC as well as its use of Android malware to infiltrate Ukrainian field artillery units (which seemed to be a response to skeptics of attributing the DNC hack to Russia).
So how was this epic, history-shaping attack conducted? The initial delivery vector was spear-phishing…if it ain’t broke, don’t fix it. To be fair, this was “sophisticated” as far as spear-phishing goes — using legitimate domains and then spoofing a Google suspicious account activity email (not something obvious like malware.delivery.plzclick.ru), which ultimately tricked recipients into entering their passwords through a fake webmail domain that appeared to be a password reset page. Then, the credentials were used to harvest the emails, exfiltrate them through encrypted communications and create the #Podesta debacle.
Mirai Botnet (Dyn DDoS) People panicked when Dyn, a DNS service provider, was getting DDoSed, breaking the internet even more than Kim Kardashian. Then, we found out that the nemesis was a botnet composed of hundreds of thousands of connected devices, serving as a zombie army of “things” for their overlords. All Mirai did was log into devices by using the factory default passwords, which was successful enough to infect over 1 million devices.
The author of Mirai open sourced it, and given that the security of IoT devices hasn’t radically improved in the past few months, and consumers are generally allergic to security responsibility, it’s easy to imagine massive IoT botnets happening again.
Crypto-ransomware boom Most recently, San Francisco’s Municipal Transportation Agency was infected by ransomware (specifically HDDCryptor), with the attackers demanding $73k in exchange for restoring the data. However, SFMTA decided instead to let riders ride for free for a bit, and then fixed the problem by using a backup to restore their systems. The initial access was gained by a vulnerability involving Oracle’s WebLogic server and the Apache Commons library (a deserialization vulnerability), and SFMTA only became a target because the attackers used a web scanner to find vulnerable servers.
Hollywood Presbyterian Medical Center in Los Angeles was also held hostage through the ransomware “Locky,” but ended up paying only $17k in BTC vs. the initial demand of $3.4 million. Locky ended up infecting a few other hospitals, as well, and is particularly nasty because it looks for and erases Volume Shadow Copy files, which means automatic backups are erased.
SWIFT wire transfer hax Hackers stole $81 million from accounts at Bangladesh’s central bank within a few hours, and that wasn’t the only bank they attacked. They achieved this by getting bank employees’ credentials to SWIFT, the network between banks that processes most of the world’s wire transfers. There’s nothing concrete on how they got the credentials, but we do know they used malware to subvert SWIFT’s software for recording money transfers — which is far from ideal. It led SWIFT to push for 2FA, which should help in the future.
Adult Friend Finder This breach received a perfect 10 on the breach level index — over 400 million records were exposed. There isn’t much information on the breach yet, but it appears the passwords were kept in plaintext or hashed with SHA1. The attackers allegedly exploited a local file inclusion vulnerability, meaning it’s yet another web app attack.
Honorable mention goes to two other hookup sites that were breached and stored their passwords in plaintext. Fling.com had credentials for 40 million of its users on sale for 0.8888BTC in May (~$400 at the time). Mate1.com had over 27 million plaintext passwords for sale, but for 20BTC in February (~$8,700 at the time). For Mate1, the hacker said that they compromised the server and dumped the MySQL database, with no further details.
Yahoo data breach It affected 1 billion user accounts…which was surprising because most people never thought Yahoo had that many users. I won’t be counting this towards scoring the predictions, given the actual attack happened in 2013. But, there’s no denying it received substantial coverage, and will be an interesting case study going forward if it results in Verizon successfully reducing the purchase price of the acquisition…or not acquiring them at all.
Which predictions were right? (naming names) Spear phishing was mentioned 11 times total in all of the predictions — half of those were about mobile-specific spear phishing, and the rest mostly about the need to train employees on how to look out for it. Only McAfee and Sophos made any mention of hackers using “sophisticated” spear phishing more often, and even then it was a minor point in their larger reports. Election-themed phishing was predicted by Forcepoint, which I cover below, but no mention of spear phishing methods specifically. So, no one really wins on this one.
Kaspersky and Intel Security accurately predicted — although embedded rather than one of their primary predictions — that libraries used by servers (particularly open source libraries) would increasingly be targets, which was relevant in the SFMTA case. Otherwise, web app / server-side app attacks were virtually ignored — despite being a prevalent attack vector year after year. And exactly zero of the reports mentioned anything about SWIFT, or wire transfers more generally — the focus was primarily on credit card systems.
But other events fared better in their coverage:
Influence operations &amp; U.S. election-themed attacks First up, ForcePoint (Raytheon) hit the nail directly on the head with their prediction on U.S. election-related influence operations. While a few others brought up electronic voting systems, ForcePoint specifically called out the fake news phenomenon (albeit missing involvement by nation-state threat actors):
Information on social media is often spread and accepted before fact can catch up with fiction, giving determined hacktivists an opening to misrepresent and/or misdirect the public’s perception of individuals and events… …the other hand suggests there’s little to prevent incendiary, inaccurate information from virally spreading and being accepted by the public as factual. Even if such information is later corrected, this false information lives forever on the Internet, with the potential to inform opinions and as a result misinform — and potentially direct the actions of — the electorate.
They also correctly predicted the use of election-themed phishing campaigns, and even that candidates would be targeted:
However, given the influence the choice of a U.S. President can have…it’s not hard to envision a circumstance where factions hoping to gain insight or advantage in an election or following it, might target a candidate or groups involved in promoting them for useful data in keeping ahead of or undermining the competition
So, congrats on them for being right, however unfortunate for the rest of us.
IoT Botnets The other big winner is Wired, who predicted the Mirai botnet, or what they called “the rise of the IoT Zombie Botnet.”
One trend we’ve already spotted is the commandeering of IoT devices for botnets. Instead of hackers hijacking your laptop for their zombie army, they will commandeer large networks of IoT devices — like CCTV surveillance cameras, smart TVs, and home automation systems.
Anomali also gets some credit for this, too, although it was a minor point in their larger prediction about IoT exploitation.
Ransomware…to a certain extent This was easily one of the safest predictions to make last year, which is probably why nearly all the vendors mentioned it. Some took a more hyperbolic approach and overshot its impact and potential damage. Others head-scratching-ly said that ransomware would “go corporate,” although there had already been plenty of documented cases of corporate ransomware before 2016. But technically they weren’t wrong about it hurting businesses more, so I’ll allow it.
Many warned about potential extortion, in which the attackers would threaten to go public with data in the hopes of receiving a higher ransom, but for obvious reasons we probably wouldn’t hear about those cases publicly, so the jury’s still out. I’d argue this fear was overblown, given we saw three major hookup site breaches, credentials for which were being sold on dark web forums (when they could’ve been used for extortion).
Predictions that ransomware would become cross-platform were also technically accurate, as there are now documented cases of ransomware for Linux and Mac — although not many documented families of ransomware for these platforms as of yet. I’d personally say Linux ransomware could be a possibility for 2017 — after all, it’s what constitutes modern infrastructure for most enterprises.
Attacks hidden in SSL vs cleartext According to an A10 Networks / Ponemon study from this summer, 41% of attacks used malware hidden in SSL traffic to evade detection. Appliances have a difficult time quickly inspecting SSL traffic to detect malware, making it a new headache for enterprises. And it turns out A10 Networks was the only one to predict this trend.
Which predictions were off? (not naming names) IoT in theory, but not reality We won’t see widespread examples of attackers getting IoT devices to run arbitrary code any time soon.
Welp…see the aforementioned Mirai botnet.
PKI ubiquity In 2016, we expect that PKI will become ubiquitous security technology within the IoT market.
A safe prediction for 2017 is that PKI will continue to be a mess, and so will IoT security.
Drone hackpocalypse However, drones also present a wide range of risks, from privacy invasion to corporate espionage to terrorism.
Sure, but realistically it’s confined to nation-state level for now, so it’s dubious if it belongs in a predictions report aimed at corporate CISOs. Some research was published on how to hack drones, but there’s nothing confirmed in-the-wild. Amazon also just published a patent for protecting its delivery drones against hacking…and against bows and arrows. My guess is this prediction is supposed to fall under the Rule of Cool rather than having a logical basis for impact to enterprises.
Terrorists pick up the (cyber)bomb In 2016, we will increasingly see the convergence of physical and cyber terrorism aimed at wreaking far-reaching havoc.
Terrorists did not become cyber ninjas blowing up power plants all over the place. The Grugq has already written about this. Squirrels are still the better conductors of cyber war ops against our critical infrastructure.
Post-quantum crypto “The cryptopocalypse is nigh.” (due to quantum computing)
Nope. Probably a prediction that can be safely shelved for a few years.
Cyber. insurance. changes. everything. The cyber insurance market will dramatically disrupt businesses in the next 12 months. In 2016 many companies will turn to cyber insurance as another layer of protection, particularly as cyber attacks start mirroring physical world attacks.
Looks like most companies still think insurance is inadequate risk mitigation. According to the SANS survey, only 33.5% of companies have cyber insurance. What’s more, 83% of respondents to a PartnerRe survey said that cyber insurance policies are only “sometimes” meeting the needs of insured companies. So, while it may be blossoming as a new “check the box” item, it’s nowhere near disrupting enterprise security strategies yet.
Mobile app exploitation With the growing amount of malware and the vulnerabilities present in legitimate mobile apps, a major breach is bound to happen, potentially on a massive scale.
While many of the predictions highlighted mobile as an attack vector in general, where they specifically got it wrong was in thinking that vulnerabilities in mobile apps would be exploited. Why expend the effort when malicious apps still work just fine? See: the poisoned app Fancy Bear used to hack Ukrainian field artillery units. There haven’t been any major corporate breaches directly tied to mobile malware, though I’ll concede it’s possible they just aren’t public. Mobile malware grew at a very healthy pace in 2016, however:
Wearables as the new sexy attack vector Initially, we doubt that a smartphone will be completely compromised by an attack through a wearables device, but we expect to see the control apps for wearables compromised in the next 12 to 18 months in a way that will provide valuable data for spear-phishing attacks.
There’s been research on hacking them, but as far as I can find, no evidence that hacking wearables has ever been used as part of a corporate data breach. The only headlines in 2016 are variations of “Hackers targeting your wearables data?” or “Can Wearable Technology Threaten the Security of Your Biz?” so I’ll invoke Betteridge’s law of headlines and say the answer is no.
Remote-controlled cars Attacks on automobile systems will increase rapidly in 2016 due to the rapid increase in connected automobile hardware built without foundational security principles.
This didn’t happen. Seriously cool research, but no reports in the wild — just because something can be hacked doesn’t mean it’s easily hacked.
Why is it that every headline is &#39;product X easily hacked&#39;. Hacking a car was hard. Hacking is hard. It is good that it is hard.
— Chris Valasek (@nudehaberdasher) December 4, 2016
Password reuse attacks will decline Password reuse attacks will begin to decline… people are starting to adopt password managers… Advancements in biometrics are also helping the cause…most major Internet players are also adding two-factor authentication as a standard option
All of those things might be true, but not enough to make a dent. These are the entities that were publicly victims of password reuse attacks this year alone: Carbonite, Citrix GoToMyPC, Deliveroo, GitHub, Groupon, Mark Zuckerberg, TeamViewer, U.K. National Lottery. Further, Patrick Heim, Dropbox’s Head of Trust &amp; Security, said in September that “99% of compromised user accounts come from password reuse.” Seems more like wishful thinking / “it’s-2016-how-is-this-still-happening?!” than a prediction.
Conclusion I’ve already started reading some of the 2017 predictions — being diplomatic, I’ll say I’m looking forward to seeing how the next year unfolds. I saw one list described as not containing any “wows,” which I think accurately pinpoints the problem with these lists in general.
It’s more an exercise in taking a provocative stance to raise attention and be able to tell customers how your product addresses this important upcoming issue, rather than a probability-weighted list of the actual threats enterprises will face in the upcoming year — users will keep clicking on things they shouldn’t, injection vulnerabilities are still in all the things, and not enough people use 2FA or encrypt their users’ credentials.
In conclusion, my ultimate 2017 prediction is that I’ll have plenty of content for an equivalent of this post next year.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/2016-predictions.png" alt="Word cloud of 2016 predictions"><em>The flaming word cloud of the cyberpocalypse, from 16 of the larger security vendors’ 2016 predictions</em></p>
<p>I totally get it — it’d be boring to say year after year, “Yep, phishing definitely still works,” so security vendors instead pour creative thinking and marketing pizzazz into their annual security predictions. Most seem to aim to match the majority of their predictions to other vendors’, with generally one or two unique predictions thrown in to show off their innovative thought leadering. Sometimes those unique ones are spot-on and glorify their authors, and other times we can look back a year later and share a good chuckle.</p>
<p>I chose 16 prediction reports for my analysis, published between September 2015 and February 2016 (later than that would give an unfair advantage), mostly by the larger security vendors and also Wired, since they’re one of the few publications that publish their own predictions rather than crowdsourcing.</p>
<p>I&rsquo;ll delve into:</p>
<ol>
<li><a href="#what-happened">What actually happened in 2016?</a></li>
<li><a href="#correct-predictions">Which predictions were right? (naming names)</a></li>
<li><a href="#wrong-predictions">Which predictions were off? (not naming names)</a></li>
</ol>
<hr>
<p><img src="/blog/img/fancy-bear.jpg" alt="The fancy bear logo by Crowdstrike"></p>
<h2 id="a-namewhat-happenedawhat-actually-happened-in-2016"><a name="what-happened"></a>What actually happened in 2016?</h2>
<p>Looking back on infosec news in 2016, these seem to have been the biggest stories:</p>
<h3 id="grizzly-steppe--fancy-bear">Grizzly Steppe / Fancy Bear</h3>
<p>Unless you’ve been living under a rock, Russia waged a (successful) campaign to stoke chaos and influence the U.S. election — hacking the DNC (and RNC), spreading fake news, etc. <a href="https://www.us-cert.gov/sites/default/files/publications/JAR_16-20296.pdf">FBI / DHS just released a joint report on the campaign</a>, deeming it “GRIZZLY STEPPE.” Crowdstrike named the group <a href="https://www.crowdstrike.com/blog/who-is-fancy-bear/">“Fancy Bear,”</a> and has written about its intrusion into the DNC as well as its use of <a href="https://www.crowdstrike.com/wp-content/brochures/FancyBearTracksUkrainianArtillery.pdf">Android malware to infiltrate Ukrainian field artillery units</a> (which seemed to be a response to skeptics of attributing the DNC hack to Russia).</p>
<p>So how was this epic, history-shaping attack conducted? The initial delivery vector was spear-phishing…if it ain’t broke, don’t fix it. To be fair, this was “sophisticated” as far as spear-phishing goes — using legitimate domains and then spoofing a Google suspicious account activity email (not something obvious like malware.delivery.plzclick.ru), which ultimately tricked recipients into entering their passwords through a fake webmail domain that appeared to be a password reset page. Then, the credentials were used to harvest the emails, exfiltrate them through encrypted communications and create the #Podesta debacle.</p>
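<p>For the curious, the core red flag in that lure is mechanical enough to check in a few lines — a “Google” alert whose links resolve somewhere that isn’t Google. The sketch below is purely my own illustration (the helper names, lookalike domain, and email body are made up), not anything from the GRIZZLY STEPPE report:</p>
<pre><code># Minimal sketch: flag links whose host doesn't belong to the claimed sender.
import re
from urllib.parse import urlparse

def link_domains(body):
    """Pull the hostname out of every http(s) URL in an email body."""
    urls = re.findall(r"https?://[^\s\"')]+", body)
    return {urlparse(u).hostname for u in urls}

def suspicious_links(sender_domain, body):
    """Return links whose host is neither the sender's domain nor a subdomain of it."""
    return [d for d in link_domains(body)
            if d and d != sender_domain and not d.endswith("." + sender_domain)]

body = "Someone has your password. Reset it here: http://accoounts-google.com/reset"
print(suspicious_links("google.com", body))   # ['accoounts-google.com']
</code></pre>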
<h3 id="mirai-botnet-dyn-ddos">Mirai Botnet (Dyn DDoS)</h3>
<p>People panicked when Dyn, a DNS service provider, was getting DDoSed, breaking the internet even more than Kim Kardashian. Then, we found out that the nemesis was a botnet composed of <a href="http://www.theregister.co.uk/2016/10/21/dyn_dns_ddos_explained/">hundreds of thousands of connected devices</a>, serving as a zombie army of “things” for their overlords. All Mirai did was log into devices by using the factory default passwords, which was successful enough to infect over 1 million devices.</p>
<p>The author of Mirai open sourced it, and given that the security of IoT devices hasn’t radically improved in the past few months, and consumers are generally allergic to security responsibility, <a href="https://www.wired.com/2016/12/botnet-broke-internet-isnt-going-away/">it’s easy to imagine massive IoT botnets happening again</a>.</p>
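<p>To appreciate just how low the bar was: the entire “technique” is trying a short list of factory logins. The sketch below is my own illustration — for auditing devices you own, not Mirai’s actual code — and the hosts and credential list are assumptions (it uses Python’s standard-library telnetlib):</p>
<pre><code># Minimal sketch: check your own devices for factory-default telnet logins.
import telnetlib

DEFAULT_CREDS = [("admin", "admin"), ("root", "root"), ("root", "12345")]

def accepts_default_login(host, user, password, timeout=5):
    """Return True if the device lets us in with a factory-default credential."""
    try:
        tn = telnetlib.Telnet(host, 23, timeout)
        tn.read_until(b"login: ", timeout)
        tn.write(user.encode() + b"\n")
        tn.read_until(b"Password: ", timeout)
        tn.write(password.encode() + b"\n")
        banner = tn.read_until(b"$ ", timeout)   # crude "did we get a shell" check
        tn.close()
        return b"incorrect" not in banner.lower()
    except OSError:
        return False

for host in ["192.168.1.20", "192.168.1.21"]:        # devices you own, only
    for user, password in DEFAULT_CREDS:
        if accepts_default_login(host, user, password):
            print(host, "still accepts", user + ":" + password)
</code></pre>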
<h3 id="crypto-ransomware-boom">Crypto-ransomware boom</h3>
<p>Most recently, San Francisco’s Municipal Transportation Agency <a href="http://arstechnica.com/security/2016/11/san-francisco-muni-hit-by-black-friday-ransomware-attack/">was infected by ransomware (specifically HDDCryptor)</a>, with the attackers demanding $73k in exchange for restoring the data. However, SFMTA decided instead to let riders ride for free for a bit, and then fixed the problem by using a backup to restore their systems. The initial access was gained by a vulnerability involving Oracle’s WebLogic server and the Apache Commons library (a deserialization vulnerability), and SFMTA only became a target because the attackers used a web scanner to find vulnerable servers.</p>
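<p>The SFMTA foothold was a Java deserialization bug, but the class of flaw is language-agnostic. Here is my Python analogy of it (not the actual exploit) using pickle, just to show why “deserialize untrusted bytes” is effectively “run untrusted code”:</p>
<pre><code># Minimal sketch of the unsafe-deserialization pattern, illustrated with pickle.
import pickle

class Gadget:
    # __reduce__ tells pickle how to rebuild the object -- which means
    # unpickling attacker-supplied data can invoke any callable the attacker names.
    def __reduce__(self):
        return (print, ("arbitrary code ran during deserialization",))

evil_bytes = pickle.dumps(Gadget())   # what an attacker would send over the wire
pickle.loads(evil_bytes)              # "deserializing" runs their payload

# The fix is the same in any language: parse untrusted input with a data-only
# format (e.g. json.loads) or a strictly allow-listed deserializer.
</code></pre>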
<p><a href="https://www.wired.com/2016/02/hack-brief-hackers-are-holding-an-la-hospitals-computers-hostage/">Hollywood Presbyterian Medical Center in Los Angeles was also held hostage</a> through the ransomware “Locky,” but ended up paying only $17k in BTC vs. the initial demand of $3.4 million. Locky ended up infecting a few other hospitals, as well, and is particularly nasty because it looks for and erases Volume Shadow Copy files, which means automatic backups are erased.</p>
<h3 id="swift-wire-transfer-hax">SWIFT wire transfer hax</h3>
<p><a href="https://www.wired.com/2016/05/insane-81m-bangladesh-bank-heist-heres-know/">Hackers stole $81 million from accounts at Bangladesh’s central bank</a> within a few hours, and that wasn’t the only bank they attacked. They achieved this by getting bank employees’ credentials to SWIFT, the network between banks that processes most of the world’s wire transfers. There’s nothing concrete on how they got the credentials, but we do know they used malware to subvert SWIFT’s software for recording money transfers — which is far from ideal. It resulted in SWIFT to push for 2FA, which should help in the future.</p>
<h3 id="adult-friend-finder">Adult Friend Finder</h3>
<p>This breach received a perfect 10 on the <a href="http://breachlevelindex.com/top-data-breaches">breach level index</a> — <a href="http://breachlevelindex.com/top-data-breaches">over 400 million records were exposed</a>. There isn’t much information on the breach yet, but it appears the passwords were kept in plaintext or hashed with SHA1. The attackers allegedly exploited a <a href="http://www.csoonline.com/article/3132533/security/researcher-says-adult-friend-finder-vulnerable-to-file-inclusion-vulnerabilities.html">local file inclusion vulnerability</a>, meaning it’s yet another web app attack.</p>
<p>Honorable mention goes to two other hookup sites that were breached and stored their passwords in plaintext. Fling.com had credentials for <a href="http://www.ibtimes.co.uk/fling-com-breach-passwords-sexual-preferences-40-million-users-sale-dark-web-1558711">40 million of its users on sale for 0.8888BTC</a> in May (~$400 at the time). Mate1.com had <a href="https://motherboard.vice.com/read/hacker-claims-to-have-sold-27m-dating-site-passwords-mate1-com-hell-forum">over 27 million plaintext passwords for sale</a>, but for 20BTC in February (~$8,700 at the time). For Mate1, the hacker said that they compromised the server and dumped the MySQL database, with no further details.</p>
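<p>Since “plaintext or bare SHA1” keeps showing up, here is roughly what the alternative looks like — a salted, deliberately slow KDF from Python’s standard library. It’s a minimal sketch; the iteration count is illustrative, and bcrypt/scrypt/argon2 are equally reasonable choices:</p>
<pre><code># Minimal sketch: salted, slow password hashing instead of plaintext or bare SHA1.
import hashlib, hmac, os

def hash_password(password, salt=None, rounds=200_000):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, expected, rounds=200_000):
    _, digest = hash_password(password, salt, rounds)
    return hmac.compare_digest(digest, expected)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("hunter2", salt, stored))                       # False
</code></pre>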
<h3 id="yahoo-data-breach">Yahoo data breach</h3>
<p><a href="http://www.nytimes.com/2016/12/14/technology/yahoo-hack.html?_r=0">It affected 1 billion user accounts</a>…which was surprising because most people never thought Yahoo had that many users. I won’t be counting this towards scoring the predictions, given the actual attack happened in 2013. But, there’s no denying it received substantial coverage, and will be an interesting case study going forward if it results in <a href="https://techcrunch.com/2016/10/06/report-verizon-wants-1-billion-discount-after-yahoo-privacy-concerns/">Verizon successfully reducing the purchase price of the acquisition</a>…or not acquiring them at all.</p>
<hr>
<p><img src="/blog/img/bad-cyberart-09.jpg" alt="A crystal ball"></p>
<h2 id="a-namecorrect-predictionsawhich-predictions-were-right-naming-names"><a name="correct-predictions"></a>Which predictions were right? (naming names)</h2>
<p>Spear phishing was mentioned 11 times total in all of the predictions — half of those were about mobile-specific spear phishing, and the rest mostly about the need to train employees on how to look out for it. Only McAfee and Sophos made any mention of hackers using “sophisticated” spear phishing more often, and even then it was a minor point in their larger reports. Election-themed phishing was predicted by Forcepoint, which I cover below, but no mention of spear phishing methods specifically. So, no one really wins on this one.</p>
<p>Kaspersky and Intel Security accurately predicted — although embedded rather than one of their primary predictions — that libraries used by servers (particularly open source libraries) would increasingly be targets, which was relevant in the SFMTA case. Otherwise, web app / server-side app attacks were virtually ignored — despite being a prevalent attack vector year after year. And exactly zero of the reports mentioned anything about SWIFT, or wire transfers more generally — the focus was primarily on credit card systems.</p>
<p>But other events fared better in their coverage:</p>
<h3 id="influence-operations--us-election-themed-attacks">Influence operations &amp; U.S. election-themed attacks</h3>
<p>First up, ForcePoint (Raytheon) hit the nail directly on the head with their prediction on U.S. election-related influence operations. While a few others brought up electronic voting systems, ForcePoint specifically called out the fake news phenomenon (albeit missing involvement by nation-state threat actors):</p>
<blockquote>
<p>Information on social media is often spread and accepted before fact can catch up with fiction, giving determined hacktivists an opening to misrepresent and/or misdirect the public’s perception of individuals and events…</p>
<p>…the other hand suggests there’s little to prevent incendiary, inaccurate information from virally spreading and being accepted by the public as factual. Even if such information is later corrected, this false information lives forever on the Internet, with the potential to inform opinions and as a result misinform — and potentially direct the actions of — the electorate.</p>
</blockquote>
<p>They also correctly predicted the use of election-themed phishing campaigns, and even that candidates would be targeted:</p>
<blockquote>
<p>However, given the influence the choice of a U.S. President can have…it’s not hard to envision a circumstance where factions hoping to gain insight or advantage in an election or following it, might target a candidate or groups involved in promoting them for useful data in keeping ahead of or undermining the competition</p>
</blockquote>
<p>So, congrats on them for being right, however unfortunate for the rest of us.</p>
<h3 id="iot-botnets">IoT Botnets</h3>
<p>The other big winner is Wired, who predicted the Mirai botnet, or what they called “the rise of the IoT Zombie Botnet.”</p>
<blockquote>
<p>One trend we’ve already spotted is the commandeering of IoT devices for botnets. Instead of hackers hijacking your laptop for their zombie army, they will commandeer large networks of IoT devices — like CCTV surveillance cameras, smart TVs, and home automation systems.</p>
</blockquote>
<p>Anomali also gets some credit for this, too, although it was a minor point in their larger prediction about IoT exploitation.</p>
<h3 id="ransomwareto-a-certain-extent">Ransomware…to a certain extent</h3>
<p>This was easily one of the safest predictions to make last year, which is probably why nearly all the vendors mentioned it. Some took a more hyperbolic approach and overshot its impact and potential damage. Others head-scratching-ly said that ransomware would “go corporate,” although there had already been plenty of documented cases of corporate ransomware before 2016. But technically they weren’t wrong about it hurting businesses more, so I’ll allow it.</p>
<p>Many warned about potential extortion, in which the attackers would threaten to go public with data in the hopes of receiving a higher ransom, but for obvious reasons we probably wouldn’t hear about those cases publicly, so the jury’s still out. I’d argue this fear was overblown, given we saw three major hookup site breaches, credentials for which were being sold on dark web forums (when they could’ve been used for extortion).</p>
<p>Predictions that ransomware would become cross-platform were also technically accurate, as there are now documented cases of ransomware for <a href="http://www.computerworld.com/article/3113658/security/new-ransomware-threat-deletes-files-from-linux-web-servers.html">Linux</a> and <a href="http://www.theregister.co.uk/2016/03/09/first_macosx_ransomware_actually_linux_port/">Mac</a> — although not many documented families of ransomware for these platforms as of yet. I’d personally say Linux ransomware could be a possibility for 2017 — after all, it’s what constitutes modern infrastructure for most enterprises.</p>
<h3 id="attacks-hidden-in-ssl-vs-cleartext">Attacks hidden in SSL vs cleartext</h3>
<p>According to an <a href="https://www.a10networks.com/news/cybersecurity-report-organizations-victimized-by-malware-hidden-in-encrypted-traffic">A10 Networks / Ponemon study</a> from this summer, 41% of attacks used malware hidden in SSL traffic to evade detection. Appliances have a difficult time quickly inspecting SSL traffic to detect malware, making it a new headache for enterprises. And it turns out A10 Networks was the only one to predict this trend.</p>
<hr>
<p><img src="/blog/img/bad-cyberart-10.png" alt="A digital thumbs down"></p>
<h2 id="a-namewrong-predictionsawhich-predictions-were-off-not-naming-names"><a name="wrong-predictions"></a>Which predictions were off? (not naming names)</h2>
<h3 id="iot-in-theory-but-not-reality">IoT in theory, but not reality</h3>
<blockquote>
<p>We won’t see widespread examples of attackers getting IoT devices to run arbitrary code any time soon.</p>
</blockquote>
<p>Welp…see the aforementioned Mirai botnet.</p>
<h3 id="pki-ubiquity">PKI ubiquity</h3>
<blockquote>
<p>In 2016, we expect that PKI will become ubiquitous security technology within the IoT market.</p>
</blockquote>
<p>A safe prediction for 2017 is that PKI will continue to be a mess, and so will IoT security.</p>
<h3 id="drone-hackpocalypse">Drone hackpocalypse</h3>
<blockquote>
<p>However, drones also present a wide range of risks, from privacy invasion to corporate espionage to terrorism.</p>
</blockquote>
<p>Sure, but realistically it’s confined to nation-state level for now, so it’s dubious if it belongs in a predictions report aimed at corporate CISOs. Some <a href="https://www.wired.com/2016/03/hacker-says-can-hijack-35k-police-drone-mile-away/">research</a> <a href="https://hub.jhu.edu/2016/06/08/hacking-drones-security-flaws/">was</a> <a href="http://arstechnica.com/security/2016/10/drone-hijacker-gives-hackers-complete-control-of-aircraft-in-midflight/">published</a> on how to hack drones, but there’s nothing confirmed in-the-wild. Amazon also just <a href="http://qz.com/873920/amazon-has-a-plan-to-defend-drones-from-hackers-and-bow-and-arrow-wielding-troublemakers/">published a patent</a> for protecting its delivery drones against hacking…and against bows and arrows. My guess is this prediction is supposed to fall under the <a href="http://qz.com/873920/amazon-has-a-plan-to-defend-drones-from-hackers-and-bow-and-arrow-wielding-troublemakers/">Rule of Cool</a> rather than having a logical basis for impact to enterprises.</p>
<h3 id="terrorists-pick-up-the-cyberbomb">Terrorists pick up the (cyber)bomb</h3>
<blockquote>
<p>In 2016, we will increasingly see the convergence of physical and cyber terrorism aimed at wreaking far-reaching havoc.</p>
</blockquote>
<p>Terrorists did not become cyber ninjas blowing up power plants all over the place. The Grugq has <a href="https://medium.com/@thegrugq/isis-cyber-security-skills-suck-cc3466aa73f7#.jhezy91mj">already written</a> <a href="https://medium.com/@thegrugq/just-the-facts-isis-encryption-c70f258c0f7#.e1ovjsbl8">about this</a>. <a href="http://cybersquirrel1.com/">Squirrels are still the better conductors of cyber war ops</a> against our critical infrastructure.</p>
<h3 id="post-quantum-crypto">Post-quantum crypto</h3>
<blockquote>
<p>“The cryptopocalypse is nigh.” (due to quantum computing)</p>
</blockquote>
<p>Nope. Probably a prediction that can be safely shelved for a few years.</p>
<h3 id="cyber-insurance-changes-everything">Cyber. insurance. changes. everything.</h3>
<blockquote>
<p>The cyber insurance market will dramatically disrupt businesses in the next 12 months.
In 2016 many companies will turn to cyber insurance as another layer of protection, particularly as cyber attacks start mirroring physical world attacks.</p>
</blockquote>
<p>Looks like most companies still think insurance is <a href="https://www.sans.org/reading-room/whitepapers/analyst/bridging-insurance-infosec-gap-2016-cyber-insurance-survey-37062">inadequate risk mitigation</a>. According to the <a href="https://www.sans.org/reading-room/whitepapers/analyst/bridging-insurance-infosec-gap-2016-cyber-insurance-survey-37062">SANS survey</a>, only 33.5% of companies have cyber insurance. What’s more, <a href="http://www.partnerre.com/assets/uploads/docs/PartnerRe_Cyber_Liability_Trends_Survey_2016.pdf">83% of respondents to a PartnerRe survey</a> said that cyber insurance policies are only “sometimes” meeting the needs of insured companies. So, while it may be blossoming as a new “check the box” item, it’s nowhere near disrupting enterprise security strategies yet.</p>
<h3 id="mobile-app-exploitation">Mobile app exploitation</h3>
<blockquote>
<p>With the growing amount of malware and the vulnerabilities present in legitimate mobile apps, a major breach is bound to happen, potentially on a massive scale.</p>
</blockquote>
<p>While many of the predictions highlighted mobile as an attack vector in general, where they specifically got it wrong is in thinking that vulnerabilities in mobile apps would be exploited. Why expend the effort when malicious apps still work just fine? See: <a href="https://www.crowdstrike.com/blog/danger-close-fancy-bear-tracking-ukrainian-field-artillery-units/">the poisoned app Fancy Bear used to hack Ukrainian field artillery units</a>. There haven’t been any major corporate breaches directly tied to mobile malware, though I’ll concede it’s possible they just aren’t public. Mobile malware grew at a very healthy pace in 2016, however:</p>
<p><img src="/blog/img/new-mobile-malware-2016.png" alt="Chart of new mobile malware by McAfee Labs"></p>
<h3 id="wearables-as-the-new-sexy-attack-vector">Wearables as the new sexy attack vector</h3>
<blockquote>
<p>Initially, we doubt that a smartphone will be completely compromised by an attack through a wearables device, but we expect to see the control apps for wearables compromised in the next 12 to 18 months in a way that will provide valuable data for spear-phishing attacks.</p>
</blockquote>
<p>There’s been research on hacking them, but as far as I can find, no evidence that hacking wearables has ever been used as part of a corporate data breach. The only headlines in 2016 are variations of “Hackers targeting your wearables data?” or “Can Wearable Technology Threaten the Security of Your Biz?” so I’ll invoke <a href="https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines">Betteridge’s law of headlines</a> and say the answer is no.</p>
<h3 id="remote-controlled-cars">Remote-controlled cars</h3>
<blockquote>
<p>Attacks on automobile systems will increase rapidly in 2016 due to the rapid increase in connected automobile hardware built without foundational security principles.</p>
</blockquote>
<p>This didn’t happen. <a href="https://www.wired.com/2016/08/jeep-hackers-return-high-speed-steering-acceleration-hacks/">Seriously cool research</a>, but no reports in the wild — just because something can be hacked doesn’t mean it’s easily hacked.</p>
<div class="center">
	<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Why is it that every headline is &#39;product X easily hacked&#39;. Hacking a car was hard. Hacking is hard. It is good that it is hard.</p>&mdash; Chris Valasek (@nudehaberdasher) <a href="https://twitter.com/nudehaberdasher/status/805419828756496384?ref_src=twsrc%5Etfw">December 4, 2016</a></blockquote>
	<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div>
<h3 id="password-reuse-attacks-will-decline">Password reuse attacks will decline</h3>
<blockquote>
<p>Password reuse attacks will begin to decline… people are starting to adopt password managers… Advancements in biometrics are also helping the cause…most major Internet players are also adding two-factor authentication as a standard option</p>
</blockquote>
<p>All of those things might be true, but not enough to make a dent. These are the entities that publicly fell victim to password reuse attacks in this year alone: <a href="https://www.carbonite.com/en/cloud-backup/business/resources/carbonite-blog/carbonite-password-attack/">Carbonite</a>, <a href="https://threatpost.com/gotomypc-suffers-major-password-reuse-attack/118781/">Citrix GoToMyPC</a>, <a href="http://www.bbc.com/news/technology-38070985">Deliveroo</a>, <a href="https://techcrunch.com/2016/06/16/github-accounts-targeted-in-password-reuse-attack/">GitHub</a>, <a href="http://www.batblue.com/groupon-users-hit-password-reuse-attack/">Groupon</a>, <a href="http://www.vanityfair.com/news/2016/06/mark-zuckerberg-terrible-password-revealed-in-hack">Mark Zuckerberg</a>, <a href="http://arstechnica.com/security/2016/06/teamviewer-users-are-being-hacked-in-bulk-and-we-still-dont-know-how/">TeamViewer</a>, <a href="https://www.itgovernance.co.uk/blog/uk-national-lottery-password-reuse-attacks-what-are-the-chances/">U.K. National Lottery</a>. Further, Patrick Heim, Dropbox’s Head of Trust &amp; Security, said in September that <a href="http://www.cso.com.au/article/606531/99-compromised-user-accounts-come-from-password-reuse-cso-heavy-hitters-reveal/">“99% of compromised user accounts come from password reuse.”</a> Seems more like wishful thinking / “it’s-2016-how-is-this-still-happening?!” than a prediction.</p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>I’ve already started reading some of the 2017 predictions — being diplomatic, I’ll say I’m looking forward to seeing how the next year unfolds. I saw one list described as not containing any “wows,” which I think accurately pinpoints the problem with these lists in general.</p>
<p>It’s more an exercise in taking a provocative stance to grab attention and tell customers how your product addresses this important upcoming issue than a probability-weighted list of the actual threats enterprises will face in the upcoming year — users will keep clicking on things they shouldn’t, injection vulnerabilities are still in all the things, and not enough people use 2FA or encrypt their users’ credentials.</p>
<p>In conclusion, my ultimate 2017 prediction is that I’ll have plenty of content for an equivalent of this post next year.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>My 2016 Reading List</title>
            <link>https://kellyshortridge.com/blog/posts/2016-reading-list/</link>
            <pubDate>Mon, 26 Dec 2016 21:03:57 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/2016-reading-list/</guid>
            <description>Two years ago, I made my New Year’s resolution to read one fiction and one non-fiction book each month, and I’ve (mostly) kept it up since then. But towards the end of 2015, it suddenly occurred to me that the vast majority of the authors I read were men. So, I made my 2016 resolution to flip that ratio, while still sticking to the genres to which I gravitate, namely science fiction and popular science (with a bit of history thrown in).
Before I get to the list, I feel like these posts typically have a “what I learned from it” component. The only real difference I found was that there were far more female characters involved or female scientists highlighted — and for their research, not their sex.
I certainly didn’t lack for subject variety — I read about everything from gravitational waves, extinction theory, Antarctica, parasites, machine learning, and the women who were the first “computers” on the pop-sci side, to the absolutely-bonkers 14th century in Europe, the Great Migration, and the U.S. criminal justice system on the non-science history side. All the sci-fi novels I read were entertaining, thought-provoking, and full of rich world-building like any other good sci-fi — just with greater representation of women in the action.
I realize some people might think this is a useless exercise, or “reverse sexism.” I’d say it’s more in the model of Ruth Bader Ginsburg’s response to “When will there be enough women on the Supreme Court?” — when all nine justices are women. I went a year reading books that were ~90% by men (realistically, many more years than that), without it even registering. The goal should be to get to the point where it isn’t considered weird or “SJW” to read books 90% by women, as well.
The key point to me is visibility — the reality is that women, in many fields, struggle to gain visibility for their accomplishments. I’d assume for genres like popular science or science fiction that it’s similar to Hollywood, in that backing work by women is seen as a “gamble.” I’m aware that my individual Kindle purchase doesn’t amount to much, but if more people adopt a similar strategy and recommend these books to their avid-reader friends, then it starts to amount to something of significance.
Going forward, I’ll aim for closer to a 50/50 ratio and diversify my picks along other lines, such as ethnicity, religion, and gender identity. As anyone who reads a lot can likely attest, it’s always such a challenge to narrow down your next book selection from the thousands of options available. So to all who might potentially hate on this strategy, just view my method as a particularly socially-conscious selection engine.
Without further ado, here’s my reading list from this past year, including links to each book’s Amazon page so you can learn more — I have no illusions about being a masterful book reviewer, so just assume all of these books get a hearty thumbs up from me.
Non-Fiction A Distant Mirror: The Calamitous 14th Century by Barbara W. Tuchman
Antarctica: An Intimate Portrait of a Mysterious Continent by Gabrielle Walker
Black Hole Blues and Other Songs from Outer Space by Janna Levin
Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe by Lisa Randall
The New Jim Crow by Michelle Alexander
Rise of the Rocket Girls: The Women Who Propelled Us, from Missiles to the Moon to Mars by Nathalia Holt
The Sixth Extinction: An Unnatural History by Elizabeth Kolbert
This Is Your Brain on Parasites: How Tiny Creatures Manipulate Our Behavior and Shape Society by Kathleen McAuliffe
The Warmth of Other Suns: The Epic Story of America’s Great Migration by Isabel Wilkerson
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
Fiction Ancillary Justice (Imperial Radch Book 1) by Ann Leckie
Bloodchild: And Other Stories by Octavia E. Butler
Gravity’s Rainbow by Thomas Pynchon (the token male author ;)
The Handmaid’s Tale by Margaret Atwood
The Left Hand of Darkness by Ursula Le Guin
Ink by Sabrina Vourvoulias
Synners by Pat Cadigan
The Waves by Virginia Woolf
</description>
            <atom:content type="html"><![CDATA[<p>Two years ago, I made my New Year’s resolution to read one fiction and one non-fiction book each month, and I’ve (mostly) kept it up since then. But towards the end of 2015, it suddenly occurred to me that the vast majority of the authors I read were men. So, I made my 2016 resolution to flip that ratio, while still sticking to the genres to which I gravitate, namely science fiction and popular science (with a bit of history thrown in).</p>
<p>Before I get to the list, I feel like these posts typically have a “what I learned from it” component. The only real difference I found was that there were far more female characters involved or female scientists highlighted — and for their research, not their sex.</p>
<p>I certainly didn’t lack for subject variety — I read about everything from gravitational waves, extinction theory, Antarctica, parasites, machine learning, and the women who were the first “computers” on the pop-sci side, to the absolutely-bonkers 14th century in Europe, the Great Migration, and the U.S. criminal justice system on the non-science history side. All the sci-fi novels I read were entertaining, thought-provoking, and full of rich world-building like any other good sci-fi — just with greater representation of women in the action.</p>
<p>I realize some people might think this is a useless exercise, or “reverse sexism.” I’d say it’s more in the model of Ruth Bader Ginsburg’s response to “When will there be enough women on the Supreme Court?” — when all nine justices are women. I went a year reading books that were ~90% by men (realistically, many more years than that), without it even registering. The goal should be to get to the point where it isn’t considered weird or “SJW” to read books 90% by women, as well.</p>
<p>The key point to me is visibility — the reality is that women, in many fields, struggle to gain visibility for their accomplishments. I’d assume for genres like popular science or science fiction that it’s similar to Hollywood, in that backing work by women is seen as a “gamble.” I’m aware that my individual Kindle purchase doesn’t amount to much, but if more people adopt a similar strategy and recommend these books to their avid-reader friends, then it starts to amount to something of significance.</p>
<p>Going forward, I’ll aim for closer to a 50/50 ratio and diversify my picks along other lines, such as ethnicity, religion, and gender identity. As anyone who reads a lot can likely attest, it’s always such a challenge to narrow down your next book selection from the thousands of options available. So to all who might potentially hate on this strategy, just view my method as a particularly socially-conscious selection engine.</p>
<p>Without further ado, here’s my reading list from this past year, including links to each book’s Amazon page so you can learn more — I have no illusions about being a masterful book reviewer, so just assume all of these books get a hearty thumbs up from me.</p>
<h2 id="non-fiction">Non-Fiction</h2>
<p><a href="https://www.amazon.com/gp/product/B004R1Q296">A Distant Mirror: The Calamitous 14th Century</a> by Barbara W. Tuchman</p>
<p><a href="https://www.amazon.com/gp/product/B006R8PHW0">Antarctica: An Intimate Portrait of a Mysterious Continent</a> by Gabrielle Walker</p>
<p><a href="https://www.amazon.com/gp/product/B017QLQLQW">Black Hole Blues and Other Songs from Outer Space</a> by Janna Levin</p>
<p><a href="https://www.amazon.com/gp/product/B00T3CU1ZK/">Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe</a> by Lisa Randall</p>
<p><a href="https://www.amazon.com/gp/product/B0067NCQVU">The New Jim Crow</a> by Michelle Alexander</p>
<p><a href="https://www.amazon.com/gp/product/B013CATQPY">Rise of the Rocket Girls: The Women Who Propelled Us, from Missiles to the Moon to Mars</a> by Nathalia Holt</p>
<p><a href="https://www.amazon.com/gp/product/B00EGJE4G2">The Sixth Extinction: An Unnatural History</a> by Elizabeth Kolbert</p>
<p><a href="https://www.amazon.com/gp/product/B011H55MY0">This Is Your Brain on Parasites: How Tiny Creatures Manipulate Our Behavior and Shape Society</a> by Kathleen McAuliffe</p>
<p><a href="https://www.amazon.com/gp/product/B003EY7JGM">The Warmth of Other Suns: The Epic Story of America’s Great Migration</a> by Isabelle Wilkerson</p>
<p><a href="https://www.amazon.com/gp/product/B019B6VCL">Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy</a> by Cathy O’Neil</p>
<h2 id="fiction">Fiction</h2>
<p><a href="https://www.amazon.com/gp/product/B00BAXFDLM">Ancillary Justice (Imperial Radch Book 1)</a> by Ann Leckie</p>
<p><a href="https://www.amazon.com/gp/product/B008HALO0U">Bloodchild: And Other Stories</a> by Octavia E. Butler</p>
<p><a href="https://www.amazon.com/gp/product/B005CRQ3MA/">Gravity’s Rainbow</a> by Thomas Pynchon (the token male author ;)</p>
<p><a href="https://www.amazon.com/gp/product/B003JFJHTS">The Handmaid’s Tale</a> by Margaret Atwood</p>
<p><a href="https://www.amazon.com/gp/product/B003JFJHTS">The Left Hand of Darkness</a> by Ursula Le Guin</p>
<p><a href="https://www.amazon.com/gp/product/B009LL3YRU">Ink</a> by Sabrina Vourvoulias</p>
<p><a href="https://www.amazon.com/gp/product/B00GU38B4S">Synners</a> by Pat Cadigan</p>
<p><a href="https://www.amazon.com/gp/product/B006VXFGJU">The Waves</a> by Virginia Woolf</p>
]]></atom:content>
        </item>
        
        <item>
            <title>3 questions on cybersecurity that should be asked in the debates</title>
            <link>https://kellyshortridge.com/blog/posts/3-questions-cybersecurity-debates/</link>
            <pubDate>Fri, 30 Sep 2016 20:41:38 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/3-questions-cybersecurity-debates/</guid>
            <description>A citizen’s guide (or to sound smart at cocktail parties) While the first U.S. presidential debate included an open-ended, broad question about each candidate’s stance on cybersecurity, it accomplished little in helping citizens understand the candidates’ actual policy positions — nor why cybersecurity policies are relevant at all on the national scale. Saying “cybersecurity is important” for the U.S. today is like saying “having a military is important.”
But, it was also clear from the responses, such as bringing up Daesh’s use of the internet for recruitment or using the term “the cyber,” that there’s a lack of mainstream understanding of what cybersecurity actually means in a policy context (the argument against the term “cyber” and its derivatives is left for another day).
Here’s my best attempt at a definition as far as most citizens are concerned: cybersecurity at the national level includes 1) methods and resources to conduct geopolitical, intelligence-gathering, or offensive operations, and 2) methods and resources to defend against digital attacks that might threaten national security, individual liberties, or our economic viability.
Simply put, cybersecurity is the newest domain both for warfare and way of life, and thus has policy implications at a national scale. It would be a grave mistake to underestimate the importance of cybersecurity in geopolitical strategy and how much our “life, liberty and the pursuit of happiness” depends on it today.
So, without further ado, here are the three questions I think are worthy of being asked at the next two debates, as well as why you, as a citizen, should care about their answers. If you think a question is worth asking, vote for it by clicking the relevant link below. (Author note: after the conclusion of the debates, the question site was disabled – these now link to the relevant sections in the post).
Do you support federally mandated encryption backdoors? How would you improve protection of critical infrastructure from cyber attacks? What balance would you strike between cyber deterrence and offensive cyber operations on other nations? Do you support federally mandated encryption back doors? What we’d learn The way the candidates answer this question primarily will show to what degree they are aligned to constitutional rights plus the needs and desires of citizens vs. what “the powers that be” (primarily the FBI) say is necessary. Secondarily, it will show how open they are to listening to expert opinions in a particular area, as the overwhelming majority of cybersecurity professionals are vehemently and publicly against encryption backdoors.
The context There have been multiple encryption debates throughout the years, but the most recent focuses on encryption backdoors. Let’s start with a basic definition of encryption: it’s a “process of ciphering information in such a way that only authorized parties can read it.” It’s not hyperbole to say encryption is part of everything you use online — from online banking, online shopping, email, electronic medical records to Facebook chat. It is a fundamental part of what makes the internet economy as we know it work by adding a layer of trust.
Now, what’s a backdoor? A backdoor is an intentionally-placed method of bypassing a security mechanism in software, and is most often used to gain unauthorized access to something. In the context of encryption backdoors, it is specifically to obtain the “plaintext” or raw data. For example, encrypted data might look like “IUFdjxi/FI8&#43;2zv/WbEUq=M&#43;b…” while the plaintext says “I like pizza.”
By way of analogy, an encryption backdoor is similar to designing a physical lock with a master key that can always open the lock if needed. It’d be naive to assume that only the designated owner of the master key (for example, the government) could unlock the lock. Someone else could examine the way the lock is designed, deduce how the master key looks, and create one on their own.
The implications for an encryption backdoor are even worse than that analogy — at least in the physical lock case, there’s a slight barrier in that physical proximity is still needed to use the master key. In the digital case, a hacker doesn’t even have to move in order to use the master key across a bunch of different digital “locks” in any location in the world.
The FBI has been the most notable proponent of encryption backdoors, as highlighted in their battle with Apple earlier this year. Further, in 2007, a backdoor was discovered in the encryption algorithm supported by the NSA, which would have meant that companies that adopted the NSA’s recommended encryption algorithm would have developed software susceptible to attack or data interception. The argument in favor of encryption backdoors generally rests on the use of encryption by criminals or other bad actors, and the worry that encryption allows them to “go dark” (i.e. make it harder for someone to intercept or access their data).
However, the overwhelming majority of cybersecurity experts are against backdoors, primarily because there’s absolutely no way to ensure that these “trap doors” aren’t discovered by hackers, criminals or combative nation-states and used against American citizens, corporations, banks, utilities, troops or the government itself. It cannot be stressed enough that the “harsh technical realities make a [lawful access only] solution effectively impossible.”
Requiring encryption backdoors also would place a huge financial and resource burden on private enterprises by requiring software developers to design systems in a way that allows law enforcement to gain access as needed — or desired. Further, no matter how you decide who is granted access to the “master key” for these backdoors, they immediately become an attractive and lucrative target for cyberattack — potentially pouring many millions of dollars of extra risk onto the shoulders of private enterprises.
Why you should care If you’d include yourself among people who care about the following, you should be strongly against requiring — or even the existence of — encryption backdoors:
The First Amendment, free speech and freedom of the press The Second Amendment (encryption software has historically been classified as a munition — a military weapon — by the government, which means that citizens arguably have the right to use encryption to defend their personal data, without it being rendered ineffective due to a backdoor) The Fourth Amendment (keeping data safe in event of an “unreasonable search and seizure”) Criminal justice reform Eliminating discrimination Stopping people from stealing your personal data or assets Stopping people from stealing corporate data or assets Protecting critical infrastructure Protecting hospital systems and medical devices Keeping our troops safe and many other things, but I have a tendency to ramble as-is As you can recognize from the list, this isn’t a partisan issue.
Many people use the “I have nothing to hide” argument when first hearing about the encryption debate. That also happens to be irrelevant — given the prevalence of digital communications in our modern lives, encryption is essential in preserving our constitutional rights.
But it’s also way beyond that. As I mentioned above, encryption is used in nearly everything you do online these days, and not just your communications. Purposefully backdooring encryption leaves an open hole for hackers to get your healthcare data, personal pictures of your kids, drain your bank account, run up your credit card, or steal your identity. The vibrant, useful, trillion-dollar internet economy as we know it would not and could not exist without encryption.
As Matt Blaze, a leading expert on encryption, said in his recent testimony before Congress:
This is not simply a matter of weighing the desires for personal privacy and for safeguards against government abuse against the need for improved law enforcement… [Backdoors] will provide rich, attractive targets not only for relatively petty criminals such as identity thieves, but also for organized crime, terrorists, and hostile intelligence services. It is not an exaggeration to understand these risks as a significant threat to our economy and to national security.
How would you improve protection of critical infrastructure from cyber attacks? With the follow-up: “Would you include election and electronic voting systems under the definition of critical infrastructure?”
What we’d learn Each candidate would outline their plans for the federal government’s role in protecting critical infrastructure. Additionally, we’d hear each candidate’s proposals for addressing and solving some of the key challenges in protecting critical infrastructure in order to judge how much they recognize the threat and how effective they’d be in preserving our national security, economy and way of life.
The context Critical infrastructure, as per the Patriot Act, is defined as:
systems and assets, whether physical or virtual, so vital to the U.S. that the incapacity or destruction of such systems or assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters
The NIST Cybersecurity Framework suggests a host of industries fall under this label, including agriculture, water, public health, emergency services, government, defense, information &amp; telecommunications, energy, transportation &amp; shipping, banking &amp; finance, chemicals &amp; hazardous materials, post, national monuments &amp; icons and critical manufacturing.
Many would argue that the list should also include election and electronic voting systems, as they are a vital component of maintaining democratic elections (and I personally would agree). Particularly in light of the recent revelations that Russian actors hacked the DNC as well as the Illinois and Arizona election systems (and attempted to hack many more), the plea by information security experts to have the security of voting systems be taken more seriously is steadily gaining legitimacy.
The reason why federal-level protection of critical infrastructure from cyber attacks is up for debate is that, currently, the onus is primarily on the private sector to defend itself. However, the same isn’t true for physical threats such as potential terrorist attacks — should the owner of the World Trade Center have conducted their own anti-terrorism operations and had fighter jets ready to escort a hijacked plane? Of course not.
If the federal government is in charge of protecting national security, then it’s logical to suggest that they should also take the lead on all national security, including securing national infrastructure. However, given the complexities of our physical and virtual infrastructure, and consequently the large number of industries that fall under the critical infrastructure label, there is disagreement over the extent to which the federal government should help bolster their cybersecurity. We, as citizens, should hear what the candidates’ respective positions are on this important issue.
Additionally, the how to do it is potentially more subjective and could include anything from recommending minimal security standards (which is the default, albeit ineffective, strategy) to imposing fines on software vendors for vulnerabilities or conducting cyber deterrence (which I dig into more in the last question). I, for one, would like to know the candidates’ ideas on the “how to” as well.
Why you should care If you care about our national security, you should care about this question. It’s safe to say it’s a bipartisan desire for the local power plant not to blow up, to avoid a food crisis or not have our financial system come to a standstill.
You should care about each candidate’s specific answer to this question because the level of cybersecurity in critical infrastructure is, in general, alarmingly poor, which increases both the likelihood of a cyber attack on critical infrastructure and the devastation one could cause. While I don’t condone FUD (fear, uncertainty and doubt as a media strategy), I will say that it’s far better to act now to reduce the probability of a calamitous digital attack on our critical infrastructure than to keep our fingers crossed that it won’t happen.
There are a few cybersecurity challenges faced by these industries, though. First, there’s a massive shortage of talent, and practitioners generally go to the industries that can pay the most (like tech and financial services). Perhaps workers in declining industries could be given incentives to retrain with cybersecurity skills. Second, critical infrastructure systems are usually complex, a lot of infrastructure software is old, and it’s difficult to install or integrate security measures after the fact.
Third, a single private entity won’t have the same level of information about potential digital threats as the federal government, nor the resources to defend against every possible scenario. By leveraging the U.S. intelligence community’s data, private entities in critical industries could be given a “heads up” on potential threats and guidance on the tactics, techniques and procedures of groups likely to target them in a cyberattack.
What balance would you strike between cyber deterrence and offensive cyber operations on other nations? With the follow-up: “What role do you think cyber deterrence plays in cyberwarfare?”
What we’d learn The candidates’ answers to this question should reveal:
How they’d invest in and preserve our advantage in the “cyberwar” arena — from technology to human capital How they’d use offensive cyber operations — would it be covert geopolitical influencing or upfront display of capability? Would we be the first-movers on offense or focused on attacking back? The context The domains of warfare were traditionally Land, Sea, Air and Space, but Information Operations (i.e. the digital domain) became the fifth dimension for the U.S. Military in 1995. Since then, information operations, or “cyberwarfare” as dubbed by the media, has become a crucial component of military strategy due to the proliferation of digital systems globally and their importance in all areas of modern life.
The two main types of cyberwarfare are espionage and sabotage. Espionage is used for spying purposes to gain intelligence; for example, the hack of the Office of Personnel Management (presumably by China) was to gain intelligence on people who work for various U.S. government agencies. Sabotage is used to disrupt adversaries’ systems for geopolitical or military gain. For example, rather than conducting some sort of strike on Iran’s nuclear facilities, the U.S. leveraged its offensive cybersecurity capabilities to covertly disrupt Iran’s nuclear program in an attack later dubbed “Stuxnet.”
Cyberwarfare is particularly reliant on intelligence (part of why the NSA has expanded so much over the past two decades), and thus most operations tend to fly under the radar. It would reduce a government’s advantage to reveal capabilities or methods, since then adversaries could better thwart attacks or repackage the attack for their own use.
This highlights the difficulty of cyber deterrence. Being able to attribute cyber attacks to a specific nation-state requires revealing, in part, how you were able to figure out who did it. If you don’t present evidence, it can be dismissed as a baseless accusation, which isn’t great for geopolitical maneuvering. Even then, attribution is notoriously difficult since attackers can attempt to mask their digital tracks, including by making it appear that their attack originated from a different location or by using a different language than their own.
In any case, to dissuade adversaries from attacking us, the U.S. has to make it clear that the intelligence community will figure out who is behind any attacks against us, retaliate swiftly and inflict significant damage…all without revealing the extent of our capabilities.
Why you should care It is evident that the U.S. currently has a decisive advantage in the nation-state cybersecurity arena — anyone suggesting otherwise, as seen in this election, is misinformed. We began preparing for, and conducting, offensive cyber operations about a decade before others, giving us a significant head start.
Further, the dominance we have over global digital infrastructure is extremely difficult to replicate and that fact makes our cyber operations smoother to conduct. For example, as revealed by the Snowden leaks, the U.S. taps into undersea fiber optic cables that serve as the fundamental communication rails of the internet — giving access to any data that is transmitted over these cables.
To be clear, Russia, China and Iran all have highly intelligent and capable cybersecurity teams (to varying degrees of size and sophistication). But we can conduct offensive cybersecurity operations on a bigger scale. Not only can we perform equally sophisticated attacks, but we also possess a formidable information advantage to better craft attacks and anticipate attacks against us.
This doesn’t mean we’ll never be attacked, due to the aforementioned abilities of our adversaries — though the cyber deterrence strategy is meant to dissuade others from attacking us by showing our muscle. While we currently have superior offensive cybersecurity capabilities that give us a geopolitical advantage, this does not make us invulnerable to the potentially devastating effects of cyberattack against us by a capable nation-state.
On the other hand, an offensive operation presents the risk of being caught, which might be viewed as a declaration of war — thus leading to retaliation against us (which isn’t ideal). So, we have to be judicious in how we leverage our offensive cybersecurity capabilities to balance advancing our foreign policy goals with protecting our own national security.
Conclusion Do I think these questions will be asked at the debates? No (but fingers crossed). I don’t think there’s a sufficient public understanding of the multifarious policy issues presented by cybersecurity — largely because the media coverage of cybersecurity is notoriously terrible.
However, raising voter awareness of these issues still is critically important. Real change won’t happen if it’s just the information security or privacy community who is concerned…and it really shouldn’t just be them, since cybersecurity issues affect all citizens.
Cybersecurity’s importance in our nation’s ecosystem only will grow, so starting the discussion of these issues now means there’ll be a deeper consciousness of them among voters in the next election — and a greater ability of “the people” to ensure their rights and security are preserved as we march past the point of no return into digital dependence.
</description>
            <atom:content type="html"><![CDATA[<p><em>A citizen’s guide (or to sound smart at cocktail parties)</em>
<img src="/blog/img/code-murica-flag.jpg" alt="Murica flag, cyber edition"></p>
<p>While the first U.S. presidential debate included an open-ended, broad question about each candidate’s stance on cybersecurity, it accomplished little in helping citizens understand the candidates’ actual policy positions — nor why cybersecurity policies are relevant at all on the national scale. Saying “cybersecurity is important” for the U.S. today is like saying “having a military is important.”</p>
<p>But, it was also clear from the responses, such as bringing up Daesh’s use of the internet for recruitment or using the term “the cyber,” that there’s a lack of mainstream understanding of what cybersecurity actually means in a policy context (the argument against the term “cyber” and its derivatives is left for another day).</p>
<p>Here’s my best attempt at a definition as far as most citizens are concerned: cybersecurity at the national level includes 1) methods and resources to conduct geopolitical, intelligence-gathering, or offensive operations, and 2) methods and resources to defend against digital attacks that might threaten national security, individual liberties, or our economic viability.</p>
<p>Simply put, cybersecurity is the newest domain both for warfare and way of life, and thus has policy implications at a national scale. It would be a grave mistake to underestimate the importance of cybersecurity in geopolitical strategy and how much our “life, liberty and the pursuit of happiness” depends on it today.</p>
<p>So, without further ado, here are the three questions I think are worthy of being asked at the next two debates, as well as why you, as a citizen, should care about their answers. If you think a question is worth asking, vote for it by clicking the relevant link below. (Author note: after the conclusion of the debates, the question site was disabled &ndash; these now link to the relevant sections in the post).</p>
<ol>
<li><a href="#crypto-backdoors">Do you support federally mandated encryption backdoors?</a></li>
<li><a href="#critical-infra">How would you improve protection of critical infrastructure from cyber attacks?</a></li>
<li><a href="#deterrence-offense">What balance would you strike between cyber deterrence and offensive cyber operations on other nations?</a></li>
</ol>
<hr>
<p><img src="blog/img/bad-cyberart-05.jpg" alt="Code dripping over the White House"></p>
<h2 id="a-namecrypto-backdoorsado-you-support-federally-mandated-encryption-back-doors"><a name="crypto-backdoors"></a>Do you support federally mandated encryption back doors?</h2>
<h3 id="what-wed-learn">What we’d learn</h3>
<p>The way the candidates answer this question primarily will show to what degree they are aligned to constitutional rights plus the needs and desires of citizens vs. what “the powers that be” (primarily the FBI) say is necessary. Secondarily, it will show how open they are to listening to expert opinions in a particular area, as the overwhelming majority of cybersecurity professionals are vehemently and publicly against encryption backdoors.</p>
<h3 id="the-context">The context</h3>
<p>There have been multiple encryption debates throughout the years, but the most recent focuses on encryption backdoors. Let’s start with a basic definition of encryption: it’s a “process of ciphering information in such a way that only authorized parties can read it.” It’s not hyperbole to say encryption is part of everything you use online — from online banking, online shopping, email, electronic medical records to Facebook chat. It is a fundamental part of what makes the internet economy as we know it work by adding a layer of trust.</p>
<p>Now, what’s a backdoor? A backdoor is an intentionally-placed method of bypassing a security mechanism in software, and is most often used to gain unauthorized access to something. In the context of encryption backdoors, it is specifically to obtain the “plaintext” or raw data. For example, encrypted data might look like “IUFdjxi/FI8+2zv/WbEUq=M+b…” while the plaintext says “I like pizza.”</p>
<p>By way of analogy, an encryption backdoor is similar to designing a physical lock with a master key that can always open the lock if needed. It’d be naive to assume that only the designated owner of the master key (for example, the government) could unlock the lock. Someone else could examine the way the lock is designed, deduce how the master key looks, and create one on their own.</p>
<p>The implications for an encryption backdoor are even worse than that analogy — at least in the physical lock case, there’s a slight barrier in that physical proximity is still needed to use the master key. In the digital case, a hacker doesn’t even have to move in order to use the master key across a bunch of different digital “locks” in any location in the world.</p>
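<p>To make the plaintext vs. ciphertext distinction (and the master key problem) concrete, here is a minimal sketch in Python using the <code>cryptography</code> package’s Fernet recipe. The library choice and variable names are purely illustrative assumptions on my part, not a reference to any specific product or proposal: whoever holds the key, or any copy of it, can read everything, and without it the ciphertext is just noise.</p>
<pre><code># Minimal sketch of symmetric encryption (illustrative assumptions only;
# requires the third-party "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # whoever holds this key can decrypt
f = Fernet(key)

ciphertext = f.encrypt(b"I like pizza")
print(ciphertext)                # looks like random bytes, e.g. b'gAAAAAB...'

# Without the key the ciphertext is unreadable; with the key -- or any copy
# of it, such as a mandated "master key" -- decryption is trivial:
print(f.decrypt(ciphertext))     # b'I like pizza'
</code></pre>
<p>The entire policy debate boils down to who else holds a copy of that key: once a second copy exists for “lawful access,” anyone who steals or reconstructs it gets the same trivial decryption across every “lock” it was built into.</p>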
<p>The FBI has been the most notable proponent of encryption backdoors, as highlighted in their <a href="/blog/posts/apple-vs-fbi-privacy-inequality/">battle with Apple earlier this year</a>. Further, in 2007, a <a href="http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html">backdoor was discovered in the encryption algorithm supported by the NSA</a>, which would have meant that companies that adopted the NSA’s recommended encryption algorithm would have developed software susceptible to attack or data interception. The argument in favor of encryption backdoors generally rests on the use of encryption by criminals or other bad actors, and the worry that encryption allows them to “go dark” (i.e. make it harder for someone to intercept or access their data).</p>
<p>However, the overwhelming majority of cybersecurity experts are against backdoors, primarily because there’s absolutely no way to ensure that these “trap doors” aren’t discovered by hackers, criminals or combative nation-states and used against American citizens, corporations, banks, utilities, troops or the government itself. It cannot be stressed enough that the <a href="http://docs.house.gov/meetings/IF/IF02/20160419/104812/HHRG-114-IF02-Wstate-BlazeM-20160419-U3.pdf">“harsh technical realities make a [lawful access only] solution effectively impossible.”</a></p>
<p>Requiring encryption backdoors also would place a huge financial and resource burden on private enterprises by requiring software developers to design systems in a way that allows law enforcement to gain access as needed — or desired. Further, no matter how you decide who is granted access to the “master key” for these backdoors, they immediately become an attractive and lucrative target for cyberattack — potentially pouring many millions of dollars of extra risk onto the shoulders of private enterprises.</p>
<h3 id="why-you-should-care">Why you should care</h3>
<p>If you’d include yourself among people who care about the following, <em>you should be strongly against requiring — or even the existence of — encryption backdoors:</em></p>
<ul>
<li>The First Amendment, free speech and freedom of the press</li>
<li>The Second Amendment (encryption software has historically been classified as a munition — a military weapon — by the government, which means that citizens arguably have the right to use encryption to defend their personal data, without it being rendered ineffective due to a backdoor)</li>
<li>The Fourth Amendment (keeping data safe in event of an “unreasonable search and seizure”)</li>
<li>Criminal justice reform</li>
<li>Eliminating discrimination</li>
<li>Stopping people from stealing your personal data or assets</li>
<li>Stopping people from stealing corporate data or assets</li>
<li>Protecting critical infrastructure</li>
<li>Protecting hospital systems and medical devices</li>
<li>Keeping our troops safe</li>
<li>and many other things, but I have a tendency to ramble as-is</li>
</ul>
<p>As you can recognize from the list, this isn’t a partisan issue.</p>
<p>Many people use the “I have nothing to hide” argument when first hearing about the encryption debate. That also happens to be irrelevant — given the prevalence of digital communications in our modern lives, encryption is essential in preserving our constitutional rights.</p>
<p>But it’s also way beyond that. As I mentioned above, encryption is used in nearly everything you do online these days, and not just your communications. Purposefully backdooring encryption leaves an open hole for hackers to get your healthcare data, personal pictures of your kids, drain your bank account, run up your credit card, or steal your identity. <strong>The vibrant, useful, trillion-dollar internet economy as we know it would not and could not exist without encryption.</strong></p>
<p>As Matt Blaze, a leading expert on encryption, said in <a href="http://docs.house.gov/meetings/IF/IF02/20160419/104812/HHRG-114-IF02-Wstate-BlazeM-20160419-U3.pdf">his recent testimony before Congress</a>:</p>
<blockquote>
<p>This is not simply a matter of weighing the desires for personal privacy and for safeguards against government abuse against the need for improved law enforcement… [Backdoors] will provide rich, attractive targets not only for relatively petty criminals such as identity thieves, but also for organized crime, terrorists, and hostile intelligence services. It is not an exaggeration to understand these risks as a significant threat to our economy and to national security.</p>
</blockquote>
<hr>
<p><img src="blog/img/bad-cyberart-06.jpg" alt="Power lines, but with code on them for some reason"></p>
<h2 id="a-namecritical-infraahow-would-you-improve-protection-of-critical-infrastructure-from-cyber-attacks"><a name="critical-infra"></a>How would you improve protection of critical infrastructure from cyber attacks?</h2>
<p><em>With the follow-up: “Would you include election and electronic voting systems under the definition of critical infrastructure?”</em></p>
<h3 id="what-wed-learn-1">What we&rsquo;d learn</h3>
<p>Each candidate would outline their plans for the federal government’s role in protecting critical infrastructure. Additionally, we’d hear each candidate’s proposals for addressing and solving some of the key challenges in protecting critical infrastructure in order to judge how much they recognize the threat and how effective they’d be in preserving our national security, economy and way of life.</p>
<h3 id="the-context-1">The context</h3>
<p>Critical infrastructure, as per the Patriot Act, is defined as:</p>
<blockquote>
<p>systems and assets, whether physical or virtual, so vital to the U.S. that the incapacity or destruction of such systems or assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters</p>
</blockquote>
<p>The NIST Cybersecurity Framework suggests a host of industries fall under this label, including agriculture, water, public health, emergency services, government, defense, information &amp; telecommunications, energy, transportation &amp; shipping, banking &amp; finance, chemicals &amp; hazardous materials, post, national monuments &amp; icons and critical manufacturing.</p>
<p>Many would argue that the list should also include election and electronic voting systems, as they are a vital component of maintaining democratic elections (and I personally would agree). Particularly in light of the recent revelations that Russian actors <a href="https://www.wired.com/2016/07/heres-know-russia-dnc-hack/">hacked the DNC</a> as well as the <a href="https://www.washingtonpost.com/world/national-security/fbi-is-investigating-foreign-hacks-of-state-election-systems/2016/08/29/6e758ff4-6e00-11e6-8365-b19e428a975e_story.html">Illinois and Arizona election systems</a> (and <a href="http://abcnews.go.com/US/russian-hackers-targeted-half-states-voter-registration-systems/story?id=42435822&amp;cid=abcn_tco">attempted to hack many more</a>), the plea by information security experts to have the security of voting systems be taken more seriously is steadily gaining legitimacy.</p>
<p>The reason why federal-level protection of critical infrastructure from cyber attacks is up for debate is that, currently, the onus is primarily on the private sector to defend itself. However, the same isn’t true for physical threats such as potential terrorist attacks — should the owner of the World Trade Center have conducted their own anti-terrorism operations and had fighter jets ready to escort a hijacked plane? Of course not.</p>
<p>If the federal government is in charge of protecting national security, then it’s logical to suggest that they should also take the lead on <strong>all</strong> national security, including securing national infrastructure. However, given the complexities of our physical and virtual infrastructure, and consequently the large number of industries that fall under the critical infrastructure label, there is disagreement over the extent to which the federal government should help bolster their cybersecurity. We, as citizens, should hear what the candidates’ respective positions are on this important issue.</p>
<p>Additionally, the <em>how</em> to do it is potentially more subjective and could include anything from recommending minimal security standards (which is the default, albeit ineffective, strategy) to imposing fines on software vendors for vulnerabilities or conducting cyber deterrence (which I dig into more in the last question). I, for one, would like to know the candidates’ ideas on the “how to” as well.</p>
<h3 id="why-you-should-care-1">Why you should care</h3>
<p>If you care about our national security, you should care about this question. It’s safe to say it’s a bipartisan desire for the local power plant not to blow up, to avoid a food crisis or not have our financial system come to a standstill.</p>
<p>You should care about each candidate’s specific answer to this question because the level of cybersecurity in critical infrastructure is, in general, alarmingly poor, which increases both the likelihood of a cyber attack on critical infrastructure and the devastation one could cause. While I don’t condone <a href="https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt">FUD</a> (fear, uncertainty and doubt as a media strategy), I will say that it’s far better to act now to reduce the probability of a calamitous digital attack on our critical infrastructure than to keep our fingers crossed that it won’t happen.</p>
<p>There are a few cybersecurity challenges faced by these industries, though. First, there’s a massive shortage of talent, and practitioners generally go to the industries that can pay the most (like tech and financial services). Perhaps workers in declining industries could be given incentives to retrain with cybersecurity skills. Second, critical infrastructure systems are usually complex, a lot of infrastructure software is old, and it’s difficult to install or integrate security measures after the fact.</p>
<p>Third, a single private entity won’t have the same level of information about potential digital threats as the federal government, nor the resources to defend against every possible scenario. By leveraging the U.S. intelligence community’s data, private entities in critical industries could be given a “heads up” on potential threats and guidance on the tactics, techniques and procedures of groups likely to target them in a cyberattack.</p>
<hr>
<p><img src="blog/img/bad-cyberart-07.jpg" alt="AFB Central Control facility"></p>
<h2 id="a-namedeterrence-offenseawhat-balance-would-you-strike-between-cyber-deterrence-and-offensive-cyber-operations-on-other-nations"><a name="deterrence-offense"></a>What balance would you strike between cyber deterrence and offensive cyber operations on other nations?</h2>
<p><em>With the follow-up: “What role do you think cyber deterrence plays in cyberwarfare?”</em></p>
<h3 id="what-wed-learn-2">What we&rsquo;d learn</h3>
<p>The candidates’ answers to this question should reveal:</p>
<ol>
<li>How they’d invest in and preserve our advantage in the “cyberwar” arena — from technology to human capital</li>
<li>How they’d use offensive cyber operations — would it be covert geopolitical influencing or upfront display of capability? Would we be the first-movers on offense or focused on attacking back?</li>
</ol>
<h3 id="the-context-2">The context</h3>
<p>The domains of warfare were traditionally Land, Sea, Air and Space, but Information Operations (i.e. the digital domain) became the fifth dimension for the U.S. Military in 1995. Since then, information operations, or “cyberwarfare” as dubbed by the media, has become a crucial component of military strategy due to the proliferation of digital systems globally and their importance in all areas of modern life.</p>
<p>The two main types of cyberwarfare are espionage and sabotage. Espionage is used for spying purposes to gain intelligence; for example, the <a href="http://arstechnica.com/security/2015/06/epic-fail-how-opm-hackers-tapped-the-mother-lode-of-espionage-data/">hack of the Office of Personnel Management</a> (presumably by China) was to gain intelligence on people who work for various U.S. government agencies. Sabotage is used to disrupt adversaries’ systems for geopolitical or military gain. For example, rather than conducting some sort of strike on Iran’s nuclear facilities, the U.S. leveraged its offensive cybersecurity capabilities to covertly disrupt Iran’s nuclear program in an attack later dubbed <a href="https://en.wikipedia.org/wiki/Stuxnet">“Stuxnet.”</a></p>
<p>Cyberwarfare is particularly reliant on intelligence (part of why the NSA has expanded so much over the past two decades), and thus most operations tend to fly under the radar. It would reduce a government’s advantage to reveal capabilities or methods, since then adversaries could better thwart attacks or repackage the attack for their own use.</p>
<p>This highlights the difficulty of cyber deterrence. Being able to attribute cyber attacks to a specific nation-state requires revealing, in part, how you were able to figure out who did it. If you don’t present evidence, it can be dismissed as a baseless accusation, which isn’t great for geopolitical maneuvering. Even then, <a href="https://medium.com/@thegrugq/idle-thoughts-on-cyber-82170b2b7280#.k2d7gfrsr">attribution is notoriously difficult</a> since attackers can attempt to mask their digital tracks, including by making it appear that their attack originated from a different location or by using a different language than their own.</p>
<p>In any case, to dissuade adversaries from attacking us, the U.S. has to make it clear that the intelligence community will figure out who is behind any attacks against us, retaliate swiftly and inflict significant damage…all without revealing the extent of our capabilities.</p>
<h3 id="why-you-should-care-2">Why you should care</h3>
<p>It is evident that the U.S. currently has a decisive advantage in the nation-state cybersecurity arena — anyone suggesting otherwise, as seen in this election, is misinformed. We began preparing for, and conducting, offensive cyber operations about a decade before others, giving us a significant head start.</p>
<p>Further, the dominance we have over global digital infrastructure is extremely difficult to replicate and that fact makes our cyber operations smoother to conduct. For example, <a href="http://www.theatlantic.com/international/archive/2013/07/the-creepy-long-standing-practice-of-undersea-cable-tapping/277855/">as revealed by the Snowden leaks</a>, the U.S. taps into undersea fiber optic cables that serve as the fundamental communication rails of the internet — giving access to any data that is transmitted over these cables.</p>
<p>To be clear, Russia, China and Iran all have highly intelligent and capable cybersecurity teams (to varying degrees of size and sophistication). But we can conduct offensive cybersecurity operations on a bigger scale. Not only can we perform equally sophisticated attacks, but we also possess a formidable information advantage to better craft attacks and anticipate attacks against us.</p>
<p>Given our adversaries’ aforementioned abilities, this doesn’t mean we’ll never be attacked; the cyber deterrence strategy is meant to dissuade others from attacking us by showing our muscle. Our superior offensive cybersecurity capabilities give us a geopolitical advantage, but they do not make us invulnerable to the potentially devastating effects of a cyberattack by a capable nation-state.</p>
<p>On the other hand, an offensive operation carries the risk of being caught, which might be viewed as a declaration of war and thus lead to retaliation against us (which isn’t ideal). So, we have to be judicious in how we leverage our offensive cybersecurity capabilities, balancing our foreign policy goals against protecting our own national security.</p>
<hr>
<p><img src="blog/img/bad-cyberart-08.jpg" alt="Map of the USA but with code on it"></p>
<h2 id="conclusion">Conclusion</h2>
<p>Do I think these questions will be asked at the debates? No (but fingers crossed). I don’t think there’s a sufficient public understanding of the multifarious policy issues presented by cybersecurity — largely because the media coverage of cybersecurity is notoriously terrible.</p>
<p>However, raising voter awareness of these issues is still critically important. Real change won’t happen if the information security and privacy communities are the only ones concerned…and it really shouldn’t just be them, since cybersecurity issues affect all citizens.</p>
<p>Cybersecurity’s importance in our nation’s ecosystem will only grow, so starting the discussion of these issues now means there’ll be a deeper consciousness of them among voters in the next election — and a greater ability of “the people” to ensure their rights and security are preserved as we march past the point of no return into digital dependence.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Behavioral Models of InfoSec: Prospect Theory</title>
            <link>https://kellyshortridge.com/blog/posts/behavioral-models-infosec-prospect-theory/</link>
            <pubDate>Mon, 01 Aug 2016 20:20:59 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/behavioral-models-infosec-prospect-theory/</guid>
            <description>To those in the information security / cyber security industry, it’s an accepted truth that there exists a pernicious incentive structure that overwhelmingly puts the odds in the attacker’s favor. The consistent narrative is that defenders make irrational decisions and focus on the wrong problems while vendors peddle FUD and snake oil that not just fails to bolster the defensive cause, but inflicts ongoing harm.
But, I’ve seen less in the way of seeking to understand defenders’ irrational decision making patterns and why the industry is the way it currently is…and even less about how to fix this toxic feedback loop. So, armed with my modest background in behavioral economics from undergrad, I’ve decided to take a stab at examining the “why” and proposing some ways to twist these incentives in the defense’s favor.
My hope is that this kicks off a series where I examine different theories within behavioral economics against evidence within infosec. The tl;dr background on behavioral econ is that traditional economics views people as rational decision-making machines (i.e. “Homo economicus”) that can perfectly perform cost benefit analyses and choose an objectively optimal outcome.
Behavioral econ, in contrast, recognizes that our brains are wired in a way that has been optimal for evolution, so it measures how people actually behave vs. how they optimally behave. We have quirks in our thinking that result in us making “irrational” decisions, but for understandable reasons.
This post will cover the O.G. theory in behavioral econ, Prospect Theory, as the first of many (potential) theories to help explain some of the dynamics of the infosec market.
Table of Contents: What is Prospect Theory? Defense vs. Offense InfoSec Reference Points Empirical Examples from InfoSec Incentives in InfoSec Fixing Incentives Conclusion What is Prospect Theory? Prospect Theory is a theory in behavioral econ that helps explain how people make decisions between options that bear certain probabilities and risk. The main thesis in Prospect Theory is that people make decisions by evaluating potential gains and losses through the lens of probability, rather than looking at the final, “objective” outcome. This relies on the decision-maker setting a reference point against which they measure outcomes.
Let’s consider a simple example to get a better sense of what this means in practice, using data from the original paper on Prospect Theory:
Decision #1: A) 100% chance of receiving $3,000 vs. B) 80% chance of receiving $4,000, but a 20% chance of receiving nothing
A’s expected outcome is $3,000 while B’s is $3,200…but 80% of subjects choose option A because it represents a guaranteed gain. Homo Economicus would scoff at these silly people and choose B.
Decision #2: C) 100% chance of losing $3,000 vs. D) 80% chance of losing $4,000, but a 20% chance of losing nothing
C’s expected outcome is losing $3,000 while D’s is losing $3,200. Homo Economicus naturally chooses C, but turns out 92% of people choose D for having the small chance of losing nothing.
A standard Prospect Theory graph
People are inconsistent in their choices based on whether decisions result in a loss or gain, as well as how the decisions are framed. There are four key tenets resulting from Prospect Theory that I’ll examine with the lens of infosec:
Reference dependence: decision makers use a reference point to measure relative gains and losses Loss aversion: people really don’t like experiencing losses, and losses hurt 2.25x more than gains feel good Non-linear probability weighting: people tend to overweight small probabilities and underweight big ones, and they also like certainty Diminishing sensitivity: the farther an outcome is above or below the reference point, the less its marginal effect Defense vs. Offense Get ready for more terrible cyber art throughout the post
Through the lens of Prospect Theory, my own theory is that defenders operate in the “realm of losses” while attackers operate in the “realm of gains.” As shown above, people in the domain of losses tend to be more risk-seeking, while those in the gain domain tend to be risk averse. In fact, losses felt by those in the gain domain are overvalued by 3:1 relative to those in the loss domain. The further defenders get away from their reference point, the more they’ll opt for small probabilities of a big leap closer to it instead of more certain, incremental improvements — that is, become more risk-seeking and pay more attention to potential payoffs rather than probabilistic outcomes.
Defenders take awhile to readjust their point of reference to match the status quo, which can really screw up their decision-making process; if potential outcomes are computed relative to the reference point, an outdated reference point will reinforce risky decision-making as defenders keep trying to jump back up to it. Attackers, on the other hand, will quickly update their reference point to the status quo. Given their predilection towards risk aversion and emphasis on weighing the probability of different outcomes, attackers need a technical and informational advantage to feel confident in their decision.
InfoSec Reference Points Look, they’re pointing
In order to figure out the behavioral predilections of defenders and attackers within the infosec arena, we need to determine the reference points that guide their behavior. My theory on infosec reference points is the following:
Defenders’ reference point is a security posture in which they can only withstand set Z of attacks, do not experience any materially significant breaches (e.g. those requiring disclosure), and spend $X on products to meet minimum compliance standards: Domain of Losses Attackers’ reference point is successfully compromising a target for $X cost without being caught before achieving their goal with value $Y: Domain of Gains Therefore, we have the following conclusions on losses and gains for each party:
Defenders feel a loss when they are breached with set Z of attacks, experience a significant breach, or spend more on security products than the minimum needed to meet compliance requirements. The gain from spending less than $X to meet compliance standards is realistically trivial. The non-trivial gain is from successfully stopping attacks that are not included in set Z (i.e. those they assume they can’t withstand); for example, an advanced remote code execution exploit involving a sandbox escape, kernel privilege escalation and a payload that disables endpoint protection products. Attackers feel a loss whenever they are caught or when their cost of $X is greater than their outcome of $Y, and feel a gain if they either spend less than $X on an attack or have a greater outcome than $Y. Note, a gain here would include exploits that work across multiple platforms or malware that can be repackaged easily, since it’s reducing the marginal cost of $X for crafting each attack and is thus a superior use of the attacker’s development time. For example, an exploit for a design flaw, architectural weakness, or logic-based vulnerability is usually cross-platform, reliable (vs. memory corruption) and very likely will take longer to fix — all of which means it has a larger payoff for the time invested in its development. Empirical Examples from InfoSec ThreatButt: the #1 must-have, best-of-breed, military-grade enterprise cyber defense-in-derpth platform
It’s important to highlight some examples of “irrational” behavior within infosec as a frame of reference for general theory, specifically focusing on differences in adoption (and hype) of various defensive security products. Irrational can be a subjective term, so I mean it in both the “counter to one’s own benefit” way and the “most outside observers think this is illogical” way.
Let’s start with EMET, Microsoft’s Enhanced Mitigation Experience Toolkit, a free tool that helps prevent software exploitation on Windows. Installing it and configuring commonly used applications with ASLR, DEP and other countermeasures significantly increases the difficulty of successfully compromising an application. While there are no official statistics, it’s widely accepted that EMET adoption rates are very low, despite it being free and well-tested.
In the years following the initial release of EMET, some of its features and functionality slowly crept into mainstream operating system releases, where their efficacy forced attackers to move to Office macros — a decision that involved attackers accepting the risk of savvy users who wouldn’t enable the macros rather than investing time in developing and retooling exploits to work in a post-EMET world. This is a good example of attacker risk aversion; they prefer to go for the fluffier target that requires less fancy exploitation, but still has a wide impact. Similarly, Java historically made a fantastic target for attackers because of its uniformity. Attackers could simply write their attack once and reuse it, which made it appealing from a ROI perspective.
Two-factor authentication (2FA) is another example of a solution that isn’t “sexy” per se but should receive greater hype relative to its defensive impact. It’s a low cost solution that’s easily deployed (particularly relative to most security products), and meaningfully bolsters account security beyond just passwords. Yet, it’s taken 7 years to get to the point where it is being widely acknowledged as a standard tool to have in the security arsenal — and adoption still isn’t ubiquitous among the largest consumer-facing firms, despite how inexpensive and simple it is.
Just take a look at the list of the firms who do and don’t have 2FA to see how many notable companies don’t have it yet. And, among the financial services firms who don’t, it’s a somewhat solid bet that they do have a FireEye box, Bromium or some other anti-APT tech which is vastly more expensive and helps against much lower-probability attacks.
The rise of ransomware and how little has been done to preemptively stop its growth and potency is also perplexing. According to PhishMe, 93% of all phishing emails now contain ransomware. McAfee says there were nearly 1.2 million new ransomware kits in Q1 2016 alone, the total nearing 6 million. It’s an unsophisticated attack that can easily be conducted by the 13 year old in Romania using basic malware kits, presenting a high ROI to the attacker. But given the prevalence and impact of ransomware, it seems irrational that companies are not doing more to protect against it.
Part of this is cleverness by the attacker in making the ransom’s cost low enough to not cause their targets to take drastic measures, but high enough that over a big enough target base, it results in lots of cash against a one-time upfront cost they can amortize over the lifetime of the attack. However, it’s more likely an element of defense being slow to update their reference points; companies could still be adopting relatively low-cost solutions and strategies to better defend themselves against ransomware, such as email protection, filesystem canaries, or even just a better backup process. All three of those solutions would benefit any organization beyond just becoming more resilient against ransomware, and yet they remain some of the most “boring,” underlooked categories.
Canaries in general, in fact, are a smart idea. Yet at only 4-figures per box, they are criminally under-adopted relative to 6-figure anti-APT boxes. It’s pretty straightforward: set up something that looks like a juicy target for an attacker, and get alerted when there’s suspicious activity. It helps give you early breach detection, inform your threat model and better understand attacker behaviors, all for a reasonable price. But adoption is very far from ubiquitous. Unfortunately it doesn’t have hand-wavy technology that “stops” advanced attacks — it comes across more like a mouse trap with cheese than a sexy elaborate laser tripwire maze.
As a final example, application whitelisting is a highly effective, albeit mundane technology. Plenty of organizations are still being compromised with new executables running, something easily thwarted by whitelisting. However, there’s a lower probability of catching an “elite” attack, given it’s likely to exploit an application directly. Critics will say that whitelisting reduces flexibility and bears a non-trivial amount of upfront setup, which is fair until you consider how difficult “sexy” tech, commonly using kernel-level modules, is to implement.
Incentives in InfoSec Helps determine the direction of a cyber object from the observer
With the above as a reference, I’m going to walk through each of the four key tenets and examine their likely implications in infosec, and how they can explain the “irrational” decision making that many bemoan.
Reference Dependence While it’s (mostly) simple accounting for defenders to know how much is spent on compliance, it’s a lot harder to know your organization’s security posture. Attackers can rely on (mostly) simple accounting to tally their cost and probably guesstimate the value of a successful attack, particularly if it’s selling personal data for $X per user vs. a nation state calculating how much crippling an enemy’s nuclear facility is worth to national security. Defenders, in contrast, can’t tally their costs as they go.
Figuring out your security posture is complicated for a few reasons. First, there are no sufficient industry benchmarks for security health against which organizations can compare themselves. Second, it’s highly unlikely that organizations will have full situational awareness to know which attacks are working against them and which they’re successfully thwarting. Third, defenders aren’t always sure what the “spoils of war” are, i.e. what value an attacker gains from hacking them, from customer data, intellectual property even to something like carbon credits. When it’s difficult to know what’s at risk, it’s difficult to weigh risk.
And, updating the reference point is a slow process for defenders. If their reference point is their perception of their security posture from 2014, it’s now outdated by two years at the minimum, during which attackers assuredly developed new techniques. Even once the reference point is updated to the status quo, the uncertainty in measuring organizational security risk and health means the new reference point will be equally as fuzzy. Just think about the ransomware example; if the reference point were based on today’s most probable threats, adopting technologies to prevent it should be a top budget priority.
Attackers, however, are quickly updating their reference points and evolving their methods based on the true status quo rather than their prior perception. Because the reference point serves as the foundation for decision making under prospect theory, the fact that attackers have more timely and accurate reference points gives them a decisive advantage at stage 1 over defenders.
Loss Aversion We know that losses hurt 2.25x more than gains, and that attackers weigh losses 3x as much as defenders; to be a bit simplistic, the attacker’s “exchange rate” for gains and losses is therefore 1: 6.75. Defenders “just” need to make sure that for each additional dollar attackers spend towards breaching them, they’re getting less than $6.75 in additional value (I’ll discuss how defenders can do so in the last section).
As mentioned in the EMET example, attackers were probably inclined to switch to less arduous targets once it was released just based on the assumption that organizations would have adopted it, even though there wasn’t yet evidence of adoption. As a free tool, adopting it couldn’t present an easier, cost-effective opportunity for defenders to play into attacker’s loss aversion.
Non-linear Probability Weighting Both sides overweight small probabilities and underweight large ones. Defenders are predisposed towards following small probabilities of a better outcome (risk-seeking) while attackers will care more about certainty and shun options that have smaller probabilities of worse outcomes (risk-averse).
To feel confident in their abilities to pwn their target, attackers need a strong reference point and the ability to calculate the probabilities of different outcomes. The more information the attacker has about the target, the better they can predict probability, and the greater their technical abilities, the better they can minimize the probability of being caught. Consequently, playing with attackers’ sense of certainty is another tactic defenders can use.
In defensive decision making, it’s crucial to understand the impact and probability of an attack on your organization. There’s a reason why there’s been a collection of attempts to come up with a framework for information security risk-weighting — it’s vital, but an arguably unattainable goal. The variables are prohibitively multifarious, from the company’s industry, technology stack, business model, brand power, etc. to attacker motives, current malware landscape, or even geopolitical statuses.
It’s safe to assume that it’s an impossible task to enumerate all attacks and calculate each of their probabilities and impacts. Industry data is pragmatic since it provides a reasonable reflection on what attacks are most likely. There’s also some data to provide historical precedents on impacts; for example, there’s minimal impact to stock prices, but potentially longer-term impact to sales that ends up affecting stock prices (like in Target’s case). This still leaves the defender left to determine whether they’re robust enough to withstand these different types of attacks .
Now, remember that loss domain-ers will overweight small probabilities and my hypothesis that the only “gain” a defender can really have is stopping attacks that they did not think they could. This can easily support why information security is saturated with products that stop APT or “advanced” attacks, while companies are still getting popped with “basic” methods like phishing, simplistic web app vulnerabilities and outdated, repackaged malware. The tools I mentioned above, such as 2FA, canaries and whitelisting help stop the large-probability, quotidian attacks and thus don’t present an opportunity for a “gain.”
Such a limited potential for a gain facilitates greater emotional basis for action as well, such as Clausewitz’s “passionate hatred for the enemy.” It’s no wonder, then, that attribution is so popular while being functionally useless — at least defenders can have some respite that the culprits were found. But I believe it’s more than that; giving a “face” to the attackers provides a greater sense of certainty, however false that feeling might be. And if I’m generous, nicely bound reports on threat group “[clever noun describing the target group] &#43; [noun of Chinese-associated thing]” detailing TTPs might actually help defenders improve their probability weighting of what attacks they’re likely to incur.
Diminishing Sensitivity As defenders experience losses, they experience less “pain” for each additional instance. A big, acutely painful breach will more likely lead to action (of the risk-seeking kind) rather than death by a thousand paper cuts, which fully plays into diminishing sensitivity — each time a defender is hacked via a “stupid” bug, they’ll care less and less, so they’ll be less inclined to adopt security products that stop the repeated, lesser attacks (such as 2FA or canaries).
Another issue is that the outcome for defenders is often all or nothing. For example, if an attacker bypasses ASLR, the yield is 100% of the app; there’s no gray area where only part of the app is compromised, meaning from the defender’s point of view, it’s either a total loss or no loss. By this I mean, if an attacker has a 1/10 chance of guessing an address layout, the app is not 90% protected, and if an attacker guesses correctly, the app is 100% compromised. Thus, the impact of this 100% loss is the initial hit, and any subsequent hits don’t feel nearly as severe in comparison.
This disparity between losses is why incident response is such a lucrative business; when defenders are violently thrust deeper into the loss domain, they’re much more willing to spend whatever money necessary to get closer to their reference point again. This takes the form of expensive services or products that the IR providers say will help avoid this big, nasty pain they’re feeling…although this dynamic is often decried as predatory.
On the attacker side, achieving increasingly awe-inspiring levels of leetness loses its splendor after awhile; that is, there’s less motivation to strive for either an extra level of cost reduction or getting more value out of the attack. However, the initial gain leap can still be appealing, and is where I’d argue a lot of innovation happens…it’s just that there isn’t much incentive to continue to innovate.
This explains a few observations. First, that you commonly see the same attack being repackaged rather than completely new methods being used during a campaign. Second, wildly innovative, “great leap forward” vulnerability research is more common once some sort of new protection is developed and deployed (like ROP being used to work around a non-executable stack/heap), and less common when the status quo attacks can do just fine (like users plugging in shiny USBs they find in the parking lot).
What this also means, combined with attackers being more risk averse, is that reaching the next gain level will decreasingly justify the risk tradeoff. This is yet another benefit to the defense, since it can help deter ongoing campaigns even after an initial compromise — if you can up the cost of persistence, then developing tools for retaining system access on the target system will feel too risky relative to the lower gain payoff.
Fixing Incentives Now that I’ve tried explaining the why, it’s time to discuss how the balance of decision-making power can be shifted in favor of defense and some examples of tech that makes more sense to adopt. Clearly, defense is naturally predisposed to misjudging their real threat model, misallocating resources and miscalculating strategies, resulting in our current industry dystopia of a comically privileged offense, FUD marketing tactics, focus on thwarting sexier “advanced” attacks and a noxious romance with attribution.
Understanding you have a problem, what the problem is, and why you keep having it is step one. I’m not alone in using knowledge of behavioral econ to counter my human instincts towards suboptimal behavior (e.g. instant gratification monkey). So, I fully believe that defenders can leverage the knowledge of their weaknesses to correct their missteps and start leveraging their adversaries’ weaknesses against them.
If you’ve spoken to anyone in infosec with offensive experience, they’ll agree that “raising the cost of attack” is one of the most effective means of deterrence. I think re-framing it as “raising the stakes of attack” is more descriptive than cost, since it includes the notion of risk. The fact that attackers only care about their own outcome relative to the reference point, are extremely loss averse, prioritize certainty over a more valuable outcome and get less benefit out of successive gains all supports the idea of raising the stakes.
Defenders should prioritize efficiency when raising the stakes. Rather than focusing on less probable attacks, they should think about the commonalities between the technical and informational advantages that the spectrum of attackers possess. For example, a platform like Drawbridge Networks lets you detect and control lateral movement in an internal network, which could limit an attack’s impact in both more advanced attacks and common malware. Defenders often believe that cyber security products focused on countering “advanced” threats also counter more basic attacks, but that’s not always the case. It’s far simpler to raise the level of the lowest common denominator than try to stop each type of “sophisticated” method.
Eroding the informational advantage is the wisest move, since tackling the technical advantage is more of a cat-and-mouse game. “Silent” monitoring tech that gives visibility without informing the attacker can give defense the ability to respond quickly without the attacker realizing that they’ve been caught, so defenders can watch the attacker’s methods and gain valuable threat intelligence (the real kind). In contrast, technologies that use blocking are giving data to attackers that they can use to craft a better attack.
An effective technique that’s gaining some popularity is ensuring that the organization’s infrastructure isn’t static; attackers will have a substantially more difficult time attacking something that is constantly changing. Even more simplistically, setting up honey pots and other types of deception, like Thinkst’s Canary, can serve to foster uncertainty in the attacker as well as give defense the heads up that something nefarious is happening.
Defenders should also reduce their adversaries’ potential payoff in conjunction with raising the stakes. Having strict access control rules and a more segmented network means that a compromise of an individual machine doesn’t have much value, and attackers will have to expend more resources to get a bigger payoff. For example, deploying Duo Sec’s 2FA to end users reduces the value of their credentials by adding an extra hoop through which attackers must jump to illicitly access accounts.
But to counter their own weaknesses, defenders should take a data-driven approach — although data can have its flaws, it helps provide rational evidence of what the reference point should be. Having an ongoing picture of the “true” threat model may also encourage defenders to update their reference point more quickly, though it will still require some introspection to be aware of their bias towards being slow to change their views. One tech solution with this approach is Signal Sciences, which uses a data-driven approach to web app security by providing a continuously updated reference point of security posture in that area.
There also needs to be a better understanding in defense of how they define a loss. As I theorized earlier, right now it’s mostly “being breached,” and that may indeed have an immediate impact on the security practitioner’s job security. However, it’s probable that a nation state attacker will breach a company, exfiltrate some data for espionage purposes, and there will be no real effects felt by the company (particularly short-term). Enhancing the equation of probability * outcome with an improved understanding of the real impact different types of attackers has on the organization would meaningfully improve prioritization of what solutions to adopt from lens of what is bad from the organizational point of view vs. what is bad from an “objective” security point of view.
Conclusion Real hackers use this leet virtual reality module for their attacks
I really believe understanding the motivations behind this “irrational” defensive behavior is empowering. While even I tend to veer towards hyperbole when describing offense’s advantages, the offense isn’t invisible nor is their decision-making flawless, and I think Prospect Theory helps identify those vulnerabilities — so now the challenge is for defense to start exploiting them.
My hope is that rather than telling infosec defenders that they’re being stupid or irrational or that they’re totally crappy at their jobs, the industry can take a more empathetic approach and suggest strategies towards ameliorating counterproductive incentives. I don’t think that will eradicate FUD marketing tactics or snake oil products, but it probably can give solutions that actually help a fighting chance to make a difference.
In conclusion, let’s try to be more of a community and practice some collective mindfulness, and just maybe we can start fixing things.
</description>
            <atom:content type="html"><![CDATA[<p>To those in the information security / cyber security industry, it’s an accepted truth that there exists a pernicious incentive structure that overwhelmingly puts the odds in the attacker’s favor. The consistent narrative is that defenders make irrational decisions and focus on the wrong problems while vendors peddle FUD and snake oil that not just fails to bolster the defensive cause, but inflicts ongoing harm.</p>
<p>But, I’ve seen less in the way of seeking to understand defenders’ irrational decision making patterns and why the industry is the way it currently is…and even less about how to fix this toxic feedback loop. So, armed with my modest background in behavioral economics from undergrad, I’ve decided to take a stab at examining the “why” and proposing some ways to twist these incentives in the defense’s favor.</p>
<p>My hope is that this kicks off a series where I examine different theories within behavioral economics against evidence within infosec. The tl;dr background on behavioral econ is that traditional economics views people as rational decision-making machines (i.e. “Homo economicus”) that can perfectly perform cost benefit analyses and choose an objectively optimal outcome.</p>
<p>Behavioral econ, in contrast, recognizes that our brains are wired in ways that were optimal for evolution, so it measures how people actually behave vs. how they would behave optimally. We have quirks in our thinking that result in us making “irrational” decisions, but for understandable reasons.</p>
<p>This post will cover the O.G. theory in behavioral econ, Prospect Theory, as the first of many (potential) theories to help explain some of the dynamics of the infosec market.</p>
<h3 id="table-of-contents">Table of Contents:</h3>
<ol>
<li><a href="#what-is-prospect-theory">What is Prospect Theory?</a></li>
<li><a href="#defense-vs-offense">Defense vs. Offense</a></li>
<li><a href="#infosec-ref-points">InfoSec Reference Points</a></li>
<li><a href="#infosec-examples">Empirical Examples from InfoSec</a></li>
<li><a href="#infosec-incentives">Incentives in InfoSec</a></li>
<li><a href="#fixing-incentives">Fixing Incentives</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<h2 id="a-namewhat-is-prospect-theoryawhat-is-prospect-theory"><a name="what-is-prospect-theory"></a>What is Prospect Theory?</h2>
<p><a href="https://en.wikipedia.org/wiki/Prospect_theory">Prospect Theory</a> is a theory in behavioral econ that helps explain how people make decisions between options that bear certain probabilities and risk. The main thesis in Prospect Theory is that people make decisions by evaluating potential gains and losses through the lens of probability, rather than looking at the final, “objective” outcome. This relies on the decision-maker setting a reference point against which they measure outcomes.</p>
<p>Let’s consider a simple example to get a better sense of what this means in practice, using data from the original paper on Prospect Theory:</p>
<p>Decision #1: A) 100% chance of receiving $3,000 vs. B) 80% chance of receiving $4,000, but a 20% chance of receiving nothing</p>
<p>A’s expected outcome is $3,000 while B’s is $3,200…but 80% of subjects choose option A because it represents a guaranteed gain. Homo Economicus would scoff at these silly people and choose B.</p>
<p>Decision #2: C) 100% chance of losing $3,000 vs. D) 80% chance of losing $4,000, but a 20% chance of losing nothing</p>
<p>C’s expected outcome is losing $3,000 while D’s is losing $3,200. Homo Economicus naturally chooses C, but turns out 92% of people choose D for having the small chance of losing nothing.</p>
<p><img src="/blog/img/prospect-theory-01.jpg" alt="A standard Prospect Theory graph"><em>A standard Prospect Theory graph</em></p>
<p>People are inconsistent in their choices based on whether decisions result in a loss or gain, as well as how the decisions are framed. There are four key tenets resulting from Prospect Theory that I’ll examine through the lens of infosec:</p>
<ol>
<li><strong>Reference dependence</strong>: decision makers use a reference point to measure relative gains and losses</li>
<li><strong>Loss aversion</strong>: people really don’t like experiencing losses, and losses hurt 2.25x more than gains feel good</li>
<li><strong>Non-linear probability weighting</strong>: people tend to overweight small probabilities and underweight big ones, and they also like certainty</li>
<li><strong>Diminishing sensitivity</strong>: the farther an outcome is above or below the reference point, the less its marginal effect</li>
</ol>
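<p>To ground these four tenets before applying them to infosec, here’s a minimal Python sketch of a prospect-theory-style evaluation. The functional forms and parameters (alpha = 0.88 for diminishing sensitivity, lambda = 2.25 for loss aversion, gamma = 0.61 for probability weighting) are the commonly cited estimates from Tversky and Kahneman’s cumulative prospect theory work: illustrative defaults, not numbers derived in this post.</p>
<pre><code class="language-python">ALPHA = 0.88    # diminishing sensitivity: outcomes matter less the farther from the reference point
LAMBDA = 2.25   # loss aversion: losses hurt ~2.25x more than equivalent gains
GAMMA = 0.61    # probability weighting: overweight small p, underweight large p

def value(x):
    """Subjective value of an outcome x, measured relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (abs(x) ** ALPHA)

def weight(p):
    """Decision weight for a probability p (non-linear probability weighting)."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect(outcomes):
    """Subjective value of a gamble expressed as [(probability, outcome), ...]."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Decision #1: the sure $3,000 scores higher than the 80% shot at $4,000...
print(prospect([(1.0, 3000)]), prospect([(0.8, 4000), (0.2, 0)]))
# Decision #2: ...while the 80% shot at losing $4,000 beats the sure $3,000 loss.
print(prospect([(1.0, -3000)]), prospect([(0.8, -4000), (0.2, 0)]))
</code></pre>
<p>Homo Economicus, who just multiplies probabilities by dollar amounts, would pick B and C; the value and weighting curves above flip both choices, matching the survey results.</p>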
<h2 id="a-namedefense-vs-offenseadefense-vs-offense"><a name="defense-vs-offense"></a>Defense vs. Offense</h2>
<p><img src="/blog/img/bad-cyberart-01.jpg" alt="Toy soliders fighting on a keyboard"><em>Get ready for more terrible cyber art throughout the post</em></p>
<p>Through the lens of Prospect Theory, my own theory is that defenders operate in the “realm of losses” while attackers operate in the “realm of gains.” As shown above, people in the domain of losses tend to be more risk-seeking, while those in the gain domain tend to be risk averse. In fact, losses felt by those in the gain domain are overvalued by 3:1 relative to those in the loss domain. The further defenders get away from their reference point, the more they’ll opt for small probabilities of a big leap closer to it instead of more certain, incremental improvements — that is, become more risk-seeking and pay more attention to potential payoffs rather than probabilistic outcomes.</p>
<p>Defenders take a while to readjust their point of reference to match the status quo, which can really screw up their decision-making process; if potential outcomes are computed relative to the reference point, an outdated reference point will reinforce risky decision-making as defenders keep trying to jump back up to it. Attackers, on the other hand, will quickly update their reference point to the status quo. Given their predilection towards risk aversion and emphasis on weighing the probability of different outcomes, attackers need a technical and informational advantage to feel confident in their decision.</p>
<h2 id="a-nameinfosec-ref-pointsainfosec-reference-points"><a name="infosec-ref-points"></a>InfoSec Reference Points</h2>
<p><img src="/blog/img/bad-cyberart-02.jpg" alt="Arrows that look oh-so-cyber"><em>Look, they&rsquo;re pointing</em></p>
<p>In order to figure out the behavioral predilections of defenders and attackers within the infosec arena, we need to determine the reference points that guide their behavior. My theory on infosec reference points is the following:</p>
<ul>
<li><strong>Defenders’</strong> reference point is a security posture in which they can only withstand set Z of attacks, do not experience any materially significant breaches (e.g. those requiring disclosure), and spend $X on products to meet minimum compliance standards: Domain of Losses</li>
<li><strong>Attackers’</strong> reference point is successfully compromising a target for $X cost without being caught before achieving their goal with value $Y: Domain of Gains</li>
</ul>
<p>Therefore, we have the following conclusions on losses and gains for each party (a toy encoding follows the list):</p>
<ul>
<li><strong>Defenders</strong> feel a loss when they are breached with set Z of attacks, experience a significant breach, or spend more on security products than the minimum needed to meet compliance requirements. The gain from spending less than $X to meet compliance standards is realistically trivial. The non-trivial gain is from successfully stopping attacks that are not included in set Z (i.e. those they assume they can’t withstand); for example, an advanced remote code execution exploit involving a sandbox escape, kernel privilege escalation and a payload that disables endpoint protection products.</li>
<li><strong>Attackers</strong> feel a loss whenever they are caught or when their cost of $X is greater than their outcome of $Y, and feel a gain if they either spend less than $X on an attack or have a greater outcome than $Y. Note, a gain here would include exploits that work across multiple platforms or malware that can be repackaged easily, since it’s reducing the marginal cost of $X for crafting each attack and is thus a superior use of the attacker’s development time. For example, an exploit for a design flaw, architectural weakness, or logic-based vulnerability is usually cross-platform, reliable (vs. memory corruption) and very likely will take longer to fix — all of which means it has a larger payoff for the time invested in its development.</li>
</ul>
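<p>Here is one deliberately reductive way to encode those reference points, purely to make the $X / $Y / set Z bookkeeping concrete; it is not a claim about how any real defender or attacker actually scores themselves.</p>
<pre><code class="language-python">def defender_result(breached_by_expected_attack, significant_breach,
                    spend, compliance_minimum, stopped_unexpected_attack):
    """Toy scoring of the defender bullets above: most paths land in the loss column."""
    if breached_by_expected_attack or significant_breach:
        return "loss"
    if spend > compliance_minimum:
        return "loss"       # overspending past the compliance baseline also feels like a loss
    if stopped_unexpected_attack:
        return "gain"       # the one non-trivial gain: stopping an attack outside set Z
    return "at the reference point"

def attacker_result(cost_x, value_y, caught):
    """Toy scoring of the attacker bullets: getting caught or going unprofitable is a loss."""
    if caught or cost_x > value_y:
        return "loss"
    return "gain"
</code></pre>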
<h2 id="a-nameinfosec-examplesaempirical-examples-from-infosec"><a name="infosec-examples"></a>Empirical Examples from InfoSec</h2>
<p><img src="/blog/img/threatbutt-map.PNG" alt="Threatbutt&amp;rsquo;s Cyber Attribution Map"><em>ThreatButt: the #1 must-have, best-of-breed, military-grade enterprise cyber defense-in-derpth platform</em></p>
<p>It’s important to highlight some examples of “irrational” behavior within infosec as a frame of reference for general theory, specifically focusing on differences in adoption (and hype) of various defensive security products. Irrational can be a subjective term, so I mean it in both the “counter to one’s own benefit” way and the “most outside observers think this is illogical” way.</p>
<p>Let’s start with EMET, Microsoft’s Enhanced Mitigation Experience Toolkit, a free tool that helps prevent software exploitation on Windows. Installing it and configuring commonly used applications with ASLR, DEP and other countermeasures significantly increases the difficulty of successfully compromising an application. While there are no official statistics, it’s widely accepted that EMET adoption rates are very low, despite it being free and well-tested.</p>
<p>In the years following the initial release of EMET, some of its features and functionality slowly crept into mainstream operating system releases, where their efficacy forced attackers to move to Office macros — a decision that involved attackers accepting the risk of savvy users who wouldn’t enable the macros rather than investing time in developing and retooling exploits to work in a post-EMET world. This is a good example of attacker risk aversion; they prefer to go for the fluffier target that requires less fancy exploitation, but still has a wide impact. Similarly, Java historically made a fantastic target for attackers because of its uniformity. Attackers could simply write their attack once and reuse it, which made it appealing from a ROI perspective.</p>
<p>Two-factor authentication (2FA) is another example of a solution that isn’t “sexy” per se but should receive greater hype relative to its defensive impact. It’s a low cost solution that’s easily deployed (particularly relative to most security products), and meaningfully bolsters account security beyond just passwords. Yet, it’s taken 7 years to get to the point where it is being widely acknowledged as a standard tool to have in the security arsenal — and adoption still isn’t ubiquitous among the largest consumer-facing firms, despite how inexpensive and simple it is.</p>
<p>Just take a look at <a href="https://twofactorauth.org/">the list of the firms who do and don’t have 2FA</a> to see how many notable companies don’t have it yet. And, among the financial services firms who don’t, it’s a somewhat solid bet that they do have a FireEye box, Bromium or some other anti-APT tech which is vastly more expensive and helps against much lower-probability attacks.</p>
<p>The rise of ransomware and how little has been done to preemptively stop its growth and potency is also perplexing. According to PhishMe, 93% of all phishing emails now contain ransomware. McAfee says there were nearly 1.2 million new ransomware samples in Q1 2016 alone, bringing the total to nearly 6 million. It’s an unsophisticated attack that can easily be conducted by the proverbial 13-year-old in Romania using basic malware kits, presenting a high ROI to the attacker. But given the prevalence and impact of ransomware, it seems irrational that companies are not doing more to protect against it.</p>
<p>Part of this is cleverness by the attacker in making the ransom’s cost low enough to not cause their targets to take drastic measures, but high enough that over a big enough target base, it results in lots of cash against a one-time upfront cost they can amortize over the lifetime of the attack. However, it’s more likely an element of defense being slow to update their reference points; companies could still be adopting relatively low-cost solutions and strategies to better defend themselves against ransomware, such as email protection, filesystem canaries, or even just a better backup process. All three of those solutions would benefit any organization beyond just becoming more resilient against ransomware, and yet they remain some of the most “boring,” overlooked categories.</p>
<p>Canaries in general, in fact, are a smart idea. Yet at only 4-figures per box, they are criminally under-adopted relative to 6-figure anti-APT boxes. It’s pretty straightforward: set up something that looks like a juicy target for an attacker, and get alerted when there’s suspicious activity. It gives you early breach detection, helps inform your threat model, and helps you better understand attacker behaviors, all for a reasonable price. But adoption is very far from ubiquitous. Unfortunately it doesn’t have hand-wavy technology that “stops” advanced attacks — it comes across more like a mouse trap with cheese than a sexy elaborate laser tripwire maze.</p>
<p>As a final example, application whitelisting is a highly effective, albeit mundane technology. Plenty of organizations are still being compromised by attacks that run new executables, something easily thwarted by whitelisting. However, there’s a lower probability of catching an “elite” attack, given it’s likely to exploit an application directly. Critics will say that whitelisting reduces flexibility and bears a non-trivial amount of upfront setup, which is fair until you consider how difficult “sexy” tech, commonly using kernel-level modules, is to implement.</p>
<h2 id="a-nameinfosec-incentivesaincentives-in-infosec"><a name="infosec-incentives"></a>Incentives in InfoSec</h2>
<p><img src="/blog/img/bad-cyberart-03.jpg" alt="It&amp;rsquo;s a compass whose needle is pointing to &amp;ldquo;security&amp;rdquo;"><em>Helps determine the direction of a cyber object from the observer</em></p>
<p>With the above as a reference, I’m going to walk through each of the four key tenets and examine their likely implications in infosec, and how they can explain the “irrational” decision making that many bemoan.</p>
<h3 id="reference-dependence">Reference Dependence</h3>
<p>While it’s (mostly) simple accounting for defenders to know how much is spent on compliance, it’s a lot harder to know your organization’s security posture. Attackers can rely on (mostly) simple accounting to tally their cost and probably guesstimate the value of a successful attack, particularly if it’s selling personal data for $X per user vs. a nation state calculating how much crippling an enemy’s nuclear facility is worth to national security. Defenders, in contrast, can’t tally their costs as they go.</p>
<p>Figuring out your security posture is complicated for a few reasons. First, there are no sufficient industry benchmarks for security health against which organizations can compare themselves. Second, it’s highly unlikely that organizations will have full situational awareness to know which attacks are working against them and which they’re successfully thwarting. Third, defenders aren’t always sure what the “spoils of war” are, i.e. what value an attacker gains from hacking them, from customer data and intellectual property to something like carbon credits. When it’s difficult to know what’s at risk, it’s difficult to weigh risk.</p>
<p>And, updating the reference point is a slow process for defenders. If their reference point is their perception of their security posture from 2014, it’s now outdated by two years at the minimum, during which attackers assuredly developed new techniques. Even once the reference point is updated to the status quo, the uncertainty in measuring organizational security risk and health means the new reference point will be equally as fuzzy. Just think about the ransomware example; if the reference point were based on today’s most probable threats, adopting technologies to prevent it would be a top budget priority.</p>
<p>Attackers, however, are quickly updating their reference points and evolving their methods based on the true status quo rather than their prior perception. Because the reference point serves as the foundation for decision making under prospect theory, the fact that attackers have more timely and accurate reference points gives them a decisive advantage at stage 1 over defenders.</p>
<h3 id="loss-aversion">Loss Aversion</h3>
<p>We know that losses hurt 2.25x more than gains, and that attackers weigh losses 3x as much as defenders; to be a bit simplistic, the attacker’s “exchange rate” for gains and losses is therefore 1:6.75. Defenders “just” need to make sure that for each additional dollar attackers spend towards breaching them, they’re getting less than $6.75 in additional value (I’ll discuss how defenders can do so in the last section).</p>
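<p>The arithmetic behind that 1:6.75 figure, spelled out in a few lines; note that the 3x attacker multiplier is this post’s own simplification, so treat the threshold as a back-of-the-envelope heuristic rather than a measured constant.</p>
<pre><code class="language-python">LOSS_AVERSION = 2.25          # losses hurt ~2.25x more than gains feel good
ATTACKER_LOSS_MULTIPLIER = 3  # assumed above: attackers weigh losses ~3x as much as defenders

EXCHANGE_RATE = LOSS_AVERSION * ATTACKER_LOSS_MULTIPLIER  # 6.75

def attack_still_feels_worth_it(extra_cost, extra_value):
    """Does the marginal value outweigh the (heavily weighted) marginal cost to the attacker?"""
    return extra_value > EXCHANGE_RATE * extra_cost

# Defense wins the subjective calculus whenever each extra $1 of attacker cost
# buys the attacker less than ~$6.75 of extra value.
print(attack_still_feels_worth_it(extra_cost=1.0, extra_value=5.0))   # False
print(attack_still_feels_worth_it(extra_cost=1.0, extra_value=10.0))  # True
</code></pre>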
<p>As mentioned in the EMET example, attackers were probably inclined to switch to less arduous targets once it was released just based on the assumption that organizations would have adopted it, <em>even though there wasn’t yet evidence of adoption</em>. As a free tool, EMET couldn’t have presented an easier, more cost-effective opportunity for defenders to play into attackers’ loss aversion.</p>
<h3 id="non-linear-probability-weighting">Non-linear Probability Weighting</h3>
<p>Both sides overweight small probabilities and underweight large ones. Defenders are predisposed towards following small probabilities of a better outcome (risk-seeking) while attackers will care more about certainty and shun options that have smaller probabilities of worse outcomes (risk-averse).</p>
<p>To feel confident in their abilities to pwn their target, attackers need a strong reference point and the ability to calculate the probabilities of different outcomes. The more information the attacker has about the target, the better they can predict probability, and the greater their technical abilities, the better they can minimize the probability of being caught. Consequently, playing with attackers’ sense of certainty is another tactic defenders can use.</p>
<p>In defensive decision making, it’s crucial to understand the impact and probability of an attack on your organization. There’s a reason why there’s been a collection of attempts to come up with a framework for information security risk-weighting — it’s vital, but an arguably unattainable goal. The variables are prohibitively multifarious, from the company’s industry, technology stack, business model, brand power, etc. to attacker motives, current malware landscape, or even geopolitical statuses.</p>
<p>It’s safe to assume that it’s an impossible task to enumerate all attacks and calculate each of their probabilities and impacts. Industry data is pragmatic since it provides a reasonable reflection on what attacks are most likely. There’s also some data to provide historical precedents on impacts; for example, there’s minimal impact to stock prices, but potentially longer-term impact to sales that ends up affecting stock prices (like in Target’s case). This still leaves the defender to determine whether they’re robust enough to withstand these different types of attacks.</p>
<p>Now, remember that those in the loss domain will overweight small probabilities, and recall my hypothesis that the only “gain” a defender can really have is stopping attacks that they did not think they could. This can easily explain why information security is saturated with products that stop APT or “advanced” attacks, while companies are still getting popped with “basic” methods like phishing, simplistic web app vulnerabilities and outdated, repackaged malware. The tools I mentioned above, such as 2FA, canaries and whitelisting, help stop the large-probability, quotidian attacks and thus don’t present an opportunity for a “gain.”</p>
<p>Such a limited potential for a gain also facilitates a greater emotional basis for action, such as Clausewitz’s “passionate hatred for the enemy.” It’s no wonder, then, that attribution is so popular while being functionally useless — at least defenders can have some respite that the culprits were found. But I believe it’s more than that; giving a “face” to the attackers provides a greater sense of certainty, however false that feeling might be. And if I’m generous, nicely bound reports on threat group “[clever noun describing the target group] + [noun of Chinese-associated thing]” detailing <a href="https://en.wikipedia.org/wiki/Terrorist_Tactics,_Techniques,_and_Procedures">TTPs</a> might actually help defenders improve their probability weighting of what attacks they’re likely to incur.</p>
<h3 id="diminishing-sensitivity">Diminishing Sensitivity</h3>
<p>As defenders experience losses, they experience less “pain” for each additional instance. A big, acutely painful breach is more likely to lead to action (of the risk-seeking kind) than a death by a thousand paper cuts, which fully plays into diminishing sensitivity — each time a defender is hacked via a “stupid” bug, they’ll care less and less, so they’ll be less inclined to adopt security products that stop the repeated, lesser attacks (such as 2FA or canaries).</p>
<p>Another issue is that the outcome for defenders is often all or nothing. For example, if an attacker bypasses ASLR, the yield is 100% of the app; there’s no gray area where only part of the app is compromised, meaning from the defender’s point of view, it’s either a total loss or no loss. By this I mean, if an attacker has a 1/10 chance of guessing an address layout, the app is not 90% protected, and if an attacker guesses correctly, the app is 100% compromised. Thus, the impact of this 100% loss is the initial hit, and any subsequent hits don’t feel nearly as severe in comparison.</p>
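<p>A tiny illustration of why a 1-in-10 guess does not mean “90% protected,” assuming (simplistically) independent retries with no crashes or lockouts along the way:</p>
<pre><code class="language-python">def chance_of_full_compromise(p_per_attempt, attempts):
    """All-or-nothing defenses: one correct guess yields the whole app, so the
    number that matters is the chance of at least one success across retries."""
    return 1 - (1 - p_per_attempt) ** attempts

# A 1-in-10 guess at the address layout, retried a few times:
for n in (1, 5, 20):
    print(n, round(chance_of_full_compromise(0.10, n), 2))   # 0.1, 0.41, 0.88
# ...and any single success is a 100% loss for the defender, not a partial one.
</code></pre>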
<p>This disparity between losses is why incident response is such a lucrative business; when defenders are violently thrust deeper into the loss domain, they’re much more willing to spend whatever money necessary to get closer to their reference point again. This takes the form of expensive services or products that the IR providers say will help avoid this big, nasty pain they’re feeling…although this dynamic is often decried as predatory.</p>
<p>On the attacker side, achieving increasingly awe-inspiring levels of leetness loses its splendor after a while; that is, there’s less motivation to strive for either an extra level of cost reduction or getting more value out of the attack. However, the initial gain leap can still be appealing, and is where I’d argue a lot of innovation happens…it’s just that there isn’t much incentive to continue to innovate.</p>
<p>This explains a few observations. First, that you commonly see the same attack being repackaged rather than completely new methods being used during a campaign. Second, wildly innovative, “great leap forward” vulnerability research is more common once some sort of new protection is developed and deployed (like ROP being used to work around a non-executable stack/heap), and less common when the status quo attacks can do just fine (like users plugging in shiny USBs they find in the parking lot).</p>
<p>What this also means, combined with attackers being more risk averse, is that reaching the next gain level will decreasingly justify the risk tradeoff. This is yet another benefit to the defense, since it can help deter ongoing campaigns even after an initial compromise — if you can up the cost of persistence, then developing tools for retaining system access on the target system will feel too risky relative to the lower gain payoff.</p>
<h2 id="a-namefixing-incentivesafixing-incentives"><a name="fixing-incentives"></a>Fixing Incentives</h2>
<p><img src="/img/infosux.png" alt="Infosux comic"></p>
<p>Now that I’ve tried explaining the <em>why</em>, it’s time to discuss how the balance of decision-making power can be shifted in favor of defense and some examples of tech that makes more sense to adopt. Clearly, defense is naturally predisposed to misjudging their real threat model, misallocating resources and miscalculating strategies, resulting in our current industry dystopia of a comically privileged offense, FUD marketing tactics, focus on thwarting sexier “advanced” attacks and a noxious romance with attribution.</p>
<p>Understanding you have a problem, what the problem is, and why you keep having it is step one. I’m not alone in using knowledge of behavioral econ to counter my human instincts towards suboptimal behavior (e.g. <a href="http://waitbutwhy.com/2013/10/why-procrastinators-procrastinate.html">instant gratification monkey</a>). So, I fully believe that defenders can leverage the knowledge of their weaknesses to correct their missteps and start leveraging their adversaries’ weaknesses against them.</p>
<p>If you’ve spoken to anyone in infosec with offensive experience, they’ll agree that “raising the cost of attack” is one of the most effective means of deterrence. I think re-framing it as “raising the stakes of attack” is more descriptive than cost, since it includes the notion of risk. The fact that attackers only care about their own outcome relative to the reference point, are extremely loss averse, prioritize certainty over a more valuable outcome and get less benefit out of successive gains all supports the idea of raising the stakes.</p>
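<p>To see why “raising the stakes” bites harder than raw expected value suggests, here’s a sketch that reuses the illustrative prospect-theory curves from the earlier snippet. The payoff, cost-if-caught, and detection probabilities below are made-up numbers chosen only to show the shape of the effect.</p>
<pre><code class="language-python">ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61   # same illustrative parameters as the earlier sketch

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (abs(x) ** ALPHA)

def weight(p):
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def attack_appeal(payoff, cost_if_caught, p_caught):
    """Subjective value of an attack to a loss-averse attacker who overweights
    small probabilities of getting burned."""
    return weight(1 - p_caught) * value(payoff) + weight(p_caught) * value(-cost_if_caught)

# Nudging up the perceived chance of being caught erodes the attack's appeal
# long before the plain expected-value math would turn negative.
for p in (0.01, 0.05, 0.15, 0.30):
    print(p, round(attack_appeal(payoff=50_000, cost_if_caught=200_000, p_caught=p)))
</code></pre>
<p>With these toy numbers, the appeal flips negative at only a few percent chance of being caught, even though the break-even on pure expected value sits around a 20% chance; that gap is the whole argument for stacking uncertainty, deception and consequences onto the attacker.</p>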
<p>Defenders should prioritize efficiency when raising the stakes. Rather than focusing on less probable attacks, they should think about the commonalities between the technical and informational advantages that the spectrum of attackers possess. For example, a platform like Drawbridge Networks lets you detect and control lateral movement in an internal network, which could limit an attack’s impact in both more advanced attacks and common malware. Defenders often believe that cyber security products focused on countering “advanced” threats also counter more basic attacks, but that’s not always the case. It’s far simpler to raise the level of the lowest common denominator than try to stop each type of “sophisticated” method.</p>
<p>Eroding the informational advantage is the wisest move, since tackling the technical advantage is more of a cat-and-mouse game. “Silent” monitoring tech that gives visibility without informing the attacker can give defense the ability to respond quickly without the attacker realizing that they’ve been caught, so defenders can watch the attacker’s methods and gain valuable threat intelligence (the real kind). In contrast, technologies that block outright hand attackers data they can use to craft a better attack.</p>
<p>An effective technique that’s gaining some popularity is ensuring that the organization’s infrastructure isn’t static; attackers will have a substantially more difficult time attacking something that is constantly changing. Even more simply, setting up honey pots and other types of deception, like <a href="https://canary.tools/">Thinkst’s Canary</a>, can serve to foster uncertainty in the attacker as well as give defense the heads up that something nefarious is happening.</p>
<p>Defenders should also reduce their adversaries’ potential payoff in conjunction with raising the stakes. Having strict access control rules and a more segmented network means that a compromise of an individual machine doesn’t have much value, and attackers will have to expend more resources to get a bigger payoff. For example, deploying Duo Sec’s 2FA to end users reduces the value of their credentials by adding an extra hoop through which attackers must jump to illicitly access accounts.</p>
<p>But to counter their own weaknesses, defenders should take a data-driven approach — although data can have its flaws, it helps provide rational evidence of what the reference point should be. Having an ongoing picture of the “true” threat model may also encourage defenders to update their reference point more quickly, though it will still require some introspection to be aware of their bias towards being slow to change their views. One tech solution with this approach is Signal Sciences, which uses a data-driven approach to web app security by providing a continuously updated reference point of security posture in that area.</p>
<p>There also needs to be a better understanding in defense of how they define a loss. As I theorized earlier, right now it’s mostly “being breached,” and that may indeed have an immediate impact on the security practitioner&rsquo;s job security. However, it’s probable that a nation state attacker will breach a company, exfiltrate some data for espionage purposes, and there will be no real effects felt by the company (particularly short-term). Enhancing the probability * outcome equation with an improved understanding of the real impact different types of attackers have on the organization would meaningfully improve prioritization of which solutions to adopt, through the lens of what is bad from the organizational point of view vs. what is bad from an “objective” security point of view.</p>
<h2 id="a-nameconclusionaconclusion"><a name="conclusion"></a>Conclusion</h2>
<p><img src="/blog/img/bad-cyberart-04.gif" alt="A very stupid looking gif of &amp;ldquo;hacking&amp;rdquo;"><em>Real hackers use this leet virtual reality module for their attacks</em></p>
<p>I really believe understanding the motivations behind this “irrational” defensive behavior is empowering. While even I tend to veer towards hyperbole when describing offense’s advantages, the offense isn’t invincible, nor is their decision-making flawless, and I think Prospect Theory helps identify those vulnerabilities — so now the challenge is for defense to start exploiting them.</p>
<p>My hope is that rather than telling infosec defenders that they’re being stupid or irrational or that they’re totally crappy at their jobs, the industry can take a more empathetic approach and suggest strategies towards ameliorating counterproductive incentives. I don’t think that will eradicate FUD marketing tactics or snake oil products, but it probably can give solutions that actually help a fighting chance to make a difference.</p>
<p>In conclusion, let’s try to be more of a community and practice some collective mindfulness, and just maybe we can start fixing things.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>WTFunding: Bioinformatics &amp; Genetic Data</title>
            <link>https://kellyshortridge.com/blog/posts/wtfunding-bioinformatics-genetic-data/</link>
            <pubDate>Tue, 17 May 2016 19:35:10 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/wtfunding-bioinformatics-genetic-data/</guid>
            <description>
WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things, and it’s also pretty useful.
Table of Contents: So what is bioinformatics? What are the applications? What’s hindering adoption? Who cares? What are the risks? What’s the current scene? Conclusion
So what is bioinformatics?
Simply put, bioinformatics is software used to understand biological data, including (but not limited to) genomic sequences. What’s particularly cool about the field is that it brings in a bunch of expertise and methods from other areas, like statistics and computer science in order to perform analysis and glean insights from this bio data.
If you only care about the business stuff and don’t really care about how it works (and / or hate science), then skip ahead to the “What are the Applications?” section.
How are living things made?
The reference, for those who have had the misfortune of never watching “Jurassic Park.”
Let’s start with what constitutes life. The simplest unit of a living thing is a cell; some living things just have one cell, whereas others, like humans, have something like 37.2 trillion cells. Each cell has a nucleus, which is home to most of the genetic material within cells. Within the nucleus lie chromosomes, which are made of protein and DNA.
At a lower level you can think of these cells as being made of molecules, which are atoms grouped together by chemical bonds. Cells depend on three macromolecules, or molecules with upwards of 1,000 atoms, to determine the cell’s basic functioning and structure: DNA, RNA and proteins. The relationship between them is that DNA makes RNA, and RNA makes proteins (this is the central dogma of molecular biology). For the computer nerds reading, think of DNA like persistent storage, RNA like volatile memory and proteins like executables.
Zooming further in, genes are regions of DNA that serve as the recipe book, so to speak, for biological function. They’re what gives a living thing its biological traits — for example, I have the “blue eye” gene. For a sense of scale, there are 23 pairs of chromosomes within the human body that include over 20,000 individual genes that are made up of over 3 billion DNA base pairs.
So what are base pairs? The building blocks of each strand within the famous double helix of DNA are called nucleotides. Each of these nucleotide building blocks is made of either guanine, cytosine, adenine or thymine — which are shortened to C, G, A or T. But, these building blocks pair off to form base pairs, and only like the same partners: G always goes with C and A always goes with T (G-C and A-T). So, if you know the blocks in one strand of the double helix, you also know the blocks in the other.
As I said before, DNA makes RNA and RNA makes proteins, but how? DNA contains exons, which get converted to messenger RNA (mRNA), which delivers the message of which proteins are needed in a cell. The official way to describe exons is as “coding” for proteins, since like a computer program, they end up telling the cell what proteins to have.
Introns, in contrast, don’t code for proteins. Their relevance to this discussion is that they happen to also show where individual genes are separated. That isn’t to say that they are useless otherwise — they’re quite important for regulating how proteins show up in cells, but that’s not important for what I’ll be covering.
This is a good illustration of how all these things link: When you put all the genes within a living thing together, you get its genome. You might be familiar with the Human Genome Project, which is an international research body launched in 1990 dedicated towards mapping out the “human genome.” I have “human genome” in quotes because what they mapped out is an amalgamation of genomes — each human being has a unique genome with special variants in genes. They accomplished their goal in 2003, making it a warm and fuzzy example of the world banding together towards the common good.
The project also spurred a lot of innovation in the field. The way they were going about mapping the human genome was through DNA sequencing, the method for which at the time hadn’t evolved since the 1970s, other than getting a boost as computers became more powerful. But, concurrent with the start of the Human Genome Project, new sequencing methods, novelly called “next generation sequencing” methods, or NGS for short, began being developed and had gained a market foothold by the time the project finished. So, what exactly does DNA sequencing entail?
DNA Sequencing Process
Before you can figure out the purpose of specific genes, the DNA in question must first be sequenced. For simplicity’s sake I’m sticking with DNA sequencing in this section vs. other types of sequencing, because an already lengthy post would have to become a novel.
DNA sequencing just means determining the exact order of the building blocks (nucleotides) within the DNA, and more specifically the order of its bases — the Gs, Cs, As and Ts. I’ll walk you through a next-generation sequencing (NGS) process, since that’s been the catalyst for rapidly decreasing costs of sequencing, although there’s also the Sanger method, which is the “old school” method.
Library Preparation
1. DNA Samples
First you have to start with some sort of sample from which to extract DNA. It can be from blood, fossils, saliva or tissues, although saliva is now the go-to for collecting samples from humans given its ease of procurement.
2. Starting Material
But, there’s also a lot of other stuff in saliva (or any of the samples) that needs to be weeded out. Isolating DNA is done by “purifying” it from a sample through lysis, which disrupts the cell in order to release and separate its biological contents. There are physical and chemical methods for performing cell lysis, the coolest of which is probably sonication — harnessing ultrasonic sound energy to fragment the cell.
Once the cell is broken open, the non-DNA contents are removed via chemical methods, like adding detergents to remove the cell’s surface materials and enzymes that break down proteins and RNA. But those chemicals also have to be separated, so some extra chemistry magic happens (or a wash / spin cycle in a centrifuge) to get the DNA to bind together. Now the purified DNA can be extracted as starting material. As with many things in life, the more starting material, the better.
3. Fragmentation
Now, if we’re talking about human DNA, the genome is going to be really long and would thus take a really long time to sequence. So, the next step is cutting the DNA into smaller pieces for sequencing to help speed up the process. These fragments are officially termed “reads,” which I’ll use going forward. And fittingly, the collection of DNA fragments you generate is called a “library.”
But how do you determine how big you want these fragments to be? It largely depends on how the fragments will be sequenced and for what they’re being sequenced. Most NGS methods sequence up to 400 base pairs (“bp”) during one sequence cycle (called a “run”); the old-school Sanger method typically sequences up to 900bp.
The two typical approaches for this are either physical or enzymatic. One example physical method is acoustic shearing, which may be even cooler than sonication; the DNA sample is placed into a glass vial which is then subjected to acoustic energy that continually creates and collapses microbubbles. The process of growing these bubbles and causing them to implode creates shockwaves that have sufficient power to break down the sample DNA into random fragments. The power of the microbubbles fragments DNA pretty quickly with little loss of the DNA sample, and creates fragments ranging from 100bp to 5,000bp.
The other common physical method is nebulization, involving nebulizer devices which use compressed air to convert liquids into a fine mist. DNA is pushed through a small hole in the nebulizer, creating a mist which is then collected. The resulting fragments are typically 500bp to 3000bp, depending on how quickly the DNA is forced through the hole. This method is pretty quick, but can cause DNA to be lost in the process.
Enzymes capable of degrading DNA can also be used to fragment it into smaller pieces. Although it’s more consistent than physical methods, it also has the opportunity to alter the fragment by insertion or deletion. So, your method of choice really depends on your end goal.
4. Repairs &amp; Adapters
First, let’s take a look at how a DNA molecule looks, broken apart by its ribbons (or strands):
Each arrow represents one strand of DNA; I’ve laid this out linearly, but as you very likely know (and if the abundance of pictures in this post hasn’t reinforced it), the strands twist to form a double helix. The 3’ end of one of the DNA strands aligns with the 5’ end of its partner strand. The way a DNA strand is “read” is from 5’ to 3’. So, if one strand’s end sticks out farther than its partner’s (an overhang), the strands will be “repaired” by filling in the gaps (A with T and C with G) until the ends are even.
This is crucial for the next step, which is putting “adapters” onto the DNA strands. Adapters are short DNA molecules that have been synthesized specifically to help in the sequencing process. Adapters are generally provided in a kit, and the adapter sequences therein will be added to the 5’ and 3’ ends of every fragment within one library.
Some may help put “primers” into their correct place, while others may help the DNA fragment stick to a surface in what’s called “immobilization.” The primers have to match the beginning and the end of the DNA fragment, since they’ll serve as the guide for what DNA fragment is amplified in the next step. The final step in the library preparation, immobilization, just means that each single DNA molecule is attached to a bead, which is then anchored to some solid surface, like a glass plate.
Amplification
The polymerase chain reaction (PCR) is effectively used as a copy machine for a DNA sequence, or what’s known as “amplification.” This thinking is actually somewhat similar to the methods for improving image quality within satellite imagery that I discussed in my prior post. If you think of the target DNA as an object to be captured within a landscape, it’s far easier to take a bunch of quick pictures of it than take one big, detailed picture of the landscape and try to extract the object from it.
This process follows an exponential curve, too, since you’re making copies of the copies. After just 6 cycles, you have 64 copies of the target gene. After ten, you have 1,024. These copies then make up a “DNA colony” to be sequenced.
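To put that doubling arithmetic in throwaway-code terms, here is a toy Python sketch (it assumes perfectly efficient cycles, which real PCR runs are not):
for cycles in (6, 10, 30):
    print(cycles, "cycles:", 2 ** cycles, "copies")   # 6: 64, 10: 1024, 30: about 1.07 billion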
Sequence Reaction
Note: this is a sequencing UI from a TV show and thus should be disregarded entirely as resembling reality
NGS is also known as high-throughput sequencing, which just means that there’s tons more data that comes out of the sequencing process, and it’s due to parallelization. There are a few different sub-methods of NGS, the most common of which seems to be Illumina’s sequencing by synthesis (SBS) — though the others also depend on the sort of library preparation and amplification discussed above.
In SBS, the immobilized DNA is “washed” with one of the four nucleotide bases (either T, A, C or G) at a time to see which gets incorporated. For example, if the fragment’s template is GCGAATCG, and your wash is “A”, both of the A’s in the middle will incorporate the wash and the others will be washed away. Then, using the power of lasers and fluorescence, a machine can record which part of the fragment incorporated the base, showing _ _ _ A A _ _ _. Then, the “A” dye would be chemically removed so the next cycle can start, a “G” wash would be used, the laser magic happens, and the machine would record G _ G A A _ _ G, and so forth one nucleotide at a time until the sequence is complete.
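If it helps, here’s a toy Python sketch of that wash-and-record loop, using the same made-up template and the post’s simplified model where the wash letter matches the template letter directly (real SBS incorporates the complementary base):
template = "GCGAATCG"
observed = ["_"] * len(template)
for wash in "ACGT":                    # one nucleotide wash per cycle
    for i, base in enumerate(template):
        if base == wash:               # this position incorporates the wash...
            observed[i] = wash         # ...and the fluorescence step records it
    print(wash, "wash:", " ".join(observed))
# After the "A" wash this prints _ _ _ A A _ _ _; after all four washes, the full template.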
For frame of reference, these machines are really, really expensive. At the low end, Illumina’s Miniseq “benchtop” sequencer is about $50,000 and produces an output of up to 7.5GB in just 24 hours. But at the high end, its heavy-duty sequencer, the HiSeqX, is sold for $10 million and only in units of 10, producing an output of up to 1.8TB in less than three days. Even despite their hefty price-tag, it’s still far cheaper and faster than Sanger sequencing (the “old school” method).
Sequence Assembly
The process above generates sequencing “reads,” but not the genome itself — which is where sequence assembly comes in, and is also where data science starts to get involved. De novo sequence assembly, which seems to be the most commonly used, reconstructs an organism’s likely genome sequence based on the reads. The word “likely” here is important, since there’s no guarantee that it’s correct; it’s instead approximating the source of these fragments.
As you might guess, this leaves some room for error, which I’ll touch on in the “what’s hindering adoption” section. Errors can be in the form of false negatives, such as thinking a fragment is a mistake or just a repeat, or the fragments could just be linked up together in the wrong spots…plus there’s no guarantee that there weren’t errors in the sequencing process itself. To better illustrate why this process is so tricky, Wikipedia has a great analogy:
The problem of sequence assembly can be compared to taking many copies of a book, passing each of them through a shredder with a different cutter, and piecing the text of the book back together just by looking at the shredded pieces. Besides the obvious difficulty of this task, there are some extra practical issues: the original may have many repeated paragraphs, and some shreds may be modified during shredding to have typos. Excerpts from another book may also be added in, and some shreds may be completely unrecognizable.
One example algorithm used in de novo assembly is overlap-layout-consensus (OLC), which leverages a graph to represent the reads. First, overlaps between fragments are computed, and then each fragment is connected by an edge if there are indeed overlaps. The computing problem then becomes how to navigate through the graph in a way that still contains each fragment, which involves a lot of graph theory.
The other primary assembly algorithm is the de Bruijn graph (DBG), which also is based on graph theory. First, the reads are broken into smaller pieces of the same size (called k-mers). Then, the graph is created by ordering from the pieces that overlap at the beginning of the sequence to those that overlap at the end. This sounds linear, but isn’t.
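For a feel of the k-mer step, here’s a minimal, purely illustrative Python sketch that chops made-up reads into k-mers and links each k-mer’s prefix to its suffix; real assemblers layer error correction and graph simplification on top of this:
from collections import defaultdict

def de_bruijn_edges(reads, k=4):
    """Toy de Bruijn construction: nodes are (k-1)-mers, edges come from k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])   # prefix node points to suffix node
    return graph

for node, successors in sorted(de_bruijn_edges(["CATCATATC", "TCATAGT"]).items()):
    print(node, "->", sorted(successors))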
The image below should help visualize each algorithm and compare how they differ: To move on, let’s assume for now you’ve done a pretty good job sequencing and assembling the DNA in question. It’s not just there to be admired, but to gain something useful out of it, which leads into sequence analysis.
An example of a DNA sequence and foreshadowing of my “what are the risks” section
Sequence Analysis
Sequence analysis is where the startup software solutions addressing bioinformatics begin to appear. This is the process of taking the sequenced DNA and applying meaning to it — determining and reporting what the genes are and what their purpose is.
Homology Search
“Homology searching” is a fancy way of saying looking for similarity between sequences, and is step one (at least to try) towards getting to the end goal of understanding the genes you’ve just sequenced. “Homology” itself means shared ancestry between genes in different species — after all, all species evolved from the same ancestors. So, when sequences between two distinct species are similar, you can argue that there’s homology.
If you are able to search a database of known sequences to see where they might be similar to your sequence, then you can much more easily identify certain genes and their functions, as well as speculate on evolutionary relationships. In an ideal scenario, if the results of your homology searches cover all the genes you need identified, and they already describe what the genes are for, then you’ve arrived at your end-result. You’ve also saved yourself a lot of other potential steps, which are described right after this.
So how do researchers compare sequences? It involves a lot of data science and also necessitates having a rigorous dataset with which to compare. The more genes are identified on other species, such as mice, the greater ability there is to search human DNA sequences for similar genes.
The most commonly used tool for homology search is BLAST, for “Basic Local Alignment Search Tool,” which is favored due to its speed, at the expense of some accuracy. For those familiar with statistics, it’s a heuristic algorithm. The way it compares a query sequence to a sequence database is by finding “high-scoring pairs,” or HSPs for short, which represent overlaps that seem statistically significant. The speed comes from creating “words” within a sequence, which are generally 11 letters long in DNA sequences, and running the search on each word, narrowing down which HSPs fit as the search moves through the letters. This constitutes a substitution matrix, and would look something like this in BLAST:
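Separately from that matrix, the seeding idea itself is easy to sketch in toy Python (the sequences are made up, and this is not BLAST itself; real BLAST extends and scores these seed hits into HSPs):
def word_hits(query, subject, w=11):
    """Find exact w-letter word matches between a query and a database sequence."""
    index = {}
    for i in range(len(subject) - w + 1):
        index.setdefault(subject[i:i + w], []).append(i)   # index every word in the subject
    hits = []
    for j in range(len(query) - w + 1):
        for i in index.get(query[j:j + w], []):
            hits.append((j, i))        # (position in query, position in subject)
    return hits

print(word_hits("CATCATATCAGTCATAGT", "GGCATCATATCAGTCATAGTAA"))   # [(0, 2), (1, 3), ..., (7, 9)]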
While BLAST is the current market leader, improving upon it is clearly an area of interest within academia. Like in many data science projects, making an algorithm both faster and more accurate is the holy grail. For projects with lots of sequence data, BLAST becomes computationally expensive — and given the explosion of sequence data in recent years, researchers have developed and proposed alternatives that potentially offer greater efficiency and ameliorate computational cost concerns. I could publish an academic journal on all the approaches proposed, but I’ll explain one I find particularly neat and badassly-named, GHOSTX.
GHOSTX gets its speed advantage by analyzing both length and match score — the algorithm will only ingest sequences that are both close to the appropriate length of the target sequence and deemed to be statistically similar content-wise. To oversimplify a bit, it’s able to do this by using suffix arrays, an example of which is below:
This is much more efficient for searching, since now you just have an array with pointers to text snippets (suffixes) that are sorted in alphabetical order. So, what’s being stored isn’t “A$,” “ATA$” and so forth, but “7” or “5”, that is, what’s stored is only the code for where in the list the suffix lies. In other words, you now have an index for all the different suffixes, making it easy to find where your sequence matches others in the database.
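The core of that index is surprisingly small in code; here’s a deliberately naive, purely illustrative Python sketch on a made-up sequence (real implementations use much faster construction algorithms):
def suffix_array(text):
    """Positions of all suffixes of text, sorted alphabetically (naive version)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

seq = "GATTACA$"    # the "$" marks the end of the sequence
for pos in suffix_array(seq):
    print(pos, seq[pos:])   # prints 7 $, 6 A$, 4 ACA$, 1 ATTACA$, 5 CA$, 0 GATTACA$, ...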
Using these various methods, even if you find a matching result for a gene, there still may be insufficient data on what the gene does (i.e. lacking gene annotation, which I’ll describe in a bit). In those cases, and others in which homology search doesn’t complete the puzzle, you’ll have to pursue the following steps in addition.
Gene Identification
If the homology search fails, or for whatever reason you just don’t want to perform a homology search, gene identification is the next step. What you’ll be looking for is the region of DNA that specifically codes for its function, i.e. codes for protein. These are called open reading frames (ORFs), which you can think of as stretches of nucleotide triplets (again, made of A, T, C or G), and researchers use them to initially identify regions that probably show what the gene is.
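As a purely illustrative example of the idea (not how real gene finders work), here’s a naive Python scan for “start codon, then triplets until a stop codon” in the three forward reading frames, run on a made-up sequence:
STOP_CODONS = {"TAA", "TAG", "TGA"}

def naive_orfs(seq, min_codons=3):
    """Very naive ORF scan: ATG ... stop, forward reading frames only."""
    orfs = []
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        start = None
        for idx, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = idx                          # remember the start codon
            elif codon in STOP_CODONS and start is not None:
                if idx - start >= min_codons:
                    orfs.append("".join(codons[start:idx + 1]))
                start = None
    return orfs

print(naive_orfs("CCATGAAATTTGGGTAACC"))   # ['ATGAAATTTGGGTAA']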
Fancy data science can be involved in this step, too, as gene prediction (which is just the predictive form of gene identification). There are three primary methods of doing so, which include statistical, comparative or empirical methods (i.e. homology, which I went through above).
As an example of a statistical method, gene prediction programs using neural networks can be trained on the organism whose DNA is in question and then applied to the target sequence. Hidden Markov Models (HMMs) also use training data to return the most likely structure of a gene, which is useful if there’s no gene in a sequence or only a partial gene.
Comparative methods use DNA from another species to help predict gene function, since there’s a lot of overlap in how our genes work. The common example is using mouse DNA to determine genes in human DNA; after all, we’re a lot more like a mouse than we are a plant. In fact, 95% of genes that actually code for a biological function match one for one between mice and humans. The theory behind this is that evolutionary pressures led to a general, optimized blueprint for how mammals should function.
However, both these methods still require experimentation to verify that the gene prediction is accurate…at least for now, until these methods (particularly statistical) become more accurate.
Translation &amp; Database Search
Once you have identified your gene, whether through homology search or gene identification, you need to translate that gene into its proteins — the stuff that brings to life whatever the genes have blueprinted. These translators are pretty easily found on various academic websites. For example, the sequence I’ve been using in my examples throughout, “CATCATATCAGTCATAGT,” by pure luck happens to have an ORF (the region that actually codes for proteins). It’s in “ATGACTGATATGATG,” which is in the “opposite” DNA strand of my sequence (specifically, the opposite of “CATCATATCAGTCATAGT”).
Once you have your ORFs, you can search them against protein databases to see if there’s already data on them. If there is, it’ll help enormously in the next and crucial step, gene annotation, since you’ll have a head start on knowing what purpose the gene serves.
Gene Annotation
Gene annotation is the process in which a specific gene is assigned a specific functional or physical feature. This ideally is largely automated and just enhanced with human expertise, because it would take a very long time for one person to manually annotate a DNA sequence. The functional features generally are drawn from a specific vocabulary to form an ontology, or a formal method of naming and defining things.
There are two components of annotation: the gene’s structure and its function. Structure mostly just determines which regions code for proteins, while functional annotation records the gene’s biological and biochemical functions. Its function just means what proteins are there, or what proteins are hypothetically there (such as predicted by gene finding programs), which determines how the gene influences an organism’s operations. The gene is also given a name, whether unknown or known based on being identical to a previously documented gene.
Automated annotation uses data science in order to help predict the relevant gene annotations, like a gene’s functions. Bayesian networks and similar statistical methods learn from prior annotations, but they don’t help with finding new gene functions. Automated annotation certainly helps speed up the process, but it’s basically impossible to be accurate on the whole genome, so manual annotation is still needed to clean up any errors or fill in the gaps.
Annotation is one of those things that sounds really easy to do (“just write down what the gene does!”), but is actually really tricky in process. The quality of annotation is largely dependent on the quality of the process before it. If the gene prediction was faulty, or if the sequencing shoddy, it will result in greater difficulty and / or error in annotation.
Upload &amp; Stop, Collaborate and Listen
This is not the sort of DNA data storage I mean.
Once the DNA has been sequenced and annotated, it has to be stored…which sounds simple enough, but isn’t quite. In a brief crossover to my industry, security and compliance is actually pretty important when considering biological storage; after all, someone’s genome is extremely private information (though admittedly a mouse probably doesn’t care if their genomic data gets hacked).
Adding to the complexity of storage, there are various types of databases for genomic data — some that contain empirical genomic data, predicted genomic data or structural data. It might also have just the raw sequencing data, or data that’s been cleaned up a bit. Some are publicly available, but some aren’t. The databases also have to be able to facilitate search across sequences, too.
While databases may be a bit passé in other areas of tech, this is actually a reasonably hot area within bioinformatics. Making it easy for customers to search and combine different types of genomic data, such as via an API, is hugely useful in bioinformatics applications, but even helping them manage their data is a plus as well, such as through versioning control (since this sort of data is always being updated). Some startups are also focused on unifying this data with other sorts of medical data, such as patient outcomes, which helps streamline their customer’s processes considerably by not requiring them to visit lots of different databases and tools.
Collaboration is another key part of the bioinformatics end-stage. For example, when designing a pharmaceutical drug, the R&amp;D team needs to collaborate with the clinical trials team, who needs to collaborate with the clinical team in order to maintain a complete feedback loop. So, bioinformatics software is also about optimizing workflows to produce more robust results — which may sound ridiculously outmoded to techies at the cutting edge, but, well, there it is.
The O.G. thought leader.
Now, with these query-able databases of annotated genes (i.e. your sequenced DNA is handy and you know more or less what the random letters mean in a biological sense), you can get to work on using them towards a variety of use cases.
What are the applications?
While genes really shouldn’t be thought of based on what diseases they cause, the most obvious applications of bioinformatics revolve around discerning better treatments of diseases. The ability to map out genes and relevant mutations of complex and pervasive diseases such as cancer, diabetes or those that cause infertility could help research in which drugs or other forms of therapy might alleviate — or even eliminate — them.
With that lens, there are three major areas that can benefit the most from bioinformatics: pharmaceuticals, personalized medicine and genetic testing.
Pharmaceuticals
Pharmaceutical companies can leverage bioinformatics for a variety of applications, from target discovery and drug design to improving the efficacy of existing drugs.
Pharmacogenomics studies how people’s responses to drugs are affected by their genes, with the goal of a more tailored approach to pharmaceuticals. You also might hear the term “pharmacogenetics,” which is typically used interchangeably with pharmacogenomics but does have a slight difference. Pharmacogenomics involves looking for differences between people at the genetic level that can explain drug response, while pharmacogenetics involves looking for a genetic reason for why there is a specific drug response. So think of pharmacogenomics as a whole-genome approach while pharmacogenetics deals with just one interaction between a drug and genes.
Right now most medications take a “one size fits all” approach, when in reality a person’s genetic makeup may determine how beneficial a drug is or what side effects they experience. In the future, this means doctors could analyze your genome against a variety of drugs for a specific condition to optimize your therapy — part of personalized medicine, which I’ll discuss in greater detail below.
This type of “predictive” approach would be a big benefit to patients, as they would theoretically no longer have to suffer through a trial and error period that can sometimes come with harmful reactions. Everything from the type of drugs to the dose amount could be tailored based on your predicted reaction to the medication. And, when the data on your response based on this predicted approach is recorded, it would help improve prescription algorithms, resulting in a huge data-network-effect.
For example, at least one in ten Americans takes antidepressants. Which antidepressant and dosage works for some people vs. others currently is more art than science. Further, these medications can often come with adverse side effects that reduce the patient’s quality of life — from insomnia to emotional numbness — or even make their depression worse. If instead their genome could be used to accurately predict which drug and how much of it would improve their depression the most, or at least cause them to suffer the least, their outcomes would be far more quickly and painlessly reached.
So how would pharmaceutical companies reach this stage? Genetic data could be used to isolate a particular protein related to a disease and conduct research towards finding a drug effective against this protein. Or, they could use genetic data to better understand receptors (such as a protein) so they know which would be the best target for their drug molecule. There’s also plenty of room for improvement with existing drugs; oftentimes drugs are effective against conditions for reasons that aren’t well understood, and bioinformatics could help determine why that is.
Ultimately, the real dream for the future of pharmaceuticals is a combination of bioinformatics-informed drug development with a feedback loop of patient outcomes to continually inform and improve drug design. This would likely serve as the cornerstone of personalized medicine, which leads us to…
Precision / Personalized Medicine
In 2030, scientists will discover the key to all health issues is giving someone a puppy.
“Personalized” or “precision” medicine is all about tailoring treatment and prevention based on an individual’s unique characteristics, including their genes. It’s unrealistic (or so my 2016 brain believes) to assume that in 2100 AD there will be a unique drug for each person and their conditions, but it is realistic to assume, even in the nearish future, that populations can be segmented in a way to substantially improve treatment. This isn’t just about drugs, either — it extends to medical devices along with prevention of diseases, too.
Part of this nearish future is improving the clinical trials process, the experiments performed with human participants to test the efficacy and safety of treatment methods. As discussed in the pharmaceutical example above, having a record of genetic compositions of potential trial participants can help improve the decision-making behind who should be included in the trial. With the clinical trials market estimated to reach $72 billion by 2020, you can see why having the ability to more carefully select trial participants in order to optimize clinical trial outcomes is worthwhile.
In a somewhat extreme case, Estonia is using a government-driven approach for furthering along these initiatives — it’s collecting DNA of all its citizens to create a genetic biobank. While right now the use case is primarily genomic research, farther down the line, having a strong sense of the average genetic “posture,” so to speak, of the Estonian population could help inform public health policy and population-wide personalized medicine.
Here in the United States, Obama created the Precision Medicine Initiative (PMI) in January 2015 within the Health and Human Services (HHS) department, primarily led by the National Institutes of Health (NIH). Its goal is simply to do away with the aforementioned “one size fits all” approach to improve medical outcomes for patients. It’s still in its early innings and has a somewhat slow timeline; in February of this year, NIH announced that it will study the genomes of one million volunteers by 2019.
The hope is the PMI will help kickstart precision medicine both by collecting a large swath of data as well as through developing improved analysis capabilities. It’s likely that everyday citizens won’t see the fruits of personalized medicine until quite a bit after 2019, as personalized treatments will require research, development and testing time, even once the genetic data collection and analysis methods improve. The presumable safety regulations around personalized medicine also may cause snags and delays, which I’ll discuss a bit more in the next section.
Genetic Testing
Genetic testing arguably must serve as the basis for any personalized medicine initiative, as only by knowing an individual’s genes can any treatment plans be tailored. A genetic test is simply a test that identifies either the person’s genes or changes in their genes (including proteins). There are currently a few uses for genetic tests, or at least use cases leveraged in marketing.
Parents or couples considering becoming parents can use genetic testing to evaluate what sort of genes they will pass along to their offspring, generally with the goal of identifying genetic disorders. For couples pursuing in vitro fertilization (IVF), genetic testing can also find defects within the embryos created via IVF before they are implanted into the carrier, which increases the embryo’s viability.
Individuals can test their own susceptibility to disorders and see what sort of mutated genes they have, or use it as a diagnostic test if they have a currently unidentified ailment; this, in theory, helps individuals take preventative actions to stave off diseases or illnesses later in life. Or, more humorously, as I heard from a friend, you can tell a sibling that you have less Neanderthal DNA than they do.
While most genetic tests are currently based on DNA, longer-term there will likely be tests involving RNA and proteins. As discussed in the first section, DNA is the blueprint; this means it’s potentially a decent predictor of genetic risks, but not great at measuring the body’s current state for diagnostic purposes.
Genome Editing
While this isn’t a major application (yet), it’s pretty cool and a company doing it (Intellia) just IPO’d, so I want to touch on it briefly. Turns out you can also leverage DNA sequences and knowledge of genes’ functions in order to modify the DNA sequence of an organism, such as a virus.
The key tech enabler here has been CRISPR, or “clustered regularly interspaced short palindromic repeats,” which actually is part of a bacterial immune system. To use a reverse analogy from my industry, it’s a bit like most anti-virus software, where “signatures” of viruses are kept around so the system knows how to defend against it in the future.
The next part is using CRISPR-associated proteins, or “Cas,” that will actually chop virus DNA in half like a pair of scissors upon confrontation, the result being that the virus can no longer replicate.
This is pretty cool stuff, and while most of the research leveraging this technology is towards curing diseases, this could make the “designer baby” fears a reality, or even facilitate genetically engineered bioweapons (more on the latter in the “risks” section).
What’s hindering adoption?
In the future, apparently we’ll be featureless and wear latex suits with our genomes printed on them.
Now that sequencing costs have decreased, what are the other barriers to progress? A common theme that’s emerging from these WTFunding pieces is that being able to manage and analyze a lot of complex data turns out to be really hard…and bioinformatics is no different.
While it’s now far cheaper to generate genomic data, there’s still no guarantee that it’s accurate. Since gene annotation is still performed by humans, at least to some degree, there’s still plenty of room for human error — not to mention oftentimes the annotations are simply incomplete. You can play a “fun” game if you want and keep backing up the chain I walked you through earlier to spot the numerous chances for error to be incorporated, sequence assembly and sequencing itself being the most likely candidates.
You might have also caught while reading the “how it works” part that the algorithms for various processes require a lot of computational resources. Obviously the past decade has seen enormous strides in the availability of computing power, but it’s still reasonably time and cost prohibitive given the sheer size of genomic data. Someone else has already done the math on the size of the human genome in digital terms, and it’s roughly 200GB of raw sequencing data because, as explained before, you need a bunch of “reads” of the sequence.
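Rough, assumption-laden arithmetic gets you to the same ballpark: with roughly 3.2 billion bases, ~30x read coverage, and about two bytes per base once the per-base quality score is counted, you land near that figure. A quick sanity-check in Python (all three numbers are assumptions, not measurements):
bases = 3.2e9          # approximate length of the human genome
coverage = 30          # each position is read ~30 times so errors can be averaged out
bytes_per_base = 2     # roughly one byte for the base call plus one for its quality score
print(bases * coverage * bytes_per_base / 1e9, "GB")   # ~192 GB of raw reads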
On the data management side, a big question is how do you make genomic data easily accessible and combinable with other types of health data (such as patient outcomes), since that’s a key to unlocking a larger addressable market. There are startups addressing precisely this issue, in figuring out how to sensibly combine different data types and link them up in an intuitive manner, but it’s safe to say that data harmony hasn’t yet been reached industry-wide.
The analysis portion specifically presents two big challenges. First, in a fragmented market for tools and algorithms and other analysis methods, how do you figure out what to use? Second, how do you know you’re not just seeing patterns in data that aren’t actually there? Many analysis tools are written with a single application in mind, and from what I gather, they aren’t very well-written, either. So organizations that could potentially benefit from bioinformatics may not be willing to dip their toe in the water until a discernable market with user-friendly software emerges.
And on the genetic testing side, the main questions are: Who pays for the tests? Who advises individuals on their test results? How does the data easily get to health care providers? While I’ll get into the incentive problems within genetic testing later, part of the problem currently is that genetic tests are still somewhat expensive for individuals. Will the government pay? Will health insurance providers? It presents a bit of a chicken and egg problem, given patients need to be genetically tested in order to gain benefits from tailored pharmaceuticals or personalized medicine, but neither of those benefits is yet in a mature enough stage to incentivize people taking the tests.
Who cares?
Pillz on ya billz
VCs love numbers, so I’m going to present the big, high-level ones first:
Global pharmaceutical industry = ~$1 trillion annually
U.S. pharma industry = ~$400 billion annually
Global healthcare industry = ~$3 trillion annually
So combined, we’re looking at a $4 trillion global market, which should assuage any VC’s fears of insufficient market size since grabbing 1% of the market gets you $40 billion in revenue. For reference, Pfizer has just about 5% of pharma market share and has a market capitalization of just over $200 billion.
But, the clever VC says, the initial markets will just be those with the highest pain points, not the entire industry! Those top therapy classes would be oncology (cancer), diabetes and mental health. So here are more numbers:
Cancer drug spending in the U.S. = ~$42 billion annually (in 2014)
Direct medical costs of diagnosed diabetes in the U.S. = ~$176 billion annually (in 2013)
Direct cost of mental illness in the U.S. = ~$147 billion annually (in 2009)
VCs should still be quite happy, since combined (and given inflation), this is likely nearing a $400 billion market, and just in the U.S. alone. With the 1% market share goal, you’re still looking at $4 billion in revenue today.
Obviously, it isn’t just VCs who care about the big, shiny numbers. The aforementioned pharma companies likely get cartoon-dollar-sign-eyes contemplating the ability to charge more for genomic-data-driven drugs tackling these illnesses. Medical providers could also pad their pockets by leveraging personalized medicine to differentiate from the competition and thus create the ability to markup prices.
But on a less cynical note, there’s the real chance that genome-based personalized treatments won’t be solely a marketing shtick and actually result in significantly better outcomes for patients. So anyone suffering from an illness could potentially see benefits from personalized medicine. Further, there are many illnesses where the current understanding of what causes them can be summed up by ¯\_(ツ)_/¯, and bioinformatics could potentially lead to breakthroughs that would give hope of proper treatment and prevention to those afflicted.
What are the risks?
And we all know how well Jurassic Park turned out
There’s a pretty clear incentive problem on the pharmaceutical side. Pharma companies’ goal is to have more people buy their drugs. One way to incentivize more people to buy your drugs is by recommending certain drugs based on a person’s genetic profile. In a world in which genetic testing was performed by an altruistic, neutral third party, people might only be prescribed the drugs they need. In a world in which pharma companies have massive budgets, particularly for marketing, it isn’t difficult to imagine that they might “sponsor” genetic testing recommendations to ensure that their drugs are recommended.
As with any industry with a growing reliance on data-driven and increasingly automated approaches, there’s the potential for error as well. Relying too much on data models could mean that there is less caution than today towards prescribing treatments, which could result in disastrous “black swan” type events. Though, as I mentioned, it’s far more probable that healthcare providers will instead commit the error of seeing patterns that don’t actually exist, the implications of which could range from unnecessary and costly treatments to wrong and costly treatments that set back the patient’s progress, or even harm them.
Taking a look at another risk angle, the recent Theranos debacle is now a cautionary tale for medical startups. Many of the startups in bioinformatics are just in the software part of the space, so they have less potential for a Theranos-style scientific blunder, since they’re instead helping with the data management, analysis and workflow portion.
But startups in the genetic testing area should definitely be heeding the lamentable tale of the “blood unicorn.” As I probably have made abundantly clear by this point, there’s room for error at nearly every step of the process, so there’s a non-trivial chance that end users might get inaccurate results that could have pernicious impacts on their health.
Let’s also not forget the cautionary tale of Jurassic Park, in which dinosaurs (which, spoiler alert, are extinct) are brought back to life through the power of DNA for the purposes of creating a theme park. Then everything goes to shit and people die. I’m pretty skeptical that anyone will be so silly as to bring back dinosaurs, but the general lesson of considering long-term implications is an important one. While the current research focus may be on disease prevention, there could be a future in which you can “enhance” yourself (or your offspring, like the “designer babies” I mentioned earlier) through genetic modification. It sounds really cool, but it could also be really, really disastrous and have far-reaching consequences not just health-wise, but societally (for anyone who has played Bioshock, think of ADAM).
On a geopolitical note, improvements in understanding the human genome also open up nefarious applications. At the “mildly evil” end of the evil scale, there’s genetic modification to win sporting events like the Olympics; at the “really evil” end of the evil scale, there’s creating highly targeted biological weapons to conduct warfare against a specific subpopulation, i.e. biological genocide. In fact, this year the U.S. Intelligence Community added gene editing to its “Worldwide Threat Assessment” report.
Implanting needles in snakes’ mouths as fangs is clearly the most efficient bioweapon distribution mechanism…
Heretofore, the rigorous research and development process that synthetic pathogens require has limited the scale of biological weapons. But thanks to improvements in sequencing technology and analysis, it’s far easier to sequence viruses, which means it’s potentially easier to create new viruses, too!
While it’s probably an exaggeration to say that most people could now make a bioweapon in their backyard, as the availability of genomic databases and computing power increases, and the cost of equipment continues to decrease, it will certainly be accessible to well-funded terrorist organizations (if not already).
I definitely don’t think it’s too tin-foil-y to suggest that most of the current world powers have synthetic bioweapons at their disposal already. However, it may be a bit more tin-foily to say that the population’s increasing reliance on antibiotics really doesn’t help proactively defend against some sort of biological attack.
Hopefully these bioweapons, like nuclear weapons, aren’t ever deployed in-the-wild and are instead more about deterrence. In any case, this sort of subject could make for thrilling TV.
What’s the current scene?
There are a number of startups scattered about the various areas within the bioinformatics chain, most of which seem to have early leaders emerging within them. As far as incumbents, there’s Illumina and Qiagen in the NGS part, CLC bio (owned by Qiagen) in bioinformatics analysis, Bina Technologies (owned by Roche) and Genomic Health in data management and most recently, Intellia in gene editing. I’ve segmented out the startups based on the different sub-areas below:
Data Management: DNAnexus, Genome Compiler, Genospace, MediSapiens, SolveBio, Benchling, Omicia
Genomic Analysis: Biomatters, BioNano Genomics, CancerIQ, Maverix Biomics, One Codex, Onramp BioInformatics, Spiral Genetics, Syapse, Tute Genomics
Genome Editing: Caribou, CRISPR Therapeutics, Desktop Genetics, Homology Medicines
Genetic Testing: 23andMe, Cofactor Genomics, Color Genomics, Counsyl, NextGxDX, Recombine
There isn’t one major investor that’s all over this space, though in the lead is definitely Google Ventures. Below is the list of the major institutional and corporate VCs that have played in this space in some form:
Andreessen Horowitz, Data Collective, Felicis Ventures, First Round, Formation 8, Google Ventures, Johnson &amp; Johnson, Khosla Ventures, Mohr Davidow Ventures, New Enterprise Associates, Novartis Venture Fund, SV Angel, WuXi Healthcare Ventures, Y Combinator
Conclusion
Bioinformatics startups will find a way
The implications of a more mature bioinformatics industry are pretty radical — from curing cancer to having vastly improved patient outcomes resulting from personalized medicine. I think it’s evident from the simplicity of some current solutions, such as basic data management and collaboration tools, that there’s a long way to go until this area becomes sufficiently sophisticated and automated.
While the error problem throughout the bioinformatics chain will certainly hinder adoption of end-applications, I think the truly massive dollar amounts behind those applications will spur innovation along sooner rather than later. Perhaps even some of the washed-out founders and engineers who are victims of the current market correction will apply their computer and data science skills towards this area — it seems a more easily accessible industry than others, with collaborative incumbents.
I think the biggest long-term opportunities are in what is some of the traditionally “boring” application infrastructure software — data management, collaboration, workflow management, etc. — both in pharmaceutical research and for healthcare providers. If I were a healthcare data platform like Flatiron Health or Health Catalyst, I might be looking to scoop up some of these companies early with the long-term vision of an integrated genomic and clinical data platform.
In any case, the companies that create this sort of middleware can use those data-driven underpinnings to very easily reach up into the sexier analytics and visualization plays. I honestly wouldn’t be shocked if some of these data management software startups become the SAP or Oracle of Bioinformatics.
In contrast, I think genetic testing will eventually become a commodity, which in my opinion doesn’t justify the level of risk that startups face with it today. Admittedly, it’s a crucial part of unlocking the potential of personalized medicine, but my gut feeling is that there will just end up being a Quest Diagnostics and Laboratory Corp. of genetic testing one day. For comparison, Quest has a market cap of ~$10 billion and Laboratory has one of ~$13 billion, while SAP has one of ~$95 billion and Oracle one of ~$163 billion (not to mention all of their competitors). So I think on the whole, the software bet is the smarter one given the risk / reward trade off.
Finally, I think we’re not at all yet prepared for the ethical challenges that will arise from bioinformatics applications…but that’s another long-read post for a different time.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/bioinformatics-02.jpg" alt="stylized pic of a double helix"></p>
<p><em>WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things, and it’s also pretty useful.</em></p>
<h3 id="table-of-contents">Table of Contents:</h3>
<ol>
<li><a href="#so-what-is">So what is bioinformatics?</a></li>
<li><a href="#applications">What are the applications?</a></li>
<li><a href="#adoption">What&rsquo;s hindering adoption?</a></li>
<li><a href="#who-cares">Who cares?</a></li>
<li><a href="#risks">What are the risks?</a></li>
<li><a href="#current-scene">What&rsquo;s the current scene?</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<h2 id="a-nameso-what-isaso-what-is-bioinformatics"><a name="so-what-is"></a>So what is bioinformatics?</h2>
<p>Simply put, bioinformatics is software used to understand biological data, including (but not limited to) genomic sequences. What’s particularly cool about the field is that it brings in a bunch of expertise and methods from other areas, like statistics and computer science in order to perform analysis and glean insights from this bio data.</p>
<p>If you only care about the business stuff and don’t really care about how it works (and / or hate science), then skip ahead to the <a href="#applications">“What are the Applications?”</a> section.</p>
<h3 id="how-are-living-things-made">How are living things made?</h3>
<p><img src="/blog/img/bioinformatics-01.gif" alt="Mr. DNA from Jurassic Park gif"><em>The <a href="https://www.youtube.com/watch?v=qUaFYzFFbBU">reference</a>, for those who have had the misfortune of never watching “Jurassic Park.”</em></p>
<p>Let’s start with what constitutes life. The simplest unit of a living thing is a cell; some living things just have one cell, whereas others, like humans, have something like 37.2 trillion cells. Each cell has a nucleus, which is home to most of the genetic material within cells. Within the nucleus lie chromosomes, which are made of protein and DNA.</p>
<p>At a lower level you can think of these cells as being made of molecules, which are atoms grouped together by chemical bonds. Cells depend on three macromolecules, or molecules with upwards of 1,000 atoms, to determine the cell’s basic functioning and structure: DNA, RNA and proteins. The relationship between them is that DNA makes RNA, and RNA makes proteins (this is the central dogma of molecular biology). For the computer nerds reading, think of DNA like persistent storage, RNA like volatile memory and proteins like executables.</p>
<p>Zooming further in, genes are regions of DNA that serve as the recipe book, so to speak, for biological function. They’re what gives a living thing its biological traits — for example, I have the “blue eye” gene. For a sense of scale, there are 23 pairs of chromosomes within the human body that include over 20,000 individual genes that are made up of over 3 billion DNA base pairs.</p>
<p>So what are base pairs? The building blocks of each strand within the famous double helix of DNA are called nucleotides. Each of these nucleotide building blocks is made of either guanine, cytosine, adenine or thymine — which are shortened to C, G, A or T. But, these building blocks pair off to form base pairs, and only like the same partners: G always goes with C and A always goes with T (G-C and A-T). So, if you know the blocks in one strand of the double helix, you also know the blocks in the other.</p>
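<p>Because the pairing rules are fixed, deriving one strand from the other is purely mechanical. Here’s a tiny, just-for-illustration Python sketch of that complement rule (the example sequence is made up):</p>
<pre><code># Watson-Crick pairing: A-T and G-C, so one strand fully determines the other.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the partner base at each position of the given strand."""
    return "".join(PAIR[base] for base in strand)

print(complement("GATTACA"))  # CTAATGT (reverse it to read the partner strand in the conventional direction)
</code></pre>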
<p>As I said before, DNA makes RNA and RNA makes proteins, but how? DNA contains exons, which get converted to messenger RNA (mRNA), which delivers the message of which proteins are needed in a cell. The official way to describe exons is as “coding” for proteins, since like a computer program, they end up telling the cell what proteins to have.</p>
<p>Introns, in contrast, don’t code for proteins. Their relevance to this discussion is that they happen to also show where individual genes are separated. That isn’t to say that they are useless otherwise — they’re quite important for regulating how proteins show up in cells, but that’s not important for what I’ll be covering.</p>
<p>This is a good illustration of how all these things link:
<img src="/blog/img/bioinformatics-03.png" alt="Reference pic about chromosomes, DNA, nucleosome, etc."></p>
<p>When you put all the genes within a living thing together, you get its genome. You might be familiar with the Human Genome Project, an international research effort launched in 1990 dedicated to mapping out the “human genome.” I have “human genome” in quotes because what they mapped out is an amalgamation of genomes — each human being has a unique genome with special variants in genes. They accomplished their goal in 2003, making it a warm and fuzzy example of the world banding together towards the common good.</p>
<p>The project also spurred a lot of innovation in the field. The way they were going about mapping the human genome was through DNA sequencing, the method for which at the time hadn’t evolved since the 1970s, other than getting a boost as computers became more powerful. But, concurrent with the start of the Human Genome Project, new sequencing methods, aptly called “next generation sequencing” methods, or NGS for short, began being developed and had gained a market foothold by the time the project finished. So, what exactly does DNA sequencing entail?</p>
<h3 id="dna-sequencing-process">DNA Sequencing Process</h3>
<p><img src="/blog/img/bioinformatics-04.png" alt="Pic of DNA encoding"></p>
<p>Before you can figure out the purpose of specific genes, the DNA in question must first be sequenced. For simplicity’s sake I’m sticking with DNA sequencing in this section vs. other types of sequencing, because an already lengthy post would have to become a novel.</p>
<p>DNA sequencing just means determining the exact order of the building blocks (nucleotides) within the DNA, and more specifically the order of its bases — the Gs, Cs, As and Ts. I’ll walk you through a next-generation sequencing (NGS) process, since that’s been the catalyst for rapidly decreasing costs of sequencing, although there’s also the Sanger method, which is the “old school” method.</p>
<h4 id="library-preparation">Library Preparation</h4>
<h5 id="1-dna-samples">1. DNA Samples</h5>
<p>First you have to start with some sort of sample from which to extract DNA. It can be from blood, fossils, saliva or tissues, although saliva is now the go-to for collecting samples from humans given its ease of procurement.</p>
<p><img src="/blog/img/bioinformatics-05.jpg" alt="DNA purification"></p>
<h5 id="2-starting-material">2. Starting Material</h5>
<p>But, there’s also a lot of other stuff in saliva (or any of the samples) that needs to be weeded out. Isolating DNA is done by “purifying” it from a sample through lysis, which disrupts the cell in order to release and separate its biological contents. There are physical and chemical methods for performing cell lysis, the coolest of which is probably sonication — harnessing ultrasonic sound energy to fragment the cell.</p>
<p>Once the cell is broken open, the non-DNA contents are removed via chemical methods, like adding detergents to remove the cell’s surface materials and enzymes that break down proteins and RNA. But those chemicals also have to be separated, so some extra chemistry magic happens (or a wash / spin cycle in a centrifuge) to get the DNA to bind together. Now the purified DNA can be extracted as starting material. As with many things in life, the more starting material, the better.</p>
<h5 id="3-fragmentation">3. Fragmentation</h5>
<p>Now, if we’re talking about human DNA, the genome is going to be really long and would thus take a really long time to sequence. So, the next step is cutting the DNA into smaller pieces for sequencing to help speed up the process. Once sequenced, these fragments yield what are officially termed “reads,” a term I’ll use going forward. And fittingly, the collection of DNA fragments you generate is called a “library.”</p>
<p>But how do you determine how big you want these fragments to be? It largely depends on how the fragments will be sequenced and for what they’re being sequenced. Most NGS methods sequence up to 400 base pairs (“bp”) during one sequence cycle (called a “run”); the old-school Sanger method typically sequences up to 900bp.</p>
<p>The two typical approaches for this are either physical or enzymatic. One example physical method is acoustic shearing, which may be even cooler than sonication; the DNA sample is placed into a glass vial which is then subjected to acoustic energy that continually creates and collapses microbubbles. The process of growing these bubbles and causing them to implode creates shockwaves that have sufficient power to break down the sample DNA into random fragments. The power of the microbubbles fragments DNA pretty quickly with little loss of the DNA sample, and creates fragments ranging from 100bp to 5,000bp.</p>
<p>The other common physical method is nebulization, involving nebulizer devices which use compressed air to convert liquids into a fine mist. DNA is pushed through a small hole in the nebulizer, creating a mist which is then collected. The resulting fragments are typically 500bp to 3,000bp, depending on how quickly the DNA is forced through the hole. This method is pretty quick, but can cause DNA to be lost in the process.</p>
<p>Enzymes capable of degrading DNA can also be used to fragment it into smaller pieces. Although this approach is more consistent than physical methods, it can also alter the fragments by introducing insertions or deletions. So, your method of choice really depends on your end goal.</p>
<h5 id="4-repairs--adapters">4. Repairs &amp; Adapters</h5>
<p>First, let’s take a look at how a DNA molecule looks, broken apart by its ribbons (or strands):</p>
<p><img src="/blog/img/dna-molecule.png" alt="Diagram of a DNA molecule broken apart by its ribbons / strands, by Kelly Shortridge"></p>
<p>Each arrow represents one strand of DNA; I’ve laid this out linearly, but as you very likely know (and if the abundance of pictures in this post hasn’t reinforced it), the strands twist to form a double helix. The 3’ end of one of the DNA strands aligns with the 5’ end of its partner strand. Each strand is “read” from its 5’ end to its 3’ end. So, if the end of one strand sticks out farther than its partner’s, the fragment will be “repaired” by filling in the gaps (A with T and C with G) so both ends are flush.</p>
<p>This is crucial for the next step, which is putting “adapters” onto the DNA strands. Adapters are short DNA molecules that have been synthesized specifically to help in the sequencing process. Adapters are generally provided in a kit, and the adapter sequences therein will be added to the 5’ and 3’ ends of every fragment within one library.</p>
<p>Some may help put “primers” into their correct place, while others may help the DNA fragment stick to a surface in what’s called “immobilization.” The primers have to match the beginning and the end of the DNA fragment, since they’ll serve as the guide for what DNA fragment is amplified in the next step. The final step in the library preparation, immobilization, just means that each single DNA molecule is attached to a bead, which is then anchored to some solid surface, like a glass plate.</p>
<h4 id="amplification">Amplification</h4>
<p>The polymerase chain reaction (PCR) is effectively used as a copy machine for a DNA sequence, in what’s known as “amplification.” This thinking is actually somewhat similar to the methods for improving image quality within satellite imagery that <a href="/blog/posts/wtfunding-space-data-satellite-imagery/">I discussed in my prior post</a>. If you think of the target DNA as an object to be captured within a landscape, it’s far easier to take a bunch of quick pictures of it than take one big, detailed picture of the landscape and try to extract the object from it.</p>
<p>This process follows an exponential curve, too, since you’re making copies of the copies. After just 6 cycles, you have 64 copies of the target gene. After ten, you have 1,024. These copies then make up a “DNA colony” to be sequenced.</p>
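<p>The arithmetic is simple enough to sanity-check yourself, under the idealized assumption that every cycle perfectly doubles the copies:</p>
<pre><code class="language-python"># Idealized PCR math: each cycle doubles the copies of the target region,
# so one template becomes 2**n copies after n cycles. Real reactions fall
# short of perfect doubling, but the exponential shape holds.
for cycles in (6, 10, 20, 30):
    print(cycles, "cycles:", 2 ** cycles, "copies")
# 6 cycles: 64 copies, 10 cycles: 1024 copies, and so on
</code></pre>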
<h4 id="sequence-reaction">Sequence Reaction</h4>
<p><img src="/blog/img/bioinformatics-06.gif" alt="DNA sequencing gif"><em>Note: this is a sequencing UI from a TV show and thus should be disregarded entirely as resembling reality</em></p>
<p>NGS is also known as high-throughput sequencing, which just means that there’s tons more data that comes out of the sequencing process, and it’s due to parallelization. There are a few different sub-methods of NGS, the most common of which seems to be Illumina’s sequencing by synthesis (SBS) — though the others also depend on the sort of library preparation and amplification discussed above.</p>
<p>In SBS, the immobilized DNA is “washed” with one of the four nucleotide bases (either T, A, C or G) at a time to see which gets incorporated. For example, if the fragment’s template is GCGAATCG, and your wash is “A”, both of the A’s in the middle will incorporate the wash and the others will be washed away. Then, using the power of lasers and fluorescence, a machine can record which part of the fragment incorporated the base, showing _ _ _ A A _ _ _. Next, the “A” dye would be chemically removed so the next cycle can start, a “G” wash would be used, the laser magic happens, and the machine would record G _ G A A _ _ G, and so forth one nucleotide at a time until the sequence is complete.</p>
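<p>Here’s a toy version of that bookkeeping, using the same simplified convention as my example above (recording the template base directly and ignoring all the actual chemistry):</p>
<pre><code class="language-python"># Simulate which positions get recorded during each single-base wash.
template = "GCGAATCG"
observed = ["_"] * len(template)

for wash in "AGTC":          # one nucleotide wash per cycle
    for i, base in enumerate(template):
        if base == wash:     # this position incorporates the wash and lights up
            observed[i] = wash
    print(wash, "wash:", " ".join(observed))
# A wash: _ _ _ A A _ _ _
# G wash: G _ G A A _ _ G
# ...until the full template, GCGAATCG, is recovered
</code></pre>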
<p>For frame of reference, these machines are really, really expensive. At the low end, Illumina’s Miniseq “benchtop” sequencer is about $50,000 and produces an output of up to 7.5GB in just 24 hours. But at the high end, its heavy-duty sequencer, the HiSeqX, is sold for $10 million and only in units of 10, producing an output of up to 1.8TB in less than three days. Even with their hefty price tags, these machines are still far cheaper and faster than Sanger sequencing (the “old school” method).</p>
<h4 id="sequence-assembly">Sequence Assembly</h4>
<p>The process above generates sequencing “reads,” but not the genome itself — which is where sequence assembly comes in, and is also where data science starts to get involved. De novo sequence assembly, which seems to be the most commonly used, reconstructs an organism’s likely genome sequence based on the reads. The word “likely” here is important, since there’s no guarantee that it’s correct, but instead is approximating the source of these fragments.</p>
<p>As you might guess, this leaves some room for error, which I’ll touch on in the “what’s hindering adoption” section. Errors can be in the form of false negatives, such as thinking a fragment is a mistake or just a repeat, or the fragments could just be linked up together in the wrong spots…plus there’s no guarantee that there weren’t errors in the sequencing process itself. To better illustrate why this process is so tricky, Wikipedia has a great analogy:</p>
<blockquote>
<p>The problem of sequence assembly can be compared to taking many copies of a book, passing each of them through a shredder with a different cutter, and piecing the text of the book back together just by looking at the shredded pieces. Besides the obvious difficulty of this task, there are some extra practical issues: the original may have many repeated paragraphs, and some shreds may be modified during shredding to have typos. Excerpts from another book may also be added in, and some shreds may be completely unrecognizable.</p>
</blockquote>
<p>One example algorithm used in de novo assembly is overlap-layout-consensus (OLC), which leverages a graph to represent the reads. First, overlaps between fragments are computed, and then any two fragments that overlap are connected by an edge. The computing problem then becomes how to find a path through the graph that visits every fragment, which involves a lot of graph theory (this is essentially the Hamiltonian path problem).</p>
<p>The other primary assembly algorithm is the de Bruijn graph (DBG), which is also based on graph theory. First, the reads are broken into smaller pieces of the same size (called k-mers). Then, the graph is created by linking k-mers wherever the end of one overlaps the beginning of another, and assembly becomes a matter of finding a path through the graph that uses every overlap. This sounds linear, but isn’t.</p>
<p>The image below should help visualize each algorithm and compare how they differ:
<img src="/blog/img/overlap-consensus-de-bruijn-graph.png" alt="Visualizing the de Bruijn graph and overlap-layout-consensus algorithms, by Kelly Shortridge"></p>
<p>To move on, let’s assume for now you’ve done a pretty good job sequencing and assembling the DNA in question. It isn’t just there to be admired; the point is to gain something useful from it, which leads into sequence analysis.</p>
<p><img src="/blog/img/velociraptor-dna.jpg" alt="Raptor with DNA overlayed"><em>An example of a DNA sequence and foreshadowing of my “what are the risks” section</em></p>
<h3 id="sequence-analysis">Sequence Analysis</h3>
<p>Sequence analysis is where the startup software solutions addressing bioinformatics begin to appear. This is the process of taking the sequenced DNA and applying meaning to it — determining and reporting what the genes are and what their purpose is.</p>
<h4 id="homology-search">Homology Search</h4>
<p>“Homology searching” is a fancy way of saying looking for similarity between sequences, and it’s the first step (or at least the first attempt) towards the end goal of understanding the genes you’ve just sequenced. “Homology” itself means shared ancestry between genes in different species — after all, all species evolved from common ancestors. So, when sequences from two distinct species are similar, you can argue that there’s homology.</p>
<p>If you are able to search a database of known sequences to see where they might be similar to your sequence, then you can much more easily identify certain genes and their functions, as well as speculate on evolutionary relationships. In an ideal scenario, if the results of your homology searches cover all the genes you need identified, and they already describe what the genes are for, then you’ve arrived at your end-result. You’ve also saved yourself a lot of other potential steps, which are described right after this.</p>
<p>So how do researchers compare sequences? It involves a lot of data science and also necessitates having a rigorous dataset against which to compare. The more genes that are identified in other species, such as mice, the greater the ability to search human DNA sequences for similar genes.</p>
<p>The most commonly used tool for homology search is BLAST, for “Basic Local Alignment Search Tool,” which is favored due to its speed, at the expense of some accuracy. For those familiar with algorithms, it’s a heuristic. The way it compares a query sequence to a sequence database is by finding “high-scoring pairs,” or HSPs for short, which represent overlaps that seem statistically significant. The speed comes from creating “words” within a sequence, which are generally 11 letters long in DNA sequences, and running the search on each word, narrowing down which HSPs fit as the search moves through the letters. The scoring relies on a substitution matrix, which would look something like this in BLAST:</p>
<p><img src="/blog/img/substitution-matrix-dna.png" alt="An illustration of a substitution matrix for DNA sequence analysis, by Kelly Shortridge"></p>
<p>While BLAST is the current market leader, improving upon it is clearly an area of interest within academia. Like in many data science projects, making an algorithm both faster and more accurate is the holy grail. For projects with lots of sequence data, BLAST becomes computationally expensive — and given the explosion of sequence data in recent years, researchers have developed and proposed alternatives that potentially offer greater efficiency and ameliorate computational cost concerns. I could publish an academic journal on all the approaches proposed, but I’ll explain one I find particularly neat and badassly-named, GHOSTX.</p>
<p>GHOSTX gains its speed advantage by analyzing both length and match score — the algorithm will only ingest sequences that are both close to the appropriate length of the target sequence and deemed to be statistically similar content-wise. To oversimplify a bit, it’s able to do this by using suffix arrays, an example of which is below:</p>
<p><img src="/blog/img/suffix-array-dna.png" alt="An illustration of GHOSTX and suffix arrays for DNA sequence analysis, by Kelly Shortridge"></p>
<p>This is much more efficient for searching, since now you just have an array of pointers to text snippets (suffixes) that are sorted in alphabetical order. So, what’s being stored isn’t “A$,” “ATA$” and so forth, but “7” or “5”; that is, what’s stored is only the position in the original sequence where each suffix starts, listed in the suffixes’ alphabetical order. In other words, you now have an index for all the different suffixes, making it easy to find where your sequence matches others in the database.</p>
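<p>A naive version of that index takes only a couple of lines (real implementations like GHOSTX build it far more cleverly than sorting full strings, but the idea is the same):</p>
<pre><code class="language-python"># Build a (very naive) suffix array: the starting position of every suffix,
# listed in the suffixes' alphabetical order. The "$" is a sentinel marking
# the end of the sequence.
def suffix_array(text):
    text = text + "$"
    return sorted(range(len(text)), key=lambda i: text[i:])

seq = "CATCATATCAGTCATAGT"   # the running example sequence from this post
for start in suffix_array(seq):
    print(start, (seq + "$")[start:])
</code></pre>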
<p>Using these various methods, even if you find a matching result for a gene, there still may be insufficient data on what the gene does (i.e. lacking gene annotation, which I’ll describe in a bit). In those cases, and others in which homology search doesn’t complete the puzzle, you’ll have to pursue the following steps as well.</p>
<h4 id="gene-identification">Gene Identification</h4>
<p>If the homology search fails, or for whatever reason you just don’t want to perform a homology search, gene identification is the next step. What you’ll be looking for is the region of DNA that specifically codes for its function, i.e. codes for protein. These are called open reading frames (ORFs): stretches of DNA read in triplets of base nucleotides (again: A, T, C or G) that researchers use to initially identify where a gene probably lies.</p>
<p>Fancy data science can be involved in this step, too, in the form of gene prediction (which is just the predictive form of gene identification). There are three primary methods of doing so: statistical, comparative and empirical (the last being homology, which I went through above).</p>
<p>As an example of a statistical method, gene prediction programs using neural networks can be trained on the organism whose DNA is in question and then applied to the target sequence. Hidden Markov Models (HMMs) also use training data to return the most likely structure of a gene, which is useful if there’s no gene in a sequence or only a partial gene.</p>
<p>Comparative methods use DNA from another species to help predict gene function, since there’s a lot of overlap in how our genes work. The common example is using mouse DNA to determine genes in human DNA; after all, we’re a lot more like a mouse than we are a plant. In fact, 95% of genes that actually code for a biological function match one for one between mice and humans. The theory behind this is that evolutionary pressures led to a general, optimized blueprint for how mammals should function.</p>
<p>However, both these methods still require experimentation to verify that the gene prediction is accurate…at least for now, until these methods (particularly statistical) become more accurate.</p>
<h4 id="translation--database-search">Translation &amp; Database Search</h4>
<p>Once you have identified your gene, whether through homology search or gene identification, you need to translate that gene into its proteins — the stuff that brings to life whatever the genes have blueprinted. These translators are pretty easily found on various academic websites. For example, the sequence I’ve been using in my examples throughout, “CATCATATCAGTCATAGT,” by pure luck happens to have an ORF (the region that actually codes for proteins). It’s “ATGACTGATATGATG,” which sits in the “opposite” DNA strand, i.e. the partner strand of “CATCATATCAGTCATAGT.”</p>
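<p>You can check that example yourself with a few lines of code; this is just the complement trick from earlier plus a search for the first ATG (a real ORF scan would also insist on an in-frame stop codon, which I’m glossing over):</p>
<pre><code class="language-python"># Take the opposite (partner) strand of the running example sequence and
# grab the stretch starting at the first ATG, which is where the ORF sits.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def opposite_strand(seq):
    return "".join(PAIRS[b] for b in reversed(seq))

seq = "CATCATATCAGTCATAGT"
other = opposite_strand(seq)       # ACTATGACTGATATGATG
start = other.find("ATG")
print(other[start:])               # ATGACTGATATGATG
</code></pre>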
<p>Once you have your ORFs, you can search them against protein databases to see if there’s already data on them. If there is, it’ll help enormously in the next and crucial step, gene annotation, since you’ll have a head start on knowing what purpose the gene serves.</p>
<h4 id="gene-annotation">Gene Annotation</h4>
<p>Gene annotation is the process in which a specific gene is assigned a specific functional or physical feature. Ideally this is largely automated and just enhanced with human expertise, because it would take a very long time for one person to manually annotate a DNA sequence. The functional features generally are drawn from a specific vocabulary to form an ontology, or a formal method of naming and defining things.</p>
<p>There are two components of annotation: the gene’s structure and its function. Structure mostly just determines which regions code for proteins, while functional annotation records the gene’s biological and biochemical functions. Function just means what proteins are there, or what proteins are hypothetically there (such as predicted by gene finding programs), which in turn determines how the gene influences an organism’s operations. The gene is also given a name, either a new one or an existing name if it turns out to be identical to a previously documented gene.</p>
<p>Automated annotation uses data science in order to help predict the relevant gene annotations, like a gene’s functions. Bayesian networks and similar statistical methods learn from prior annotations, but they don’t help with finding new gene functions. Automated annotation certainly helps speed up the process, but it’s basically impossible for it to be accurate across the whole genome, so manual annotation is still needed to clean up any errors or fill in the gaps.</p>
<p>Annotation is one of those things that sounds really easy to do (“just write down what the gene does!”), but is actually really tricky in practice. The quality of annotation is largely dependent on the quality of the process before it. If the gene prediction was faulty, or the sequencing shoddy, the result will be greater difficulty and / or error in annotation.</p>
<h4 id="upload--stop-collaborate-and-listen">Upload &amp; Stop, Collaborate and Listen</h4>
<p><img src="/blog/img/cold-storage-dino-dna.jpg" alt="Cold storage of dino DNA"><em>This is not the sort of DNA data storage I mean.</em></p>
<p>Once the DNA has been sequenced and annotated, it has to be stored…which sounds simple enough, but isn’t quite. In a brief crossover to my industry, security and compliance are actually pretty important when considering biological data storage; after all, someone’s genome is extremely private information (though admittedly a mouse probably doesn’t care if its genomic data gets hacked).</p>
<p>Adding to the complexity of storage, there are various types of databases for genomic data — some contain empirical genomic data, others predicted genomic data or structural data. They might also hold just the raw sequencing data, or data that’s been cleaned up a bit. Some are publicly available, but some aren’t. The databases also have to be able to facilitate search across sequences.</p>
<p>While databases may be a bit passé in other areas of tech, this is actually a reasonably hot area within bioinformatics. Making it easy for customers to search and combine different types of genomic data, such as via an API, is hugely useful in bioinformatics applications, but even helping them manage their data is a plus as well, such as through version control (since this sort of data is always being updated). Some startups are also focused on unifying this data with other sorts of medical data, such as patient outcomes, which helps streamline their customers’ processes considerably by not requiring them to visit lots of different databases and tools.</p>
<p>Collaboration is another key part of the bioinformatics end-stage. For example, when designing a pharmaceutical drug, the R&amp;D team needs to collaborate with the clinical trials team, who needs to collaborate with the clinical team in order to maintain a complete feedback loop. So, bioinformatics software is also about optimizing workflows to produce more robust results — which may sound ridiculously outmoded to techies at the cutting edge, but, well, there it is.</p>
<p><img src="/blog/img/ian-malcolm-there-it-is.gif" alt="Ian Malcolm gif saying &amp;ldquo;Well, there it is&amp;rdquo;"><em>The O.G. thought leader.</em></p>
<p>Now, with these query-able databases of annotated genes (i.e. you have your sequenced DNA handy and know more or less what the random letters mean in a biological sense), you can get to work on using them towards a variety of use cases.</p>
<hr>
<h2 id="a-nameapplicationsawhat-are-the-applications"><a name="applications"></a>What are the applications?</h2>
<p>While genes really shouldn’t be thought of based on what diseases they cause, the most obvious applications of bioinformatics revolve around discerning better treatments of diseases. The ability to map out genes and relevant mutations of complex and pervasive diseases such as cancer, diabetes or those that cause infertility could help research into which drugs or other forms of therapy might alleviate — or even eliminate — them.</p>
<p>With that lens, there are three major areas that can benefit the most from bioinformatics: pharmaceuticals, personalized medicine and genetic testing.</p>
<h3 id="pharmaceuticals">Pharmaceuticals</h3>
<p><img src="/blog/img/bioinformatics-07.JPG" alt="Pharmaceutical person doing something"></p>
<p>Pharmaceutical companies can leverage bioinformatics for a variety of applications, from target discovery and drug design to improving the efficacy of existing drugs.</p>
<p>Pharmacogenomics studies how people’s responses to drugs are affected by their genes towards a more tailored approach to pharmaceuticals. You also might hear the term “pharmacogenetics,” which is typically used interchangeably with pharmacogenomics but does have a slight difference. Pharmacogenomics involves looking for differences between people at the genetic level that can explain drug response, while pharmacogenetics involves looking for a genetic reason for why there is a specific drug response. So think of pharmacogenomics as a whole-genome approach while pharmacogenetics deals with just one interaction between a drug and genes.</p>
<p>Right now most medications take a “one size fits all” approach, when in reality a person’s genetic makeup may determine how beneficial a drug is or what side effects they experience. In the future, this means doctors could analyze your genome against a variety of drugs for a specific condition to optimize your therapy — part of personalized medicine, which I’ll discuss in greater detail below.</p>
<p>This type of “predictive” approach would be a big benefit to patients, as they would theoretically no longer have to suffer through a trial and error period that can sometimes come with harmful reactions. Everything from the type of drugs to the dose amount could be tailored based on your predicted reaction to the medication. And, when the data on your response based on this predicted approach is recorded, it would help improve prescription algorithms, resulting in a huge data-network-effect.</p>
<p>For example, at least one in ten Americans takes antidepressants. Which antidepressant and dosage works for some people vs. others currently is more art than science. Further, these medications can often come with adverse side effects that reduce the patient’s quality of life — from insomnia to emotional numbness — or even make their depression worse. If instead their genome could be used to accurately predict which drug, and how much of it, would improve their depression the most, or at least cause them to suffer the least, better outcomes would be reached far more quickly and painlessly.</p>
<p>So how would pharmaceutical companies reach this stage? Genetic data could be used to isolate a particular protein related to a disease and conduct research towards finding a drug effective against this protein. Or, they could use genetic data to better understand receptors (such as a protein) so they know which would be the best target for their drug molecule. There’s also plenty of room for improvement with existing drugs; oftentimes drugs are effective against conditions for reasons that aren’t well understood, and bioinformatics could help determine why that is.</p>
<p>Ultimately, the real dream for the future of pharmaceuticals is a combination of bioinformatics-informed drug development with a feedback loop of patient outcomes to continually inform and improve drug design. This would likely serve as the cornerstone of personalized medicine, which leads us to…</p>
<h3 id="precision--personalized-medicine">Precision / Personalized Medicine</h3>
<p><img src="/blog/img/bioinformatics-08.jpg" alt="Doctor with an adorable puppy"><em>In 2030, scientists will discover the key to all health issues is giving someone a puppy.</em></p>
<p>“Personalized” or “precision” medicine is all about tailoring treatment and prevention based on an individual’s unique characteristics, including their genes. It’s unrealistic (or so my 2016 brain believes) to assume that in 2100 AD there will be a unique drug for each person and their conditions, but it is realistic to assume, even in the nearish future, that populations can be segmented in a way to substantially improve treatment. This isn’t just about drugs, either — it extends to medical devices along with prevention of diseases, too.</p>
<p>Part of this nearish future is improving the clinical trials process, the experiments performed with human participants to test the efficacy and safety of treatment methods. As discussed in the pharmaceutical example above, having a record of genetic compositions of potential trial participants can help improve the decision-making behind who should be included in the trial. With the clinical trials market estimated to reach $72 billion by 2020, you can see why having the ability to more carefully select trial participants in order to optimize clinical trial outcomes is worthwhile.</p>
<p>In a somewhat extreme case, Estonia is using a government-driven approach to further these initiatives — it’s collecting the DNA of all its citizens to create a genetic biobank. While right now the use case is primarily genomic research, farther down the line, having a strong sense of the average genetic “posture,” so to speak, of the Estonian population could help inform public health policy and population-wide personalized medicine.</p>
<p>Here in the United States, Obama created the Precision Medicine Initiative (PMI) in January 2015 within the Health and Human Services (HHS) department, primarily led by the National Institutes of Health (NIH). Its goal is simply to do away with the aforementioned “one size fits all” approach to improve medical outcomes for patients. It’s still in its early innings and has a somewhat slow timeline; in February of this year, NIH announced that it will study the genomes of one million volunteers by 2019.</p>
<p>The hope is the PMI will help kickstart precision medicine both by collecting a large swath of data as well as through developing improved analysis capabilities. It’s likely that everyday citizens won’t see the fruits of personalized medicine until quite a bit after 2019, as personalized treatments will require research, development and testing time, even once the genetic data collection and analysis methods improve. The presumable safety regulations around personalized medicine also may cause snags and delays, which I’ll discuss a bit more in the next section.</p>
<h3 id="genetic-testing">Genetic Testing</h3>
<p><img src="/blog/img/bioinformatics-09.jpg" alt="Newborn baby&amp;rsquo;s foot"></p>
<p>Genetic testing arguably must serve as the basis for any personalized medicine initiative, as only by knowing an individual’s genes can any treatment plans be tailored. A genetic test is simply a test that identifies either the person’s genes or changes in their genes (including proteins). There are currently a few uses for genetic tests, or at least use cases leveraged in marketing.</p>
<p>Parents or couples considering becoming parents can use genetic testing to evaluate what sort of genes they will pass along to their offspring, generally with the goal of identifying genetic disorders. For couples pursuing in vitro fertilization (IVF), genetic testing can also find defects within the embryos created via IVF before they are implanted into the carrier, which increases the odds of implanting a viable embryo.</p>
<p>Individuals can test their own susceptibility to disorders and see what sort of mutated genes they have, or use it as a diagnostic test if they have a currently unidentified ailment. Or, more humorously, as I heard from a friend, it can be used to tell a sibling that you have less Neanderthal DNA than they do. Testing, in theory, helps individuals take preventative actions to stave off diseases or illnesses later in life.</p>
<p>While most genetic tests are currently based on DNA, longer-term there will likely be tests involving RNA and proteins. As discussed in the first section, DNA is the blueprint; this means it’s potentially a decent predictor of genetic risks, but not great at measuring the body’s current state for diagnostic purposes.</p>
<h3 id="genome-editing">Genome Editing</h3>
<p>While this isn’t a major application (yet), it’s pretty cool and a company doing it (Intellia) just IPO’d, so I want to touch on it briefly. Turns out you can also leverage DNA sequences and knowledge of genes’ functions in order to modify the DNA sequence of an organism, or even of a virus.</p>
<p>The key tech enabler here has been CRISPR, or “clustered regularly interspaced short palindromic repeats,” which actually is part of a bacterial immune system. To use a reverse analogy from my industry, it’s a bit like most anti-virus software, where “signatures” of viruses are kept around so the system knows how to defend against them in the future.</p>
<p>The next part is using CRISPR-associated proteins, or “Cas,” that will actually chop virus DNA in half like a pair of scissors upon confrontation, the result being that the virus can no longer replicate.</p>
<p><img src="/blog/img/crispr.gif" alt="Gif of how Crispr works"></p>
<p>This is pretty cool stuff, and while most of the research leveraging this technology is towards curing diseases, this could make the “designer baby” fears a reality, or even facilitate genetically engineered bioweapons (more on the latter in the “risks” section).</p>
<hr>
<h2 id="a-nameadoptionawhats-hindering-adoption"><a name="adoption"></a>What&rsquo;s hindering adoption?</h2>
<p><img src="/blog/img/bioinformatics-12.jpg" alt="A faceless person in a latex suit with DNA printed on them"><em>In the future, apparently we’ll be featureless and wear latex suits with our genomes printed on them.</em></p>
<p>Now that sequencing costs have decreased, what are the other barriers to progress? A common theme that’s emerging from these WTFunding pieces is that being able to manage and analyze a lot of complex data turns out to be really hard…and bioinformatics is no different.</p>
<p>While it’s now far cheaper to generate genomic data, there’s still no guarantee that it’s accurate. Since gene annotation is still performed by humans, at least to some degree, there’s still plenty of room for human error — not to mention oftentimes the annotations are simply incomplete. You can play a “fun” game if you want and keep backing up the chain I walked you through earlier to spot the numerous chances for error to be incorporated, sequence assembly and sequencing itself being the most likely candidates.</p>
<p>You might have also caught while reading the “how it works” part that the algorithms for various processes require a lot of computational resources. Obviously the past decade has seen enormous strides in the availability of computing power, but it’s still reasonably time and cost prohibitive given the sheer size of genomic data. Someone else has already done the math on the size of the human genome in digital terms, and it’s <a href="https://medium.com/precision-medicine/how-big-is-the-human-genome-e90caa3409b0#.fstnfweh1">roughly 200GB of raw sequencing data</a> because, as explained before, you need a bunch of “reads” of the sequence.</p>
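<p>If you want a rough feel for where a number like that comes from, here’s a back-of-the-envelope version (these are my assumptions, not the linked article’s exact math): call it roughly 3 billion base pairs, around 30x coverage of reads, and about two bytes per base once you store each call plus its quality score uncompressed:</p>
<pre><code class="language-python"># Back-of-the-envelope estimate of raw sequencing data per human genome.
# All three inputs are rough assumptions; tweak them and the total moves.
genome_bases = 3.2e9      # approximate length of the human genome in base pairs
coverage = 30             # how many overlapping reads cover each position
bytes_per_base = 2        # base call plus quality score, uncompressed

raw_bytes = genome_bases * coverage * bytes_per_base
print(round(raw_bytes / 1e9), "GB of raw sequencing data")   # ~192 GB
</code></pre>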
<p>On the data management side, a big question is how do you make genomic data easily accessible and combinable with other types of health data (such as patient outcomes), since that’s a key to unlocking a larger addressable market. There are startups addressing precisely this issue, in figuring out how to sensibly combine different data types and link them up in an intuitive manner, but it’s safe to say that data harmony hasn’t yet been reached industry-wide.</p>
<p>The analysis portion specifically presents two big challenges. First, in a fragmented market for tools and algorithms and other analysis methods, how do you figure out what to use? Second, how do you know you’re not just seeing patterns in data that aren’t actually there? Many analysis tools are written with a single application in mind, and from what I gather, they aren’t very well-written, either. So organizations that could potentially benefit from bioinformatics may not be willing to dip their toe in the water until a discernable market with user-friendly software emerges.</p>
<p>And on the genetic testing side, the main questions are: Who pays for the tests? Will the government? Will health insurance providers? Who advises individuals on their test results? How does the data easily get to health care providers? While I’ll get into the incentive problems within genetic testing later, part of the problem currently is that genetic tests are still somewhat expensive for individuals. It presents a bit of a chicken and egg problem, given patients need to be genetically tested in order to gain benefits from tailored pharmaceuticals or personalized medicine, but neither of those benefits is yet at a mature enough stage to incentivize people to take the tests.</p>
<hr>
<h2 id="a-namewho-caresawho-cares"><a name="who-cares"></a>Who cares?</h2>
<p><img src="/blog/img/bioinformatics-11.jpg" alt="Image of pills over twenty dollar bills"><em>Pillz on ya billz</em></p>
<p>VCs love numbers, so I’m going to present the big, high-level ones first:</p>
<ul>
<li>Global pharmaceutical industry = ~$1 trillion annually</li>
<li>U.S. pharma industry = ~$400 billion annually</li>
<li>Global healthcare industry = ~$3 trillion annually</li>
</ul>
<p>So combined, we’re looking at a $4 trillion global market, which should assuage any VC’s fears of insufficient market size, since grabbing 1% of the market gets you $40 billion in revenue. For reference, Pfizer has just about 5% of pharma market share and a market capitalization of just over $200 billion.</p>
<p>But, the clever VC says, the initial markets will just be those with the highest pain points, not the entire industry! Those top therapy classes would be oncology (cancer), diabetes and mental health. So here are more numbers:</p>
<ul>
<li>Cancer drug spending in the U.S. = ~$42 billion annually (in 2014)</li>
<li>Direct medical costs of diagnosed diabetes in the U.S. = ~$176 billion annually (in 2013)</li>
<li>Direct cost of mental illness in the U.S. = ~$147 billion annually (in 2009)</li>
</ul>
<p>VCs should still be quite happy, since combined (and given inflation), this is likely nearing a $400 billion market, and just in the U.S. alone. With the 1% market share goal, you’re still looking at $4 billion in revenue today.</p>
<p>Obviously, it isn’t just VCs who care about the big, shiny numbers. The aforementioned pharma companies likely get cartoon-dollar-sign-eyes contemplating the ability to charge more for genomic-data-driven drugs tackling these illnesses. Medical providers could also pad their pockets by leveraging personalized medicine to differentiate from the competition and thus create the ability to mark up prices.</p>
<p>But on a less cynical note, there’s the real chance that genome-based personalized treatments won’t be solely a marketing shtick and actually result in significantly better outcomes for patients. So anyone suffering from an illness could potentially see benefits from personalized medicine. Further, there are many illnesses where the current understanding of what causes them can be summed up by ¯\_(ツ)_/¯, and bioinformatics could potentially lead to breakthroughs that would give hope of proper treatment and prevention to those afflicted.</p>
<hr>
<h2 id="a-namerisksawhat-are-the-risks"><a name="risks"></a>What are the risks?</h2>
<p><img src="/blog/img/ian-malcolm-warning.gif" alt="Ian Malcolm saying &amp;ldquo;yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn&amp;rsquo;t stop to think if they should&amp;rdquo;"><em>And we all know how well Jurassic Park turned out</em></p>
<p>There’s a pretty clear incentive problem on the pharmaceutical side. Pharma companies’ goal is to have more people buy their drugs. One way to incentivize more people to buy your drugs is by recommending certain drugs based on a person’s genetic profile. In a world in which genetic testing was performed by an altruistic, neutral third party, people might only be prescribed the drugs they need. In a world in which pharma companies have massive budgets, particularly for marketing, it isn’t difficult to imagine that they might “sponsor” genetic testing recommendations to ensure that their drugs are recommended.</p>
<p>As with any industry with a growing reliance on data-driven and increasingly automated approaches, there’s the potential for error as well. Relying too much on data models could mean that there is less caution than today towards prescribing treatments, which could result in disastrous “black swan” type events. Though, as I mentioned, it’s far more probable that healthcare providers will instead commit the error of seeing patterns that don’t actually exist, the implications of which could range from unnecessary and costly treatments to wrong and costly treatments that set back the patient’s progress, or even harm them.</p>
<p>Taking a look at another risk angle, the recent <a href="http://www.wsj.com/articles/theranos-is-subject-of-criminal-probe-by-u-s-1461019055">Theranos debacle</a> is now a cautionary tale for medical startups. Many of the startups in bioinformatics are just in the software part of the space, so they have less potential for a Theranos-style scientific blunder, since they’re instead helping with the data management, analysis and workflow portion.</p>
<p>But startups in the genetic testing area should definitely be heeding the lamentable tale of the “blood unicorn.” As I probably have made abundantly clear by this point, there’s room for error at nearly every step of the process, so there’s a non-trivial chance that end users might get inaccurate results that could have pernicious impacts on their health.</p>
<p>Let’s also not forget the cautionary tale of Jurassic Park, in which dinosaurs (which, spoiler alert, are extinct) are brought back to life through the power of DNA for the purposes of creating a theme park. Then everything goes to shit and people die. I’m pretty skeptical that anyone will be so silly as to bring back dinosaurs, but the general lesson of considering long-term implications is an important one. While the current research focus may be on disease prevention, there could be a future in which you can “enhance” yourself (or your offspring, like the “designer babies” I mentioned earlier) through genetic modification. It sounds really cool, but it could also be really, really disastrous and have far-reaching consequences not just health-wise, but societally (for anyone who has played Bioshock, think of ADAM).</p>
<p>On a geopolitical note, improvements in understanding the human genome also open up nefarious applications. At the “mildly evil” end of the evil scale, there’s genetic modification to win sporting events like the Olympics; at the “really evil” end of the evil scale, there’s creating highly targeted biological weapons to conduct warfare against a specific subpopulation, i.e. biological genocide. In fact, this year the U.S. Intelligence Community added gene editing to its <a href="https://www.dni.gov/files/documents/SASC_Unclassified_2016_ATA_SFR_FINAL.pdf">“Worldwide Threat Assessment”</a> report.</p>
<p><img src="/blog/img/bioinformatics-10.jpg" alt="Snake with needles for fangs, it&amp;rsquo;s pretty weird"><em>Implanting needles in snakes’ mouths as fangs is clearly the most efficient bioweapon distribution mechanism…</em></p>
<p>Heretofore, biological weapons have been limited in scale by the rigorous research and development process that synthetic pathogens require. But thanks to improvements in sequencing technology and analysis, it’s far easier to sequence viruses, which means it’s potentially easier to create new viruses, too!</p>
<p>While it’s probably an exaggeration to say that most people could now make a bioweapon in their backyard, as the availability of genomic databases and computing power increases, and the cost of equipment continues to decrease, it will certainly be accessible to well-funded terrorist organizations (if not already).</p>
<p>I definitely don’t think it’s too tin-foil-y to suggest that most of the current world powers have synthetic bioweapons at their disposal already. However, it may be a bit more tin-foil-y to say that the population’s increasing reliance on antibiotics really doesn’t help proactively defend against some sort of biological attack.</p>
<p>Hopefully these bioweapons, like nuclear weapons, aren’t ever deployed in-the-wild and are instead more about deterrence. In any case, this sort of subject could make for thrilling TV.</p>
<hr>
<h2 id="a-namecurrent-sceneawhats-the-current-scene"><a name="current-scene"></a>What&rsquo;s the current scene?</h2>
<p>There are a number of startups scattered about the various areas within the bioinformatics chain, most of which seem to have early leaders emerging within them. As far as incumbents, there are Illumina and Qiagen in the NGS part, CLC bio (owned by Qiagen) in bioinformatics analysis, Bina Technologies (owned by Roche) and Genomic Health in data management, and most recently, Intellia in gene editing. I’ve segmented out the startups based on the different sub-areas below:</p>
<h3 id="data-management">Data Management</h3>
<ul>
<li>DNAnexus</li>
<li>Genome Compiler</li>
<li>Genospace</li>
<li>MediSapiens</li>
<li>SolveBio</li>
<li>Benchling</li>
<li>Omicia</li>
</ul>
<h3 id="genomic-analysis">Genomic Analysis</h3>
<ul>
<li>Biomatters</li>
<li>BioNano Genomics</li>
<li>CancerIQ</li>
<li>Maverix Biomics</li>
<li>One Codex</li>
<li>Onramp BioInformatics</li>
<li>Spiral Genetics</li>
<li>Syapse</li>
<li>Tute Genomics</li>
</ul>
<h3 id="genome-editing-1">Genome Editing</h3>
<ul>
<li>Caribou</li>
<li>CRISPR Therapeutics</li>
<li>Desktop Genetics</li>
<li>Homology Medicines</li>
</ul>
<h3 id="genetic-testing-1">Genetic Testing</h3>
<ul>
<li>23andMe</li>
<li>Cofactor Genomics</li>
<li>Color Genomics</li>
<li>Counsyl</li>
<li>NextGxDX</li>
<li>Recombine</li>
</ul>
<p>There isn’t one major investor that’s all over this space, though in the lead is definitely Google Ventures. Below is the list of the major institutional and corporate VCs that have played in this space in some form:</p>
<ul>
<li>Andreessen Horowitz</li>
<li>Data Collective</li>
<li>Felicis Ventures</li>
<li>First Round</li>
<li>Formation 8</li>
<li>Google Ventures</li>
<li>Johnson &amp; Johnson</li>
<li>Khosla Ventures</li>
<li>Mohr Davidow Ventures</li>
<li>New Enterprise Associates</li>
<li>Novartis Venture Fund</li>
<li>SV Angel</li>
<li>WuXi Healthcare Ventures</li>
<li>Y Combinator</li>
</ul>
<hr>
<h2 id="a-nameconclusionaconclusion"><a name="conclusion"></a>Conclusion</h2>
<p><img src="/blog/img/life-finds-a-way.gif" alt="Ian Malcolm saying &amp;ldquo;Life finds a way&amp;rdquo;"><em>Bioinformatics startups will find a way</em></p>
<p>The implications of a more mature bioinformatics industry are pretty radical — from curing cancer to having vastly improved patient outcomes resulting from personalized medicine. I think it’s evident from the simplicity of some current solutions, such as basic data management and collaboration tools, that there’s a long way to go until this area becomes sufficiently sophisticated and automated.</p>
<p>While the error problem throughout the bioinformatics chain will certainly hinder adoption of end-applications, I think the truly massive dollar amounts behind those applications will spur along innovation sooner rather than later. Perhaps even some of the washed-out founders and engineers who are victims of the current market correction will apply their computer and data science skills towards this area — it seems a more easily accessible industry than others, with collaborative incumbents.</p>
<p>I think the biggest long-term opportunities are in what is some of the traditionally “boring” application infrastructure software — data management, collaboration, workflow management, etc. — both in pharmaceutical research and for healthcare providers. If I were a healthcare data platform like Flatiron Health or Health Catalyst, I might be looking to scoop up some of these companies early with the long-term vision of an integrated genomic and clinical data platform.</p>
<p>In any case, the companies that create this sort of middleware can use those data-driven underpinnings to very easily reach up into the sexier analytics and visualization plays. I honestly wouldn’t be shocked if some of these data management software startups become the SAP or Oracle of Bioinformatics.</p>
<p>In contrast, I think genetic testing will eventually become a commodity, which in my opinion doesn’t justify the level of risk that startups face with it today. Admittedly, it’s a crucial part of unlocking the potential of personalized medicine, but my gut feeling is that there will just end up being a Quest Diagnostics and Laboratory Corp. of genetic testing one day. For comparison, Quest has a market cap of ~$10 billion and LabCorp one of ~$13 billion, while SAP has one of ~$95 billion and Oracle one of ~$163 billion (not to mention all of their competitors). So I think on the whole, the software bet is the smarter one given the risk / reward trade-off.</p>
<p>Finally, I think we’re not at all yet prepared for the ethical challenges that will arise from bioinformatics applications…but that’s another long-read post for a different time.</p>
]]></atom:content>
        </item>
        
        <item>
            <title>Apple vs. FBI: Privacy &amp; Inequality</title>
            <link>https://kellyshortridge.com/blog/posts/apple-vs-fbi-privacy-inequality/</link>
            <pubDate>Thu, 17 Mar 2016 19:29:13 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/apple-vs-fbi-privacy-inequality/</guid>
            <description>
In recent news, there’s been a fiery public relations battle between the FBI and Apple. While there has been vocal protest by the technology community, and particularly the information security community, against the FBI’s request, it has largely been rooted in (very valid) concerns over the degradation of software security, national security and privacy in a general sense. However, there’s a larger societal concern that cannot be ignored.
By way of brief background, the gist of the legal fight is that the FBI wants Apple to create a custom version of its operating system for iPhones, iOS, that would allow it to circumvent the safety measure of reducing the frequency with which the device’s passcode can be guessed when locked. The FBI wants this in order to access the contents of the San Bernardino shooter’s personal device, although this software, if created, would in fact allow them to circumvent the safety measure of any iPhone. I recommend viewing John Oliver’s segment on the debate for an accessible, but more comprehensive overview.
The issue itself, I fear, is representative of the early stages of a dichotomy between the “haves” and “have nots,” but for privacy. This is more important than I believe many realize as far as deepening inequality and injustice. This isn’t just about protecting photos of your children, or keeping secret the absurd questions you type into Google.
This is about the fact that devices are now an integral part of our digital identities; that, as dictated by the Constitution, all citizens have a right to privacy; and that, while not necessarily the norm, there are very real abuses of power within the law enforcement and legal system and bias against underprivileged groups.
Society and technology are now in an entrenched dependency; tech inequality will progressively manifest as socioeconomic inequality. The stratification of tech knowledge between higher and lower socioeconomic groups is leading to the latter’s vicissitudes in education and employment opportunities, but I also argue that the Apple vs. FBI case is the start of the age in which it manifests in legal rights as well.
Even if Apple wins, they will continue their invigorated efforts towards ensuring that any requests for customer data by law enforcement simply cannot be fulfilled. This will be a fantastic result — for those who can afford an iPhone. The reality is that owning an Android is a less expensive option, but does not come with these protections and is unlikely to in the near future.
Apple can enforce this level of security because it both manufactures the device and develops its operating system. Google could do the same with its own line of Nexus devices, but otherwise there would need to be a collaboration across all the various manufacturers of Android devices in order for it to happen. Even if such a collaboration does happen, it will take time and likely still command a price premium in the market for these efforts.
While there are a number of security measures individual Android users can take to reproduce most of Apple’s protections, they require a higher degree of tech ability than most consumers, not to mention lower socioeconomic groups, possess. So even though it is technologically feasible, the nature of socio-tech inequality as it relates to education still creates a substantial hurdle towards an accessible method of ensuring privacy.
Less affluent consumers are thus left without these protections, the protections that protect our private data on our mobile devices from law enforcement and enforce the Fourth Amendment. These less affluent consumers, particularly those belonging to racial minorities, are also much more likely to be accused of criminal activity.
There are two key disadvantages in this arena that underprivileged consumers subject to discrimination face, which I’ll illustrate by way of analogy (which may seem superficial at first blush). In Jay-Z’s song “99 Problems,” he raps that, when asked by a police officer to search his car, he retorts, “Well, my glove compartment is locked, so is the trunk in the back / And I know my rights, so you gon’ need a warrant for that.”
First, underprivileged people may not have the ability to “lock” the digital versions of their glove compartments and trunks, because they cannot afford the luxury of privacy via strong security. Second, Jay-Z is aware of his rights and the need for the officer to have a warrant, and such knowledge is invaluable in maintaining his privacy — more importantly, his innocence — in the face of accusatory and biased law enforcement. However, the knowledge and understanding of digital privacy rights, and how to protect them, is barely existent among even affluent, highly-educated groups, so consumers with greater resource constraints are even less likely to have satisfactory awareness.
With many highly public instances of abuse of power by law enforcement, it would be naive to assume, should the FBI win this case, that the power that having full access to the contents of a mobile device brings would not have the same potential to be similarly abused. When you consider that divulging passwords is protected by the Fifth Amendment as “knowledge,” having an easy technology option — simply automatically updating the device with law enforcement’s proprietary version of the operating system, thereby removing passcode protections — is understandably seductive.
Parallel construction is a very real concern among groups subject to routine discrimination, and it isn’t difficult to imagine law enforcement illegally unlocking phones, searching for the evidence they want and retroactively claiming that this data was the cause for the suspicion. This includes liberal interpretations of incriminating evidence — jokes could potentially be purposefully misconstrued as evidence, with the defendants facing racial or socioeconomic bias that would make it difficult for them to contest.
The combination of pervasive smartphone cameras and the viral nature of social media has helped foment the recent outrage against transgressions by law enforcement. Knowing that data recorded on smartphones now has the potential to highlight unprofessional and illegal conduct, it is also not difficult to imagine that law enforcement would desire the ability to easily circumvent security protections to prevent such ignominy.
Even with these examples, it’s essential that we not forget that privacy is granted to us as a fundamental right, and that even the potential for its violation is worthy of outrage, regardless of cause. Rights are meant to be for all citizens, but in an FBI-wins world, only those who can afford the efforts of Apple’s R&amp;D department will be able to buy their rights back in a depressingly simoniacal manner.
Though it may appear hyperbolic, this case may be one that has a resounding impact in shaping our society going forward. Even if Apple wins, many will still be unable to afford the luxury of privacy, at least in the short term; their prospects are far more dire if Apple loses. Should the FBI win, we are unequivocally cementing this inequality and further disenfranchising those who cannot afford the luxury of privacy.
I like to think that the United States, at its best, can serve as a role model for innovation to the rest of the world. Is the sort of society in which only those who are well-off can realize their right to privacy the one we want to build?
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/apple-privacy-scales.png" alt="Scales of justice, with an iPhone on one side"></p>
<p>In recent news, there’s been a fiery public relations battle between the FBI and Apple. While there has been vocal protest by the technology community, and particularly the information security community, against the FBI’s request, it has largely been rooted in (very valid) concerns over the degradation of software security, national security and privacy in a general sense. However, there’s a larger societal concern that cannot be ignored.</p>
<p>By way of brief background, the gist of the legal fight is that the FBI wants Apple to create a custom version of its operating system for iPhones, iOS, that would allow it to circumvent the safety measure of reducing the frequency with which the device’s passcode can be guessed when locked. The FBI wants this in order to access the contents of the San Bernardino shooter’s personal device, although this software, if created, would in fact allow them to circumvent the safety measure of any iPhone. I recommend viewing <a href="https://www.youtube.com/watch?v=zsjZ2r9Ygzw">John Oliver’s segment</a> on the debate for an accessible, but more comprehensive overview.</p>
<p>The issue itself, I fear, is representative of the early stages of a dichotomy between the “haves” and “have nots,” but for privacy. This is more important than I believe many realize as far as deepening inequality and injustice. This isn’t just about protecting photos of your children, or keeping secret the absurd questions you type into Google.</p>
<p>This is about the fact that devices are now an integral part of our digital identities; that, as dictated by the Constitution, all citizens have a right to privacy; and that, while not necessarily the norm, there are very real abuses of power within the law enforcement and legal system and bias against underprivileged groups.</p>
<p>Society and technology are now in an entrenched dependency; tech inequality will progressively manifest as socioeconomic inequality. The stratification of tech knowledge between higher and lower socioeconomic groups already disadvantages the latter in education and employment opportunities, but I also argue that the Apple vs. FBI case is the start of the age in which it manifests in legal rights as well.</p>
<p>Even if Apple wins, they will continue their invigorated efforts towards ensuring that any requests for customer data by law enforcement simply cannot be fulfilled. This will be a fantastic result — for those who can afford an iPhone. The reality is that owning an Android is a less expensive option, but does not come with these protections and is unlikely to in the near future.</p>
<p>Apple can enforce this level of security because it both manufactures the device and develops its operating system. Google could do the same with its own line of Nexus devices, but otherwise there would need to be a collaboration across all the various manufacturers of Android devices in order for it to happen. Even if such a collaboration does happen, it will take time and likely still command a price premium in the market for these efforts.</p>
<p>While there are a number of security measures individual Android users can take to reproduce most of Apple’s protections, they require a higher degree of tech ability than most consumers, not to mention lower socioeconomic groups, possess. So even though it is technologically feasible, the nature of socio-tech inequality as it relates to education still creates a substantial hurdle towards an accessible method of ensuring privacy.</p>
<p>Less affluent consumers are thus left without these protections: the protections that shield the private data on our mobile devices from law enforcement and give force to the Fourth Amendment. These less affluent consumers, particularly those belonging to racial minorities, are also much more likely to be accused of criminal activity.</p>
<p>There are two key disadvantages in this arena that underprivileged consumers subject to discrimination face, which I’ll illustrate by way of analogy (which may seem superficial at first blush). In Jay-Z’s song <a href="https://www.youtube.com/watch?v=32Xh9L-AqA8">“99 Problems,”</a> he raps that, when asked by a police officer to search his car, he retorts, <a href="http://genius.com/17560/Jay-z-99-problems/Well-my-glove-compartment-is-locked-so-is-the-trunk-in-the-back-and-i-know-my-rights-so-you-gon-need-a-warrant-for-that">“Well, my glove compartment is locked, so is the trunk in the back / And I know my rights, so you gon’ need a warrant for that.”</a></p>
<p>First, underprivileged people may not have the ability to “lock” the digital versions of their glove compartments and trunks, because they cannot afford the luxury of privacy via strong security. Second, Jay-Z is aware of his rights and the need for the officer to have a warrant, and such knowledge is invaluable in maintaining his privacy — more importantly, his innocence — in the face of accusatory and biased law enforcement. However, the knowledge and understanding of digital privacy rights, and how to protect them, is barely existent among even affluent, highly-educated groups, so consumers with greater resource constraints are even less likely to have satisfactory awareness.</p>
<p>With many highly public instances of abuse of power by law enforcement, it would be naive to assume, should the FBI win this case, that the power that having full access to the contents of a mobile device brings would not have the same potential to be similarly abused. When you consider that divulging passwords is protected by the Fifth Amendment as “knowledge,” having an easy technology option — simply automatically updating the device with law enforcement’s proprietary version of the operating system, thereby removing passcode protections — is understandably seductive.</p>
<p><a href="https://en.wikipedia.org/wiki/Parallel_construction">Parallel construction</a> is a very real concern among groups subject to routine discrimination, and it isn’t difficult to imagine law enforcement illegally unlocking phones, searching for the evidence they want and retroactively claiming that this data was the cause for the suspicion. This includes liberal interpretations of incriminating evidence — jokes could potentially be purposefully misconstrued as evidence, with the defendants facing racial or socioeconomic bias that would make it difficult for them to contest.</p>
<p>The combination of pervasive smartphone cameras and the viral nature of social media has helped foment the recent outrage against transgressions by law enforcement. Knowing that data recorded on smartphones now has the potential to highlight unprofessional and illegal conduct, it is also not difficult to imagine that law enforcement would desire the ability to easily circumvent security protections to prevent such ignominy.</p>
<p>Even with these examples, it’s essential that we not forget that privacy is granted to us as a fundamental right, and that even the potential for its violation is worthy of outrage, regardless of cause. Rights are meant to be for all citizens, but in an FBI-wins world, only those who can afford the efforts of Apple’s R&amp;D department will be able to buy their rights back in a depressingly simoniacal manner.</p>
<p>Though it may appear hyperbolic, this case may be one that has a resounding impact in shaping our society going forward. Even if Apple wins, many will still be unable to afford the luxury of privacy, at least in the short term; their prospects are far more dire if Apple loses. Should the FBI win, we are unequivocally cementing this inequality and further disenfranchising those who cannot afford the luxury of privacy.</p>
<p>I like to think that the United States, at its best, can serve as a role model for innovation to the rest of the world. Is the sort of society in which only those who are well-off can realize their right to privacy the one we want to build?</p>
]]></atom:content>
        </item>
        
        <item>
            <title>WTFunding: Space Data (Satellite Imagery)</title>
            <link>https://kellyshortridge.com/blog/posts/wtfunding-space-data-satellite-imagery/</link>
            <pubDate>Mon, 04 Jan 2016 18:49:24 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/wtfunding-space-data-satellite-imagery/</guid>
            <description>
WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things; it’s also pretty useful.
Table of Contents: So what is “space data” and “satellite imagery”? What are the applications? What’s hindering adoption? Who cares? What are the risks? What’s the current scene? Conclusion So what is “space data” and “satellite imagery”? I derive a lot of pleasure by calling it “space data,” but it is more accurately termed satellite or geospatial imagery, and the analysis thereof. This is all about image data collected by satellites about the Earth’s surface and the software that helps make these images useful to humans.
To get to the final, useful image, there are three main steps: launching a satellite into orbit, collecting the images of the Earth’s surface, then processing and analyzing the images. While image processing &amp; analysis, with the goal of gleaning intelligence from satellite imagery, is drawing the most VC funding at the moment, it’s important to understand each step to get both the opportunities and risks startups face within this sub-sector.
If you are more of a tl;dr person, you might want to skip down the page a bit to the next section, “What are the Applications?” and continue from there.
Step 1: Launch to Orbit Landsat 8, an Earth Observation satellite operated by NASA.
I’ll be focusing on remote sensing satellites, but there are also communications and weather satellites and reconnaissance (aka spy) satellites and maybe even satellites that can shoot lasers out of the sky (nothing is beyond their reach, after all). First, let’s cover some basics about satellites.
Rockets are the things that take satellites on their journey into Earth’s orbit. Earth’s gravity is what keeps satellites in orbit, just like how gravity causes the Moon to orbit us. Satellites are fitted with gyroscopes and reaction wheels, spinning discs that help keep the satellites oriented and on course (some also carry magnetorquers, which leverage the Earth’s magnetic field for the same purpose). Solar panels power the satellites once they’re in their proper orbit, since they have nice access to the Sun’s rays.
There are three different types of orbit: geostationary, geosynchronous and polar (or sun synchronous). Geostationary means that the satellite is stationary relative to the Earth’s surface. Communication and weather satellites use this orbit, staying way above the Earth at a specific point on the Equator (way above = ~36,000 km). Geosynchronous is similar in that the orbital period matches the Earth’s rotation, but the satellite isn’t parked over a single point on the Equator. Polar (aka sun synchronous) means the satellite orbits at an altitude and inclination at which it will consistently pass over a given location at the same local time. This is what remote sensing satellites use, and they are typically closer to the Earth’s surface than the communications satellites (only hundreds of km above).
Step 2: Data Collection Objects on Earth reflect energy from the Sun. This energy is in different “bands,” typically visible, infrared and water vapor. Each object reflects a different amount of energy, giving them a “spectral signature.” “Remote sensing” is how satellites collect these reflections — and just means sensing the energy bands from a remote place (like the Earth’s orbit).
How remote sensing works (no animals were harmed in the making of this image).
Satellites carry sensors that allow them to do this remote sensing. Passive sensors just collect the reflections (radiation) that are emitted from Earth, using the Sun’s energy as its source of electromagnetic radiation. Active sensing systems carry their own source of electromagnetic radiation, which is directed to the Earth’s surface. Just like the passive sensing system, it then measures the amount of energy that is reflected back.
Different sensors are designed to measure different parts of the electromagnetic spectrum. For example, the 450nm - 495nm wavelength band would measure the visible color blue. The 570nm - 590nm wavelength band would measure the visible color yellow. And near-infrared starts at roughly 700nm, at the bottom of an infrared range that extends all the way up to 1mm.
The Electromagnetic Spectrum
Luckily, the electromagnetic spectrum is pretty handy for identifying objects. The way objects reflect back energy is consistent and tends to be unique. Water doesn’t reflect much visible or infrared energy, but vegetation strongly reflects infrared.
Satellites can even be used for things like detecting gravitational pull on the Earth’s surface. If you have two satellites, one leading and one following, when the lead passes over an area with high gravitational pull, it’ll start speeding up (and slow down over areas with weaker gravitational pull). NASA launched twin satellites in 2002 to do just that (GRACE mission). Being able to detect gravity helps map out areas below the Earth’s surface, from tunnels to potential oil wells.
Capturing lots of energy/wavelength bands creates what are called “multispectral images.” This means that the final image contains a few layers of images that each capture a different energy band. For example, one layer might capture the infrared band while another might capture the green band. You can even have superspectral and hyperspectral, but that just means there are lots and lots of layers with different energy bands. The more bands, the more opportunity for differentiation of objects on the ground. On the opposite side of the spectrum (pun intended), panchromatic images capture a single band of energy in black and white (and shades of grey).
As the satellite orbits around the Earth, it looks at a small chunk of the planet at a time. Even if it goes around the Earth every few hours, it may capture less than 1% of the Earth’s surface on each pass. This small chunk is represented by a pixel, just like on your TV or computer monitor. For example, one pixel could represent 50x50m on Earth. Different types of sensors allow for different pixel sizes, typically from 5m to 1km — though some have pixel sizes down to the tens-of-centimeters scale (and tend to capture a smaller portion of the Earth as a result).
As the satellite orbits, it has to capture as many pixels as it can. However, even if pixel size is the same among different sensors, this doesn’t mean the resolution is as well. Higher resolution is indicated with a smaller physical measurement; for example, an image with a pixel size of 5m will look much clearer with a resolution of 5m vs. 20m.
According to one news report, in 2013 a spy satellite was launched in California capable of “snapping pictures detailed enough to distinguish the make and model of an automobile hundreds of miles below.” Given the length of a car is approximately 5m, let’s assume that the resolution needs to be 10% of that in order to see the necessary level of detail about the car, so 0.5m or smaller. And we’d likely assume that it isn’t just panchromatic since the government likely also wants to know the color of the car. This makes it quite a bit more powerful than most commercially available multispectral resolutions. As a frame of reference, DigitalGlobe’s satellite scheduled to be launched in September 2016, the WorldView-4, will have a multispectral resolution of 1.2m.
When reading about a satellite’s imagery capabilities, it’s crucial to understand that pixels and resolution are not the same thing. The difference between a pixel and resolution is that resolution is the size of the smallest detectable feature captured whereas a pixel is the size of the smallest physical area captured (or in other words, the smallest unit of the image).
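To make the pixel-versus-resolution arithmetic concrete, here’s a toy back-of-the-envelope sketch in Python (the 5m car and the 10%-of-the-object rule of thumb come from the example above; the function names and the 0.3m sensor are made up for illustration):

```python
# Toy check: is a sensor's resolution fine enough to make out a given feature?
# Rule of thumb from the car example above: the smallest detectable feature
# should be roughly 10% of the object you care about.

def required_resolution_m(object_length_m, fraction=0.1):
    """Resolution (meters) needed to make out details on an object."""
    return object_length_m * fraction

def can_resolve(sensor_resolution_m, object_length_m):
    """True if the sensor resolution is at least as fine as what's required."""
    return required_resolution_m(object_length_m) >= sensor_resolution_m

car_length_m = 5.0
print(required_resolution_m(car_length_m))   # 0.5m, matching the estimate above
print(can_resolve(1.2, car_length_m))        # WorldView-4 multispectral: False
print(can_resolve(0.3, car_length_m))        # hypothetical 0.3m sensor: True
```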
There are also different types of resolution. Spatial resolution depends on what’s called the sensor’s “Instantaneous Field of View” (IFOV), which just means the visibility the sensor has at a particular altitude at a particular moment in time. Spectral resolution just means which parts of the electromagnetic spectrum the sensor can capture; higher spectral resolution indicates a narrower band of wavelengths captured. Radiometric resolution measures how many shades of grey there are between black and white, measured in bits; an 8-bit radiometric resolution means the sensor can measure 256 unique shades of gray. And temporal resolution is how long it takes for the satellite to revisit and re-image the same spot on Earth; for example, some satellites may capture the same area every 5 days, while for others it will be every 15 days.
Another important thing to keep in mind is that satellites are not necessarily taking photographs; these sensors are not cameras (though those spy satellites sometimes do use long-focus lenses). The mechanism is through sensing energy bands. Hence the industry lingo is remote sensing. Each image represents a lot of pixels arranged in rows and columns — think of it like putting a fragment of an image into 1,000 rows by 1,000 columns in Excel, and then zooming way out to see the full picture.
To continue that analogy, that Excel file would then represent what’s called a “scene.” Scenes can represent over 2,000km on each side. These scenes are what most people buy since they want the “big picture,” not a tiny snippet of whatever is being observed.
Again, this scene represents a physical object on the Earth’s surface. Take a pixelated picture of a cat as a toy example: let’s say the cat is just over a foot tall sitting upright, which is about 0.33m. In the scene, we can see the entire physical object (the cat) and features as small as its nose, so using its nose as a proxy for the smallest detectable feature, our resolution is approximately 0.01m — meaning we’re likely using a super secret spy satellite to spot Mr. Whiskers.
Multispectral scenes can represent a large amount of data, often with tens of millions of bytes (10MB&#43;) per scene. This is because the intensity of each pixel is stored as a single byte (an 8-bit digital number), and there are typically millions of pixels in any given scene. Recorded video can hog even more data, and high quality video is a more newly available product — and typically no longer than two minutes. How video can be used to track objects on the ground is a longer discussion that mostly talks about error correction.
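As a rough sanity check on those file sizes, here’s a tiny calculation (the scene dimensions and band count are made up but plausible; the one-byte-per-pixel-per-band assumption follows the 8-bit digital numbers described above):

```python
# Rough scene-size estimate: each pixel stores one 8-bit digital number (1 byte)
# per spectral band.
rows, cols = 3_000, 3_000   # hypothetical scene dimensions in pixels
bands = 4                   # e.g. blue, green, red, near-infrared
bytes_per_value = 1         # 8-bit DN

scene_bytes = rows * cols * bands * bytes_per_value
print(f"{scene_bytes / 1e6:.0f} MB")   # ~36 MB, i.e. "tens of millions of bytes"
```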
Step 3: Image Processing &amp; Analysis This is just where the fun begins! Now the imagery needs to be processed, in what’s novelly called “image processing.” This just means a human uses a computer to work with the images. Having higher spatial resolution and more data doesn’t mean you’ll necessarily get more information out of the images — and that’s why image processing is so important.
Why is there a human required, you might ask? While it’d be fantastic to have automated image analysis software that can intelligently scan images and highlight relevant objects or issues, that’s not the current state of things at all. Humans still sadly need to be involved to help with filtering and classification based on their project goals.
Satellite imagery of wildfires in Idaho
Processing is part of image analysis, which requires special types of statistics and analytical methods that are tailored for spatial data — though primarily anchored around interpolation (predicting new, unknown data points within a group of some known data points). There’s an emphasis in geostatistics on being able to estimate how wrong the gaps filled in via prediction might be; things like elevation for 3D modeling are particularly tricky to determine aerially, so there is ripe opportunity for error. One example method is kriging, which helps predict the appropriate values in unobserved locations by using a weighted average of surrounding areas and estimating its accuracy.
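For the curious, here’s a minimal sketch of ordinary kriging in Python with NumPy. It’s a toy (a made-up linear variogram and four made-up elevation samples), not production geostatistics, but it shows the “weighted average of surrounding areas plus an accuracy estimate” idea in code:

```python
import numpy as np

def linear_variogram(h, slope=1.0):
    """Toy variogram: assumes dissimilarity grows linearly with distance."""
    return slope * h

def ordinary_kriging(xy_known, z_known, xy_target):
    """Predict the value at xy_target as a weighted average of known points."""
    n = len(z_known)
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)

    # Kriging system: semivariances between known points, plus a Lagrange
    # multiplier row/column that forces the weights to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = linear_variogram(d)
    A[n, n] = 0.0

    b = np.ones(n + 1)
    b[:n] = linear_variogram(np.linalg.norm(xy_known - xy_target, axis=1))

    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z_known
    variance = w @ b   # rough estimate of how wrong the prediction might be
    return estimate, variance

# Four made-up elevation samples (meters) and a target point in the middle.
xy = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
z = np.array([100.0, 105.0, 95.0, 110.0])
print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))
```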
For satellite imagery, there are software tools that have been developed with the specific use case of geospatial imagery processing and analysis in mind. These tools are called geographical information systems (GIS for short). The usual aim of image processing is to create an image that makes sense to a human — in essence, make the energy band sensing look more like a photograph. Or more simply put, “what the heck is in this spot?”
At a higher level, the end goal from all of this is information extraction. Even with basic mapping (like Google Maps), the point of the image is to gain some insight about stuff here on the ground. The type of information that is desired can be different between applications (which I’ll discuss later), and very rarely is available with just the basic image taken. Thus, humans need tools to help extract information from the images.
The typical chain of image processing is data import, image restoration &amp; rectification, image enhancement and information extraction. The methods for performing this processing chain have been shifting somewhat in recent years. Now, satellite data can be retrieved via APIs, and various processing tools are similarly made available via APIs. But, to understand some of the challenges, it’s important to walk through these steps.
Data Import While spy satellites might take actual film, eject it and have it intercepted by military aircraft, the process to transmit imagery back to Earth isn’t quite as cool for commercial satellites. As the satellites orbit around the Earth, they send data down (“downlinked”) via directional antennae to receiving stations on Earth. They can also receive instructions on what to capture, which are sent up from Earth (“uplinked”). These communications are conducted over the X-band, a specific frequency range.
The images the satellites take are generally compressed on-board, with their total storage nearing a terabyte. Some can even perform image fusion (which is discussed shortly) before transmission to help improve the image’s resolution.
There are various GIS data formats and different types of data that can be imported. Much like in my prior post, a number of different vendors have their own data formats, and they are often proprietary, making a lot of big data analysis stuff a lot harder. There are also unique data formats for specific parts of imagery – such as the feature geometry, feature attributes, topology, etc.
Satellite imagery is commonly stored in digital numbers (DN), which means each pixel gets an 8-16 bit value representing a physical quantity (like color or temperature). This helps minimize the storage volume.
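As a toy example of what that looks like in practice, converting stored DNs back to a physical quantity is commonly a simple linear gain/offset scaling (the gain and offset below are invented; real calibration coefficients come from the sensor’s metadata):

```python
import numpy as np

# Stored 8-bit digital numbers get converted back to a physical quantity with a
# linear calibration. The gain/offset values here are made up for illustration.
dn = np.array([[0, 64, 128], [192, 255, 30]], dtype=np.uint8)
gain, offset = 0.04, 1.5
physical = gain * dn.astype(np.float64) + offset   # e.g. radiance-like units
print(physical)
```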
There are a number of geospatial-specific database management systems (DBMS), which is software used to store GIS data to allow it to be later retrieved and modified – essentially the tool that helps organize data. This is actually an important part of image processing, since it allows for querying and comparisons across different pieces of data.
Most of the time, this data is stored in distributed systems, which just means it isn’t all in one database. Distributed databases optimize scalability, which is why they are used in this use case, due to geospatial data’s typically large file sizes.
The relational database model is the one that is primarily used for GIS data. I’ll skip an explanation and discussion of the various database systems and types for now, but if you aren’t familiar with them, here’s a link to the Wiki.
Analysis techniques are used to create and define models of relationships between different data sets. The traditionally used technique is the entity-relationship model. In this, there are specific entities, like buildings, forests, and agricultural land, which have different attributes and “members” of the entities (like government, residential or religious buildings). The relationship part of entity-relationship just means members of these entities are compared; for example, you could compare different residential buildings in New York City to determine what attributes they have in common (perhaps they have rooftop gardens, are thinner, etc.).
The relational model works well for this sort of data, since it allows for different datasets to be compared. The key (pun intended) here is that there are keys for each entry in the data sets that link the data sets between each other.
Using a relational model for databases allows the humans to make queries with the Structured Query Language (known best by its acronym, SQL). You can make a query such as: select all buildings with rooftops within 100km of Union Square in Manhattan. Each building will have a specific coordinate (i.e. spatial relation), and (as we’ll cover in a few paragraphs) can be marked as having a rooftop. Geographic search is the most important query out of these, and a large part of why there are geospatial-specific DBMS.
There are two types of data stored in these systems: spatial or attribute data. Spatial meaning “georeference” (i.e. location on Earth), and attribute meaning feature-related data (normally stored in tables). From the example above, the building coordinates are within the spatial data, and whether they have rooftops is within attribute data.
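As a toy illustration of that rooftop query (the table layout, coordinates, and rooftop flag are all made up, and a real geospatial DBMS would use proper spatial indexes instead of a bolted-on Python distance function), the idea looks something like this:

```python
import math
import sqlite3

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

conn = sqlite3.connect(":memory:")
conn.create_function("distance_km", 4, distance_km)   # expose it to SQL

# Spatial data (coordinates) and attribute data (rooftop flag) in one toy table.
conn.execute("CREATE TABLE buildings (name TEXT, lat REAL, lon REAL, has_rooftop INTEGER)")
conn.executemany("INSERT INTO buildings VALUES (?, ?, ?, ?)", [
    ("Flatiron-ish building", 40.741, -73.989, 1),   # made-up rows
    ("Somewhere near Albany", 42.652, -73.757, 1),
    ("Rooftop-less neighbor", 40.736, -73.990, 0),
])

union_square = (40.7359, -73.9911)
rows = conn.execute(
    "SELECT name FROM buildings "
    "WHERE has_rooftop = 1 AND 100 >= distance_km(lat, lon, ?, ?)",
    union_square,
).fetchall()
print(rows)   # only the nearby building that has a rooftop
```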
Spatial data has two further subcategories: vector and raster. Vector data deals with geometry in a few different ways. Think 2D, like boundaries of forests, or 1D (lines), like following the path of a river. You can also have specific spatial points that represent a particular item, and they are technically 0D. Raster data represents surfaces, but not just as far as 3D things like elevation; it can also show things like population density or temperature.
Raster data is where our spectral data comes in, along with the actual imagery and topography. Each cell (remember, this means pixel) gets assigned a specific value based on its primary feature. For example, one cell may have vegetation assigned to it, while another may have water.
GIS software will link all this data and create these models with a specific structure. Typically, it will start with geometrical information, then add topological and finally thematic (the raster data). This order is logical since geometrical represents the physical form and position of objects, topological is about relational position (intersections of objects), and thematic adds in the detail about objects (typically in layers).
There’s some trickiness in importing and managing this data, however. The data needs to be both accurate, meaning the map needs to match real world values, and precise, meaning described as exactly as possible. The distinction is important; you can have a map perfectly overlaid on real coordinates (accurate), but showing “this area is green” rather than a breakdown by type of vegetation (imprecise). Similarly, you could have detail down to different types of weeds (precise), but your coordinates are a mile off (inaccurate).
I’ll get into more of the challenges later, but it’s important to touch on how these errors happen, most of which are during processing. Formatting data can cause scaling to change, the data can be outdated, or there could even be an errant sensor. There are issues even with positional accuracy as far as non-land things go — it’s a lot easier to accurately determine the boundaries of a lake than the boundaries of population density.
There will also be labeling errors, whether by humans or automated processing. If I saw an image showing there was a Magnolia tree forest in some region of China, I’m unlikely to know if that is the correct type of tree in the forest or not.
These errors can snowball and ultimately make the entire analysis worthless, which is why they are such a big deal. A mining company starting a drilling project for gold in a specific spot will be none too pleased when the imagery they used actually was kilometers off, or showing the wrong type of mineral.
Some can be mitigated by supervised classification, in which a human selects an area of land they do know a lot about so that the software can then classify other areas accordingly. But that’s an inefficient and time-consuming process that requires domain expertise — so, far from ideal.
There are new sorts of structures being developed to improve object retrieval from satellite imagery databases. Some involve automatically extracting objects from imagery, then encoding their descriptors into much smaller (&lt;1% of original imagery) sizes. This allows for very fast retrieval by object shapes, which seems like a nice improvement.
Image Restoration &amp; Rectification Processing might start with getting true color on the images (i.e. making it look like a photograph, with blue water, green forests, etc.). You might be familiar with RGB values, where (255, 255, 255) is white; these represent an 8-bit range per color channel. Sometimes it’s better to have false color, which just means using unnatural colors to help highlight differences between energy bands. If your goal is to see levels of vegetation, then you may prefer a false-color composite (FCC) that lets the infrared bands really pop.
False color imagery
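A false-color composite is really just a choice of which band goes into which display channel. Here’s a minimal NumPy sketch (the band arrays are random stand-ins for real sensor data; putting near-infrared in the red channel is a classic way to make vegetation pop):

```python
import numpy as np

# Stand-in band data: in reality each of these would be a 2D array of digital
# numbers read from the sensor, one array per spectral band.
rng = np.random.default_rng(0)
nir, red, green = (rng.integers(0, 256, size=(512, 512), dtype=np.uint8) for _ in range(3))

# True color stacks (red, green, blue). For a classic false-color composite,
# map near-infrared to the red channel, red to green, and green to blue.
false_color = np.dstack([nir, red, green])
print(false_color.shape)   # (512, 512, 3): ready to display or save as an RGB image
```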
After that, you might want to make sure your image is accurate, or what is called “georectification.” You can use what are called ground control points (GCP), which just means using the coordinates of known locations on a map in order to make sure the image’s coordinates map the real physical location. There’s also “orthorectification,” which removes issues of scale by accounting for different tilts and terrains — the more diverse the Earth’s surface is, the more likely there will be distortions in the image.
There are a few different ways images can be distorted or have irregularities, and thus different techniques to help restore them. One such technique is resampling, in which a pixel gets assigned a DN based on the DNs of its neighboring pixels. Another is radiometric pre-processing, in which corrections are made to handle noise or irregularities generated from the sensors so that only the actual reflected radiation shows on the image.
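As a tiny illustration of resampling (using SciPy’s ndimage module, which is my own choice here rather than anything the GIS tools above necessarily use; real rectification also warps images onto map coordinates rather than just scaling them):

```python
import numpy as np
from scipy import ndimage

# A tiny 2x2 "image" of digital numbers, upsampled 2x in two different ways.
dn = np.array([[10.0, 200.0], [50.0, 120.0]])

nearest = ndimage.zoom(dn, 2, order=0)    # each new pixel copies its nearest neighbor
bilinear = ndimage.zoom(dn, 2, order=1)   # each new pixel blends neighboring DNs
print(nearest)
print(bilinear)
```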
Since satellites can’t control the weather (and weather modification is banned), there’s often the need for atmospheric correction, such as cloud removal. Since remote sensing is based off of how the sun reflects off objects, there can also be variations in the angle of the sun that need to be taken into account.
Image Enhancement The image enhancement phase is to improve the quality of the imagery. Some enhancements are familiar due to their use in photography, such as contrast enhancement to help highlight the differences within an image. Spatial filtering involves directly manipulating pixels for some effect. If you’ve ever played around with filters in Photoshop, or even on your phone, you get the picture. These can include image sharpening or softening, embossing, etc.
A common technique for enhancement is image fusion, which seeks to create a single, more detailed image out of multiple images. It’s also one of the ways that the tradeoff between spatial and spectral resolution can be solved.
There are a few different levels at which image fusion can be performed. First is at the pixel level, comparing pixels in different images to figure out how to pack in more detail into a pixel. Second is at the feature level, comparing sizes, lengths, shapes, etc. of the same geographic area and using statistics to combine the highest-intensity features out of different images. Third is object-level, the highest-level type of image fusion, in which images are processed separately and then combined using fancy algorithms to help maximize intensity.
There are limitations of image fusion, such as color distortion and poor quality when dealing with high resolution images, but apparently these problems are expected to be alleviated as technology improves.
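To make pixel-level fusion slightly more concrete, here’s a toy sketch of one classic, simple scheme (a Brovey-style ratio transform) that sharpens lower-resolution multispectral bands with a higher-resolution panchromatic band. The arrays are random stand-ins, and real pansharpening pipelines are far more careful about radiometry:

```python
import numpy as np

def brovey_pansharpen(ms_bands, pan):
    """Brovey-style fusion: scale each multispectral band by pan / mean(MS).

    ms_bands: array of shape (bands, H, W), already resampled onto the pan grid.
    pan:      array of shape (H, W), the higher-resolution panchromatic band.
    """
    ms = ms_bands.astype(np.float64)
    intensity = ms.mean(axis=0) + 1e-9   # avoid division by zero
    return ms * (pan / intensity)        # broadcasts across the band axis

rng = np.random.default_rng(1)
ms = rng.uniform(0, 255, size=(3, 256, 256))   # stand-in red, green, blue bands
pan = rng.uniform(0, 255, size=(256, 256))     # stand-in panchromatic band
print(brovey_pansharpen(ms, pan).shape)        # (3, 256, 256)
```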
Information Extraction Image classification is the largest part of information extraction, and means each pixel (or, more recently, object) in an image is categorized. This is important to distinguish different objects within an image and ultimately extract information from the image. For example, if you are measuring how quickly a city is expanding, you’ll want to be able to classify buildings, or even particular types of buildings.
Classification typically involves the computer (or software tool) automatically distinguishing different types of objects — like water, grass, urban areas, forests, etc. These tools aren’t perfect, not only as a function of pixel size but also that the bands of energy that objects emit may be too similar to properly distinguish them.
For pixels, there’s unsupervised and supervised classification — if you’re familiar with machine learning, you’ll already get what that means. The shortest difference is that unsupervised classification involves examining unknown pixels in an image, while supervised means examining known pixels.
Unsupervised classification groups unknown pixels into clusters based purely on their spectral similarity, and the results are then compared with reference data to figure out what category each cluster represents. It’s a partly manual process, with the user having to choose how many clusters, or groups with similar properties, to generate, and then match clusters with classes. It’s arguably more accurate than supervised classification, but it’s also more tedious due to its more manual nature.
Supervised classification will take the known data in an image, compare it with reference data and use it to extrapolate categories for the unknown parts of the image. The process is typically “training” the classification engine on sample imagery, selecting specific features, applying the right algorithm, then determining how well it worked or not.
Some issues with classification are similar to those in machine learning — you need reliable comparison data and strong sampling data in order for it to work, which is why unsupervised is often preferred.
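For readers who want to see the two flavors side by side, here’s a minimal scikit-learn sketch (my own choice of library; the “pixels” are synthetic spectra, not real imagery): k-means clustering stands in for unsupervised classification and a random forest stands in for supervised classification.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic "pixels": four band values each, drawn from two loose spectral groups
# (a vegetation-like signature with strong near-infrared, and a water-like one).
veg = rng.normal([40, 60, 50, 200], 15, size=(500, 4))
water = rng.normal([30, 40, 35, 10], 8, size=(500, 4))
pixels = np.vstack([veg, water])

# Unsupervised: group pixels by spectral similarity; the analyst then has to
# decide what each cluster actually represents.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# Supervised: train on pixels whose class is already known, then predict the rest.
labels = np.repeat([0, 1], 500)   # 0 = vegetation-like, 1 = water-like
train = rng.choice(len(pixels), size=200, replace=False)
clf = RandomForestClassifier(random_state=0).fit(pixels[train], labels[train])
predicted = clf.predict(pixels)
print(clusters[:5], predicted[:5])
```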
There’s also object-oriented image classification, or “multi-resolution segmentation,” which is a non-traditional approach (meaning it’s only come into use in the past decade or so). As the name suggests, it creates objects by grouping pixels rather than classifying individual pixels. The resulting objects have different shapes and scales, and thus can be classified more flexibly using different image layers (e.g. population density, infrared, elevation, etc.). The user is still doing supervised classification using samples and fancy algorithms, but with more accuracy when dealing with objects vs. individual pixels.
Example of how object-based image analysis works.
The general rule of thumb is that object-oriented classification is best for higher spatial resolution, since objects might consist of multiple pixels, and the other methods work fine for lower resolution (in which objects are just a pixel). Of course, as spatial resolution improves, this means that object-oriented classification might be increasingly adopted in kind.
The type of algorithm matters, too. For example, a highly tailored algorithm might eliminate any false classifications due to shadows by incorporating into its model the position of the sun and relevant ground elevations in the area based on the image’s location and time.
At the forefront of research are different automation techniques to help extract features. Methods leveraging machine vision are one example, as well as methodologies that allow for more variables for classification while maintaining a high level of accuracy (90%&#43;). It’ll likely take a few years for commercially available products to catch up to the research (along with bugs that come out when scaling to product-level use), but highly accurate automation within 5 years doesn’t seem preposterous.
Once features are classified, information can be extracted for its desired purpose. Which leads to the various applications of geospatial analysis.
What are the applications? There are a bunch of industries that benefit from using satellite-based imagery — particularly for anything in which physical trends over time are needed or they want to see stuff below the Earth’s surface. The number of applications is expanding as imaging capability improves, since higher resolution images provide a more granular view of what’s happening on Earth.
Even though the cost of purchasing multispectral imagery can be high in absolute dollar terms, relative to the cost of physical exploration it is inexpensive. But for non-profits or applications without this high cost of physical capital on the line, the reward isn’t necessarily as high.
Also, assume for any of the following applications, traders can use similar information to inform their financial bets. For example, if satellite imagery suggests that the rate of construction in China is slowing down, they might short construction materials firms or commodities as a result. Of course, this has some intriguing implications for the efficient-market hypothesis, if investors have information on a company’s operations that even the company itself might not possess.
The government has a variety of applications for geospatial imagery, and has been leveraging it as a source of intelligence for half a century. But, I’ll just be focusing on applications within commercial industries.
Current Applications Agriculture It can be hard to measure agricultural trends on the ground, so satellite imagery is immensely helpful in assessing crop health and yields, environmental changes and trends pertaining to livestock. Even when planning and maintaining agricultural sites, this imagery can map irrigation and analyze soil — even showing variations in soil’s organic matter.
Imagery highlighting irrigation.
Aside from optimizing costs and boosting productivity at large agricultural companies, there’s a general global need for improved agricultural production and better utilization of resources. Having a better sense of what and where these resources are to improve their management has significant benefits on a macro scale.
Engineering &amp; Construction Along with companies in the mining and oil &amp; gas industries, engineering &amp; construction companies have high capital costs relating to physical projects. So, geospatial imagery can help these companies visualize their projects, not just for evaluating and planning construction sites, but also for maintaining them. This helps reduce construction costs and also minimize environmental impact.
Digital elevation model of a construction site.
Being able to model construction sites in 3D is crucial for planning purposes, but also ensuring ongoing safety. And for certain project types, like airstrips, dams, power plants and sewers, you need data beyond just the visual. For example, when building an airport, not only do you need to make sure the terrain is appropriate for an airstrip, but also have 3D models for flight simulation to make sure pilots aren’t going to run into recurring issues.
Environmental Monitoring On the “save the world” side of things, environmental monitoring helps assess damage from natural disasters as well as manage natural resources. Governments can use satellite imagery to help develop disaster response plans, as well as improve environmental planning and conservation.
Imagery highlighting deforestation.
Being able to see high-level trends, like deforestation, is helpful to monitor local environmental health but even more so to evaluate potential long-term impacts. After all, trees don’t grow back overnight, so excessive “forest farming” can have devastating effects on future generations’ economic wellbeing. Not to mention being a harbinger of global climate change.
Logistics (Shipping &amp; Maritime) Logistics and shipping companies, port operators, fishers, trade organizations and governments all have an interest in geospatial imagery relating to maritime and weather patterns. On the pure logistics side, being able to track ships in transit is highly useful, as tracking systems can fail when far enough away from ports. Weather patterns and other spatial data (like terrain mapping) can also help optimize shipping routes.
Search for MH370; odd given the number of global recon satellites that it’s still missing.
Being able to monitor trading, spot illegal fishing or piracy, and help with search and rescue missions are of particular importance from a global trade perspective. Even the “little guy” can win — local fishers and fisheries are often put out of business by illegal fishing, which is more widespread than you might think.
Mining Multispectral satellite imagery has the ability to differentiate between different types of rocks, vegetation and soil, which helps mining and geology projects in a few different ways.
Imagery optimized to show rare earth elements.
First and most obviously, this imagery can help identify clays, oxides and soils for mineral mapping and exploration. This is in contrast to most humans, who would walk to the location and say, “yep, that looks like ground.” All the different energy bands will show both different types of rocks and elements as well as structural aspects of the Earth’s surface that may influence ease of mining.
Second, it helps plan out mining projects. Digging into the ground isn’t the only challenge; mining companies also have to worry about how to get access to the mine and what infrastructure would be required to support the project. And, they also need to estimate what sort of impact the project will have on the surrounding area from a human and environmental perspective.
Oil &amp; Gas Satellite imagery can help oil and gas companies reduce risk in oil exploration as well as monitor ongoing projects. The level of detail is pretty impressive, from generally detecting areas that are most productive down to even detecting seismic lines or offshore oil seepage.
The Deepwater Horizon spill being just a bit more than seepage.
Not only does it help find the areas most likely to be rich with oil, but it also helps these companies assess the potential costs and pitfalls associated with drilling in a particular area. For example, satellite imagery shows which areas have rock formations, heavy forest coverage, unfavorable weather conditions and whether they are in more remote or developed locations.
Future Applications In the next section I’ll talk about some of the challenges that have hindered adoption to date, but if geospatial imagery becomes more widely available and easier to leverage for business and operational intelligence, other industries may become customers in addition to those above.
One potential area is physical retail. A super cool application might be looking at the surrounding area and weather patterns of store locations to see what types of goods might resonate best with local customers. For example, imagery could show the levels and types of vegetation in nearby residential areas to see if stocking more garden supplies makes sense. If imagery can be updated quickly enough, retail companies could see how many cars are at a given location in order to estimate growth or decline. They could also plan new locations based on factors like accessibility or even locations that have lots of cars parked at their competitors’ stores.
In that vein, real estate is another potential application area. Much like for construction projects, real estate developers can improve planning their projects by being able to optimize residential appeal — whether by accessibility, proximity to natural spaces or avoiding high-risk zones. And the same goes for city and urban planners.
The advertising industry could leverage different types of data towards better ad targeting. Someone like Facebook could use satellite imagery to generate a wealth of data about a user’s specific location, that they can then provide as part of their user targeting suite for their customers. This could include the example above of measuring vegetation in residential areas to advertise garden supplies, or knowing proximity to mountains and trails to advertise hiking gear or mountain bikes.
As I’ll discuss a bit later, there’s also the potential that space data startups generate and sell intelligence directly to end customers, which could open up an even wider set of potential applications.
What’s hindering adoption? There isn’t necessarily one thing hindering adoption of geospatial imagery and intelligence. It’s a combination of availability, costs, latency, quality and usability. All these issues in conjunction mean there’s a barrier for many commercial enterprises to using geospatial data to their advantage.
Getting satellites into orbit so there is more imagery available is step one. The goal of many of these imagery companies is to have a constellation of satellites in orbit to allow for daily imaging of the whole planet. Launching these satellites into orbit is currently expensive, and ups the cost of the end imagery (which thereby reduces the potential customer set). So, a lot depends on SpaceX’s (and others’) ability to cut down on the cost of satellite launches. The recent successful Falcon 9 launch and landing will very likely pave the way for rocket reuse, which will help bring down these costs substantially.
Obligatory cinemagraph in the name of ‘Murica.
The delivery of imagery is historically quite slow as well. Not only do satellites capture a small part of the Earth at a time, but there’s also the issue of sending down large file sizes over transmission speeds that are just in the hundreds of MB per second range. Assuming there’s no pre-processing before the customer receives the image, the customer still has to download the image for themselves, which takes time…and any processing work needed only adds to that time. This is starting to change, as images are increasingly available online and some images are pre-processed, saving customers from having to do the image processing themselves.
Of course, images will only realistically be “near real-time,” given the transmission delay. But getting down to a matter of minutes, or even hours, is an improvement over the traditional daily or longer wait times. Faster transmission speeds could help improve the speed at which images are received as well.
Launching a satellite into space is no cheap feat, not to mention costs of ongoing operations, resulting in imagery pricing that is quite expensive. Pricing can range from $20 to $25 per square km, and there are often minimum order sizes of 25 square km a pop (meaning $500&#43;).
On the satellite design side, more development is needed in the miniaturization of components. For example, Planet Labs’ satellites are cutely described as “baguette”-size, and that’s the general trend — 172 satellites weighing 100kg (~220lbs) or less were launched in 2014. There are also sensor-related challenges, most of which can’t be remediated at the source, putting more onus on the image processing part of the chain. There are multiple tradeoffs within sensors that affect quality: spectral resolution vs. signal to noise ratio (SNR), radiometric resolution vs. SNR, data size vs. spatial resolution, and spatial resolution vs. spectral resolution.
So, there’s a long way to go with image processing software as well, particularly as it pertains to information extraction. Better automation seems to be the path forward towards improving this software, though that isn’t particularly easy, either. It isn’t surprising that automation is perhaps the biggest area of focus among many of the startups in the field. The automation is primarily in the pre-processing (rectification and restoration phase), but also through easier integration (API all the things).
While I couldn’t find such complaints stated anywhere specifically, after looking at a bunch of traditional GIS software, I’d say it has the GUI sophistication of Minesweeper from Windows 95. While I’m sure for users familiar with these interfaces it makes sense and works fine, I can’t help but imagine that a more intuitive and “typical user”-friendly UX might allow for more widespread adoption.
Who cares? The government has cared a lot for a long time, and I’d have to assume they’d be a little nervous about a bunch of new satellites being sent into orbit that may risk having spy satellites uncovered. But, they would also be able to benefit from innovations, particularly on the software-side, that are spurred by greater commercial adoption. Though based on how homely most government-facing software looks, maybe government analysts would disapprove of UI improvements.
Satellite imagery via the CIA of Osama bin Ladin’s compound.
Any of the commercial industries from earlier might care, as it can help them cut costs, curtail risks and arguably even improve revenues. So, they care to the extent that better satellite imagery and analysis can help them optimize their business, but the degree to which it does may vary. I’d imagine it’s a “nice to have,” maybe even “would love to have,” but not a “necessary to have” in most of these cases.
As described above, there are a lot of “save the world” use cases that could legitimately help improve the environment and even potentially human rights. But generally those budgets are much thinner than for-profit industries.
On the darker side of things, there’s the potential for invasion of privacy. This currently pertains not to satellites but to high altitude aircraft below orbit (as far as we, the unknowing public, know), but it certainly isn’t a stretch to imagine being able to detect individuals by thermal spectrum within specific buildings. Or, to watch their patterns of life via satellite — though that could more easily be done by gaining access to their phone’s GPS and location data.
With the recent bill passed to allow companies to retain profits from space mining activities, improvements in these technologies could potentially help these companies scout asteroids and other celestial objects containing valuable elements. It might be tricky from the satellite positioning perspective, but would cut down on the exploration costs enormously if companies could make “sure bets.”
What are the risks? How a satellite constellation looks. A lot depends on getting satellites into orbit, at least to make this a huge opportunity. The successful Falcon 9 launch and re-landing helps mitigate those risks a bit, but that happened only weeks ago. So, to get more satellites into orbit, thus increasing not only the amount of imagery, but quality of imagery, you have to hope that SpaceX really has their stuff together and in a hurry. You actually probably need to hope that more than just SpaceX does rocket reuse successfully.
Satellite imagery, at least as it stands today, also isn’t that big of an industry. The satellite industry as a whole is a hefty market ($200 billion), particularly because of consumer communications and entertainment. But right now the Earth Observation (EO) market, which includes the satellite imagery portion, is still quite small.
Specifically, the EO market size is just about $2 billion today, which doesn’t leave a lot of room for new players to make a killing. DigitalGlobe and Esri, arguably the largest satellite imagery providers, only made about $650mm and $950mm in revenue in 2014, respectively. Some of the estimates, like from Northern Sky research, put the EO market hitting $3.5 billion in 2020, and $4.5 billion by 2024.
An alternative is betting that even if the imagery part doesn’t grow that quickly, better software and analytics still has the opportunity for significant growth. After all, these tools would help companies get a better bang for their buck when purchasing satellite imagery. But is it a 10x better bang for the buck than it stands today? That’s up for debate, and largely depends on use case. But that’s not the sort of “sure bet” most VCs like.
On the other hand, if the monetization of satellite imagery isn’t via the imagery or software itself, but via the resulting data streams, then there’s arguably less risk. If you’re just selling what would essentially be business intelligence, but collected from Earth’s orbit, you’d undoubtedly find additional interested customers due to the more immediate value proposition. However, companies pursuing this would probably have to control the whole chain — satellites, imagery, processing, etc. — to have differentiated and high-quality data streams, which requires a ton of capital to pursue. So VCs would need to clutch their talismans and hope the all-in bet pays off.
What’s the current scene? There are not too many startups specifically in the satellite-imagery arena, though there are a few more in the satellite and space category more generally (most notably SpaceX). The ones who are in what I’d call the “geospatial big data” arena are:
Analyze, Aquila Space, BlackSky Global, CartoDB, Descartes Labs, Iceye, MapBox, Planet Labs, Orbital Insight, Skybox Imaging (acquired by Google), Spire, TellusLabs, and UrtheCast. There are some sub-categories, like tracking weather and maritime conditions (Analyze, Spire), or mapping services (CartoDB, Mapbox). But for the most part there isn’t much overlap between the companies, other than at the highest level. You’ll see terms like “tracking,” “data streams,” and so forth, but they all self-describe quite differently.
The more notable VC funds that have funded some of these ventures are:
Accel Partners, Draper Fisher Jurvetson, Earlybird Venture Capital, Felicis Ventures, Founders Fund, Foundry Group, Lux Capital, Promus Ventures, Razors Edge Ventures, Rothenberg Ventures, and RRE Ventures. There are also a few larger companies that do provide either satellite imagery, GIS software, or geospatial database management systems, including:
Autodesk, Bentley Systems, DigitalEye, Esri, Exelis, Hexagon Geospatial, and Teradata. There are also a number of open source projects, from software to SDKs and libraries, that are released by non-profit organizations and universities. But they rarely have the same breadth of features, nor the number of capabilities, as the paid software.
Conclusion There’s a reason why I like using the term “space data” — this is really cool stuff. But, there are huge capital costs involved for a market that as of yet isn’t very big at all. Or, for companies that are improving just the software part, there’s a lot of reliance on third parties to provide the actual imagery.
Automation does seem like the most legitimate opportunity for a 10x improvement on what is available today, so that companies don’t need GIS experts in-house to still glean intelligence from satellite imagery. It seems like this software vertical is particularly behind in many of the infrastructure developments made in the past decade, so there’s certainly room for disruption just in that regard.
But, what are companies’ ongoing needs for satellite imagery? Many of the applicable industries suggest a per-project need rather than the sort of continuous need best met via SaaS. At the very least, the government is likely willing to throw some money towards better software, but relying on that revenue is unlikely to produce a blockbuster VC return.
The most viable proposition in my eyes is eliminating the need for companies to even touch satellite imagery, instead giving them just the information they need to know, i.e. the data stream approach. It feels like a truly modern way of approaching business and operational intelligence with a large potential audience. And hedge funds would probably eat it up.
My main hesitation here would be in vertical-specific needs, and to a lesser extent, in pricing. My gut feeling is that, at least in early days, the data received by customers would require a heavy level of customization based on their needs, making the business almost like a software and data-enabled consultancy (which is arguably working out for Palantir). And as a result, the pricing might still be prohibitive to many customers — not to mention the initial and ongoing costs of maintaining a constellation of satellites.
My prediction is that many of the software-only companies will remain quite small, while those pursuing the entire chain (like Planet Labs) have a good shot at a big long-term payoff (with bigger capital requirements, of course). DigitalGlobe itself only has a $1 billion market cap, with about $100mm cash, so they can’t just gobble up the new software companies. There’s always the chance a cash-rich tech giant like IBM or Facebook decides they’re interested in the space data game, too. Or perhaps I’m wrong and space mining comes sooner rather than later, with space data crucial for any level of success.
Asteroid mining — the not so distant future?
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/satellite-01.jpg" alt="Satellite observing Earth"></p>
<p><em>WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things, it’s also pretty useful.</em></p>
<h3 id="table-of-contents">Table of Contents:</h3>
<ol>
<li><a href="#so-what-is">So what is &ldquo;space data&rdquo; and &ldquo;satellite imagery&rdquo;?</a></li>
<li><a href="#applications">What are the applications?</a></li>
<li><a href="#adoption">What&rsquo;s hindering adoption?</a></li>
<li><a href="#who-cares">Who cares?</a></li>
<li><a href="#risks">What are the risks?</a></li>
<li><a href="#current-scene">What&rsquo;s the current scene?</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<hr>
<h2 id="a-nameso-what-isaso-what-is-space-data-and-satellite-imagery"><a name="so-what-is"></a>So what is &ldquo;space data&rdquo; and &ldquo;satellite imagery&rdquo;?</h2>
<p>I derive a lot of pleasure from calling it “space data,” but it is more accurately termed satellite or geospatial imagery, and the analysis thereof. This is all about image data collected by satellites about the Earth’s surface and the software that helps make these images useful to humans.</p>
<p>To get to the final, useful image, there are three main steps: launching a satellite into orbit, collecting the images of the Earth’s surface, then processing and analyzing the images. While image processing &amp; analysis, with the goal of gleaning intelligence from satellite imagery, is drawing the most VC funding at the moment, it’s important to understand each step to get both the opportunities and risks startups face within this sub-sector.</p>
<p>If you are more of a <a href="https://en.wikipedia.org/wiki/Wikipedia:Too_long;_didn%27t_read">tl;dr</a> person, you might want to skip down the page a bit to the next section, <a href="#applications">“What are the Applications?”</a> and continue from there.</p>
<h3 id="step-1-launch-to-orbit">Step 1: Launch to Orbit</h3>
<p><img src="/blog/img/satellite-02.jpg" alt="Landsat 8, an Earth Observation satellite operated by NASA"><em>Landsat 8, an Earth Observation satellite operated by NASA.</em></p>
<p>I’ll be focusing on remote sensing satellites, but there are also communications and weather satellites and reconnaissance (aka spy) satellites and maybe even satellites that can shoot lasers out of the sky (<a href="https://commons.wikimedia.org/wiki/File:Nothing_is_beyond_our_reach.svg">nothing is beyond their reach</a>, after all). First, let’s cover some basics about satellites.</p>
<p>Rockets are the things that take satellites on their journey into Earth’s orbit. Earth’s gravity is what keeps satellites in orbit, just like how gravity causes the Moon to orbit us. Satellites are fitted with gyroscopes, spinning discs whose angular momentum helps keep the satellites pointed on course (attitude control often gets an assist from magnetorquers, which push against Earth’s magnetic field). Solar panels power the satellites once they’re in their proper orbit, since they have nice access to the Sun’s rays.</p>
<p>There are three different types of orbit: geostationary, geosynchronous and polar (or sun synchronous). Geostationary means that the satellite is stationary relative to the Earth’s surface, parked way above a specific point on the Equator (way above = roughly 36k km); communication and weather satellites use this orbit. Geosynchronous is the broader category: one orbit takes one day, but the satellite doesn’t have to sit over the Equator, so it traces a path over the surface rather than hovering over a single point. Polar (aka sun synchronous) means the satellite orbits pole-to-pole at an altitude at which it will consistently pass over a given location at the same local time. This is what remote sensing satellites use, and they are typically much closer to the Earth’s surface than the communications satellites (only hundreds of km above).</p>
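<p>To put rough numbers on those altitudes, here’s a quick back-of-the-envelope sketch (mine, not from any vendor’s documentation) of how altitude maps to orbital period via Kepler’s third law:</p>
<pre><code># Back-of-the-envelope orbital periods from altitude, via Kepler's third law.
import math

MU_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000    # mean Earth radius, m

def orbital_period_minutes(altitude_km):
    """Approximate period of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km * 1000   # semi-major axis, m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

print(orbital_period_minutes(500))     # low Earth orbit: roughly 95 minutes
print(orbital_period_minutes(35_786))  # geostationary altitude: roughly 1,436 minutes (~24 hours)
</code></pre>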
<h3 id="step-2-data-collection">Step 2: Data Collection</h3>
<p>Objects on Earth reflect energy from the Sun. This energy is in different “bands,” typically visible, infrared and water vapor. Each object reflects a different amount of energy, giving them a “spectral signature.” “Remote sensing” is how satellites collect these reflections — and just means sensing the energy bands from a remote place (like the Earth’s orbit).</p>
<p><img src="/blog/img/energy-reflection.png" alt="How remote sensing works, diagram by Kelly Shortridge"><em>How remote sensing works (no animals were harmed in the making of this image).</em></p>
<p>Satellites carry sensors that allow them to do this remote sensing. Passive sensors just collect the reflections (radiation) that are emitted from Earth, using the Sun’s energy as its source of electromagnetic radiation. Active sensing systems carry their own source of electromagnetic radiation, which is directed to the Earth’s surface. Just like the passive sensing system, it then measures the amount of energy that is reflected back.</p>
<p>Different sensors are designed to measure different parts of the electromagnetic spectrum. For example, the 450nm - 495nm wavelength band would measure the visible color blue. The 570nm - 590nm wavelength band would measure the visible color yellow. Infrared as a whole spans roughly 700nm - 1mm, with near-infrared occupying only the ~700nm - 1,400nm slice of that range.</p>
<p><img src="/blog/img/EM-spectrum.png" alt="The Electromagnetic Spectrum"><em>The Electromagnetic Spectrum</em></p>
<p>Luckily, the electromagnetic spectrum is pretty handy for identifying objects. The way objects reflect back energy is consistent and tends to be unique. Water doesn’t reflect much visible or infrared energy, but vegetation strongly reflects infrared.</p>
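<p>That difference in reflectance is exactly what vegetation indices exploit. As a toy sketch (the band values below are invented), the classic NDVI calculation is just a normalized difference between the near-infrared and red bands:</p>
<pre><code># NDVI: vegetation reflects near-infrared strongly, so (NIR - Red) / (NIR + Red)
# is high over plants and near zero (or negative) over water. Values are invented.
import numpy as np

red = np.array([[0.10, 0.40],
                [0.08, 0.35]])
nir = np.array([[0.60, 0.45],
                [0.05, 0.40]])

ndvi = (nir - red) / (nir + red + 1e-9)  # tiny epsilon avoids division by zero
print(ndvi)  # values near 1 suggest healthy vegetation; near 0 or below, water or bare ground
</code></pre>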
<p>Satellites can even be used for things like detecting gravitational pull on the Earth’s surface. If you have two satellites, one leading and one following, when the lead passes over an area with high gravitational pull, it’ll start speeding up (and slow down over areas with weaker gravitational pull). NASA launched twin satellites in 2002 to do just that (GRACE mission). Being able to detect gravity helps map out areas below the Earth’s surface, from tunnels to potential oil wells.</p>
<p>Capturing lots of energy/wavelength bands creates what are called “multispectral images.” This means that the final image contains a few layers of images that each capture a different energy band. For example, one layer might capture the infrared band while another might capture the green band. You can even have superspectral and hyperspectral, but that just means there are lots and lots of layers with different energy bands. The more bands, the more opportunity for differentiation of objects on the ground. On the opposite side of the spectrum (pun intended), panchromatic images capture a single, broad band of energy, rendered in black and white (and shades of grey).</p>
<p>As the satellite orbits around the Earth, it looks at a small chunk of the planet at a time. Even if it goes around the Earth every few hours, it may capture less than 1% of the Earth’s surface on each pass. This small chunk is represented by a pixel, just like on your TV or computer monitor. For example, one pixel could represent 50x50m on Earth. Different types of sensors allow for different pixel sizes, typically from 5m to 1km — though some have pixel sizes down to the sub-meter scale (and tend to capture a smaller portion of the Earth as a result).</p>
<p>As the satellite orbits, it has to capture as many pixels as it can. However, even if pixel size is the same among different sensors, this doesn’t mean the resolution is the same as well. Higher resolution is indicated by a smaller physical measurement; for example, an image with a pixel size of 5m will look much clearer with a resolution of 5m vs. 20m.</p>
<p>According to one news report, in 2013 a spy satellite was launched in California capable of “snapping pictures detailed enough to distinguish the make and model of an automobile hundreds of miles below.” Given the length of a car is approximately 5m, let’s assume that the resolution needs to be 10% of that in order to see the necessary level of detail about the car, so 0.5m or smaller. And we’d likely assume that it isn’t just panchromatic since the government likely also wants to know the color of the car. This makes it quite a bit more powerful than most commercially available multispectral resolutions. As a frame of reference, DigitalGlobe’s satellite scheduled to be launched in September 2016, the WorldView-4, will have a multispectral resolution of 1.2m.</p>
<p>When reading about a satellite’s imagery capabilities, it’s crucial to understand that pixels and resolution are not the same thing. The difference between a pixel and resolution is that resolution is the size of the smallest detectable feature captured whereas a pixel is the size of the smallest physical area captured (or in other words, the smallest unit of the image).</p>
<p>There are also different types of resolution. Spatial resolution depends on what’s called the sensor’s “Instantaneous Field of View” (IFOV), which just means the visibility the sensor has at a particular altitude at a particular moment in time. Spectral resolution just means which parts of the electromagnetic spectrum the sensor can capture; higher spectral resolution indicates a narrower band of wavelengths captured. Radiometric resolution measures how many shades of grey there are between black and white, measured in bits; an 8-bit radiometric resolution means the sensor can measure 256 unique shades of gray. And temporal resolution is how long it takes for the satellite to image the same area again (its revisit time); for example, some satellites may capture the same area every 5 days, while for others it will be every 15 days.</p>
<p>Another important thing to keep in mind is that satellites are not necessarily taking photographs; these sensors are not cameras (though those spy satellites sometimes do use long-focus lenses). The mechanism is through sensing energy bands. Hence the industry lingo is remote sensing. Each image represents a lot of pixels arranged in rows and columns — think of it like putting a fragment of an image into 1,000 rows by 1,000 columns in Excel, and then zooming way out to see the full picture.</p>
<p><img src="/blog/img/pixel.png" alt="One pixel example"></p>
<p>To continue that analogy, that Excel file would then represent what’s called a “scene.” Scenes can represent over 2,000km on each side. These scenes are what most people buy since they want the “big picture,” not a tiny snippet of whatever is being observed.</p>
<p><img src="/blog/img/pixel-to-scene.png" alt="Showing how one pixel becomes a scene"></p>
<p>Again, this scene represents a physical object on the Earth’s surface. In our cat’s case, let’s say it’s just over a foot tall sitting upright, which is about 0.33m. In the scene, we can see the entire physical object (the cat) and features as small as its nose, so using its nose as a proxy for the smallest detectable feature, our resolution is approximately 0.01m — meaning we’re likely using a super secret spy satellite to spot Mr. Whiskers.</p>
<p><img src="/blog/img/pixel-and-resolution.png" alt="Showing how pixel and resolution relate"></p>
<p>Multispectral scenes can represent a large amount of data, often with tens of millions of bytes (10MB+) per scene. This is because the intensity of each pixel is stored as a single byte (an 8-bit digital number), and there are typically millions of pixels in any given scene. Recorded video can hog even more data; high quality video is a newer product offering — and typically no longer than two minutes. How video can be used to track objects on the ground is a longer discussion that mostly talks about error correction.</p>
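<p>To make that arithmetic concrete, here’s a toy scene-size calculation (the dimensions are illustrative rather than any particular sensor’s):</p>
<pre><code># Back-of-the-envelope scene size: pixels x bands x bytes per pixel.
# The dimensions here are illustrative, not tied to any particular satellite.
rows, cols = 4000, 4000    # a few million pixels per scene
bands = 4                  # e.g., blue, green, red, near-infrared
bytes_per_band = 1         # one 8-bit digital number per band

scene_bytes = rows * cols * bands * bytes_per_band
print(scene_bytes / 1_000_000, "MB")  # 64.0 MB, comfortably in the tens-of-MB range
</code></pre>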
<h3 id="step-3-image-processing--analysis">Step 3: Image Processing &amp; Analysis</h3>
<p>This is just where the fun begins! Now the imagery needs to be processed, in what’s novelly called “image processing.” This just means a human uses a computer to work with the images. Having higher spatial resolution and more data doesn’t mean you’ll necessarily get more information out of the images — and that’s why image processing is so important.</p>
<p>Why is there a human required, you might ask? While it’d be fantastic to have automated image analysis software that can intelligently scan images and highlight relevant objects or issues, that’s not the current state of things at all. Humans still sadly need to be involved to help with filtering and classification based on their project goals.</p>
<p><img src="/blog/img/satellite-03.png" alt="Satellite imagery of wildfires in Idaho"><em>Satellite imagery of wildfires in Idaho</em></p>
<p>Processing is part of image analysis, which requires special types of statistics and analytical methods that are tailored for spatial data — though primarily anchored around interpolation (predicting new, unknown data points within a group of some known data points). There’s an emphasis in geostatistics on being able to estimate how wrong the gaps filled in via prediction might be; things like elevation for 3D modeling are particularly tricky to determine aerially, so there is ripe opportunity for error. One example method is kriging, which helps predict the appropriate values in unobserved locations by using a weighted average of surrounding areas and estimating its accuracy.</p>
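<p>Kriging proper involves fitting a variogram and estimating how wrong each prediction might be, which is beyond a blog snippet, but the underlying “weighted average of nearby known points” idea can be sketched with plain inverse distance weighting (all values below are invented):</p>
<pre><code># Inverse distance weighting: a much-simplified stand-in for kriging's core idea of
# estimating an unobserved location as a weighted average of nearby known points.
# (Kriging additionally models spatial correlation and reports prediction error.)
import numpy as np

known_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # sampled locations
known_z = np.array([100.0, 120.0, 90.0, 110.0])                            # e.g., elevation (made up)

def idw(target_xy, power=2):
    """Weight closer known points more heavily when estimating the target."""
    dists = np.linalg.norm(known_xy - target_xy, axis=1)
    weights = 1.0 / (dists ** power + 1e-12)
    return float(np.sum(weights * known_z) / np.sum(weights))

print(idw(np.array([5.0, 5.0])))  # equidistant from all four points, so ~105.0
</code></pre>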
<p>For satellite imagery, there are software tools that have been developed with the specific use case of geospatial imagery processing and analysis in mind. These tools are called geographical information systems (GIS for short). The usual aim of image processing is to create an image that makes sense to a human — in essence, make the energy band sensing look more like a photograph. Or more simply put, “what the heck is in this spot?”</p>
<p>At a higher level, the end goal from all of this is information extraction. Even with basic mapping (like Google Maps), the point of the image is to gain some insight about stuff here on the ground. The type of information that is desired can be different between applications (which I’ll discuss later), and very rarely is available with just the basic image taken. Thus, humans need tools to help extract information from the images.</p>
<p>The typical chain of image processing is data import, image restoration &amp; rectification, image enhancement and information extraction. The methods for performing this processing chain have been shifting somewhat in recent years. Now, satellite data can be retrieved via APIs, and various processing tools are similarly made available via APIs. But, to understand some of the challenges, it’s important to walk through these steps.</p>
<p><img src="/blog/img/image-processing-chain.png" alt="Showing the steps in satellite image processing"></p>
<h4 id="data-import">Data Import</h4>
<p>While spy satellites might take actual film, eject it and have it intercepted by military aircraft, the process to transmit imagery back to Earth isn’t quite as cool for commercial satellites. As the satellites orbit around the Earth, they send data down (“downlinked”) via directional antennae to receiving stations on Earth. They can also receive instructions on what to capture, which are sent up from Earth (“uplinked”). These communications are conducted over the X-band, a specific frequency range.</p>
<p>The images the satellites take are generally compressed on-board, with their total storage nearing a terabyte. Some can even perform image fusion (which is discussed shortly) before transmission to help improve the image’s resolution.</p>
<p>There are various GIS data formats and different types of data that can be imported. Much like <a href="/blog/posts/wtfunding-industrial-manufacturing-analytics/">in my prior post</a>, a number of different vendors have their own data formats, and they are often proprietary, making a lot of big data analysis stuff a lot harder. There are also unique data formats for specific parts of imagery – such as the feature geometry, feature attributes, topology, etc.</p>
<p>Satellite imagery is commonly stored in digital numbers (DN), which means each pixel gets a value in 8-16 bits for a physical value (like color or temperature). This helps minimize the storage volume.</p>
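<p>As a hedged sketch of that rescaling (the gain and offset below are invented; each sensor publishes its own calibration coefficients), going from DNs back to a physical quantity is usually just a linear transform:</p>
<pre><code># Digital numbers (DN) are compact integers; a per-sensor gain and offset map them back
# to a physical quantity such as reflectance. The coefficients below are invented.
import numpy as np

dn = np.array([[0, 128, 255]], dtype=np.uint8)  # 8-bit digital numbers
gain, offset = 0.002, 0.05                      # hypothetical calibration coefficients

reflectance = gain * dn.astype(np.float64) + offset
print(reflectance)  # [[0.05  0.306 0.56 ]] in this made-up example
</code></pre>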
<p>There are a number of geospatial-specific database management systems (DBMS), which is software used to store GIS data to allow it to be later retrieved and modified – essentially the tool that helps organize data. This is actually an important part of image processing, since it allows for querying and comparisons across different pieces of data.</p>
<p>Most of the time, this data is stored in distributed systems, which just means it isn’t all in one database. Distributed databases optimize for scalability, which is why they are used here: geospatial data’s file sizes are typically large.</p>
<p>The relational database model is the one that is primarily used for GIS data. I’ll skip an explanation and discussion of the various database systems and types for now, but if you aren’t familiar with them, <a href="https://en.wikipedia.org/wiki/Database">here’s a link to the Wiki</a>.</p>
<p>Analysis techniques are used to create and define models of relationships between different data sets. The traditionally used technique is the entity-relationship model. In this, there are specific entities, like buildings, forests, and agricultural land, which have different attributes and “members” of the entities (like government, residential or religious buildings). The relationship part of entity-relationship just means members of these entities are compared; for example, you could compare different residential buildings in New York City to determine what attributes they have in common (perhaps they have rooftop gardens, are thinner, etc.).</p>
<img style="float: right;" src="/blog/img/relational-model.png" alt="relational database model"/>
<p>The relational model works well for this sort of data, since it allows for different datasets to be compared. The key (pun intended) here is that there are keys for each entry in the data sets that link the data sets between each other.</p>
<p>Using a relational model for databases allows the humans to make queries with the Structured Query Language (known best by its acronym, SQL). You can make a query such as: select all buildings with rooftops within 100km of Union Square in Manhattan. Each building will have a specific coordinate (i.e. spatial relation) and, as we’ll cover in a few paragraphs, can be marked as having a rooftop. Geographic search is the most important query out of these, and a large part of why there are geospatial-specific DBMS.</p>
<p>There are two types of data stored in these systems: spatial or attribute data. Spatial meaning “georeference” (i.e. location on Earth), and attribute meaning feature-related data (normally stored in tables). From the example above, the building coordinates are within the spatial data, and whether they have rooftops is within attribute data.</p>
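<p>As an illustration of that rooftop query, here’s roughly what it could look like against a PostGIS-flavored spatial database; the table, columns and connection string are all hypothetical:</p>
<pre><code># A hypothetical spatial query: the buildings table, its columns, and the connection
# string are invented for illustration, assuming a PostGIS-style database.
import psycopg2

conn = psycopg2.connect("dbname=gis_example")  # hypothetical database
cur = conn.cursor()

# Union Square, Manhattan sits at roughly longitude -73.99, latitude 40.74.
cur.execute(
    """
    SELECT name
    FROM buildings                      -- attribute data: name, has_rooftop, ...
    WHERE has_rooftop = TRUE
      AND ST_DWithin(                   -- spatial data: the geometry column
            geom::geography,
            ST_SetSRID(ST_MakePoint(-73.99, 40.74), 4326)::geography,
            100000)                     -- 100km, expressed in meters
    """
)
print(cur.fetchall())
</code></pre>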
<p>Spatial data has two further subcategories: vector and raster. Vector data deals with geometry in a few different ways. Think 2D, like boundaries of forests, or 1D (lines), like following the path of a river. You can also have specific spatial points that represent a particular item, and they are technically 0D. Raster data represents surfaces, but not just as far as 3D things like elevation; it can also show things like population density or temperature.</p>
<p>Raster data is where our spectral data comes in, along with the actual imagery and topography. Each cell (remember, this means pixel) gets assigned a specific value based on its primary feature. For example, one cell may have vegetation assigned to it, while another may have water.</p>
<p>GIS software will link all this data and create these models with a specific structure. Typically, it will start with geometrical information, then add topological and finally thematic (the raster data). This order is logical since geometrical represents the physical form and position of objects, topological is about relational position (intersections of objects), and thematic adds in the detail about objects (typically in layers).</p>
<p>There’s some trickiness in importing and managing this data, however. The data needs to be both accurate, meaning the map needs to match real world values, and precise, meaning described as exactly as possible. The distinction is important; you can have a map perfectly overlaid on real coordinates (accurate), but showing “this area is green” rather than a breakdown by type of vegetation (imprecise). Similarly, you could have detail down to different types of weeds (precise), but your coordinates are a mile off (inaccurate).</p>
<p>I’ll get into more of the challenges later, but it’s important to touch on how these errors happen, most of which are during processing. Formatting data can cause scaling to change, the data can be outdated, or there could even be an errant sensor. There are issues even with positional accuracy as far as non-land things go — it’s a lot easier to accurately determine the boundaries of a lake than the boundaries of population density.</p>
<p>There will also be labeling errors, whether by humans or automated processing. If I saw an image showing there was a Magnolia tree forest in some region of China, I’m unlikely to know if that is the correct type of tree in the forest or not.</p>
<p>These errors can snowball and ultimately make the entire analysis worthless, which is why they are such a big deal. A mining company starting a drilling project for gold in a specific spot will be none too pleased when the imagery they used was actually kilometers off, or showed the wrong type of mineral.</p>
<p>Some can be mitigated by supervised classification, in which a human selects an area of land they do know a lot about so that the software can then classify other areas accordingly. But that’s an inefficient and time-consuming process that requires domain expertise — so, far from ideal.</p>
<p>There are new sorts of structures being developed to improve object retrieval from satellite imagery databases. Some involve automatically extracting objects from imagery, then encoding their descriptors into much smaller (&lt;1% of original imagery) sizes. This allows for very fast retrieval by object shapes, which seems like a nice improvement.</p>
<h4 id="image-restoration--rectification">Image Restoration &amp; Rectification</h4>
<p>Processing might start with getting true color on the images (i.e. making it look like a photograph, with blue water, green forests, etc.). You might be familiar with RGB values, with (255, 255, 255) being white, which represent an 8-bit color range per channel. Sometimes it’s better to have false color, which just means using unnatural colors to help highlight differences between energy bands. If your goal is to see levels of vegetation, then you may prefer a false-color composite (FCC) that lets the infrared bands really pop.</p>
<p><img src="/blog/img/satellite-04.png" alt="False color satellite imagery"><em>False color imagery</em></p>
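<p>Here’s a minimal sketch of building a false-color composite, with invented band values, where near-infrared gets mapped to the red display channel so that vegetation pops:</p>
<pre><code># False-color composite: show near-infrared as red (and red/green bands as green/blue),
# so vegetation, which reflects NIR strongly, renders bright red. Values are invented.
import numpy as np

nir   = np.array([[0.7, 0.2], [0.6, 0.1]])
red   = np.array([[0.2, 0.3], [0.2, 0.1]])
green = np.array([[0.3, 0.3], [0.3, 0.2]])

false_color = np.dstack([nir, red, green])            # shape (2, 2, 3): a displayable RGB image
false_color_8bit = (false_color * 255).astype("uint8")
print(false_color_8bit[0, 0])  # a vegetated pixel comes out reddish: [178  51  76]
</code></pre>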
<p>After that, you might want to make sure your image is accurate, or what is called “georectification.” You can use what are called ground control points (GCP), which just means using the coordinates of known locations on a map in order to make sure the image’s coordinates map the real physical location. There’s also “orthorectification,” which removes issues of scale by accounting for different tilts and terrains — the more diverse the Earth’s surface is, the more likely there will be distortions in the image.</p>
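<p>As a toy sketch of the math behind simple georectification (the coordinates below are fabricated), you can fit an affine transform from pixel coordinates to map coordinates using a handful of GCPs:</p>
<pre><code># Fit an affine transform (pixel coordinates to map coordinates) from ground control
# points: the basic idea behind georectification. All coordinates are fabricated.
import numpy as np

pixel_xy = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)  # image coords
map_xy   = np.array([[500000, 4500000], [510000, 4500000],
                     [500000, 4490000], [510000, 4490000]], dtype=float)        # e.g., UTM meters

# Solve map = [px, py, 1] @ coeffs for each GCP, in a least-squares sense.
design = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])
coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)

def pixel_to_map(px, py):
    return np.array([px, py, 1.0]) @ coeffs

print(pixel_to_map(500, 500))  # the center pixel lands at about [505000, 4495000]
</code></pre>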
<p>There are a few different ways images can be distorted or have irregularities, and thus different techniques to help restore them. One such technique is resampling, in which a pixel gets assigned a DN based on the DN’s of its neighbor pixels. Another is radiometric pre-processing, in which corrections are made to handle noise or irregularities generated from the sensors so that only the actual reflected radiation shows on the image.</p>
<p>Since satellites can’t control the weather (and weather modification is banned), there’s often the need for atmospheric correction, such as cloud removal. Since remote sensing is based on how sunlight reflects off objects, there can also be variations in the angle of the sun that need to be taken into account.</p>
<h4 id="image-enhancement">Image Enhancement</h4>
<p>The image enhancement phase aims to improve the quality of the imagery. Some enhancements are familiar due to their use in photography, such as contrast enhancement to help highlight the differences within an image. Spatial filtering involves directly manipulating pixels for some effect. If you’ve ever played around with filters in Photoshop, or even on your phone, you get the picture. These can include image sharpening or softening, embossing, etc.</p>
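<p>As a small illustration (with a made-up single-band image), spatial filtering really is just convolving a kernel over the pixels; this one sharpens by boosting each pixel relative to its neighbors:</p>
<pre><code># Spatial filtering as convolution: a sharpening kernel applied to a made-up image.
import numpy as np
from scipy.ndimage import convolve

image = np.array([[10, 10, 10, 10],
                  [10, 50, 50, 10],
                  [10, 50, 50, 10],
                  [10, 10, 10, 10]], dtype=float)

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

print(convolve(image, sharpen, mode="nearest"))  # edges of the bright block get exaggerated
</code></pre>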
<p><img src="/blog/img/satellite-imagery-fusion.png" alt="Satellite image fusion diagram by Kelly Shortridge"></p>
<p>A common technique for enhancement is image fusion, which seeks to create a single, more detailed image out of multiple images. It’s also one of the ways that the tradeoff between spatial and spectral resolution can be solved.</p>
<p>There are a few different levels at which image fusion can be performed. First is at the pixel level, comparing pixels in different images to figure out how to pack in more detail into a pixel. Second is at the feature level, comparing sizes, lengths, shapes, etc. of the same geographic area and using statistics to combine the highest-intensity features out of different images. Third is object-level, the highest-level type of image fusion, in which images are processed separately and then combined using fancy algorithms to help maximize intensity.</p>
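<p>One classic pixel-level fusion trick is the Brovey transform, which redistributes a high-resolution panchromatic band across the color bands. Here’s a toy sketch with invented values, assuming the multispectral bands have already been resampled onto the panchromatic pixel grid:</p>
<pre><code># Brovey-style pan-sharpening: scale each multispectral band by the ratio of the
# panchromatic band to the bands' sum. All values are invented, and the bands are
# assumed to already share the panchromatic band's pixel grid.
import numpy as np

red, green, blue = np.array([[0.2]]), np.array([[0.3]]), np.array([[0.1]])
pan = np.array([[0.9]])  # the higher-resolution panchromatic intensity

total = red + green + blue + 1e-9
fused = [band * pan / total for band in (red, green, blue)]
print([round(b.item(), 3) for b in fused])  # [0.3, 0.45, 0.15]: same hue, pan-derived brightness
</code></pre>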
<p>There are limitations of image fusion, such as color distortion and poor quality when dealing with high resolution images, but apparently these problems are expected to be alleviated as technology improves.</p>
<h4 id="information-extraction">Information Extraction</h4>
<p>Image classification is the largest part of information extraction, and means each pixel (or, more recently, object) in an image is categorized. This is important to distinguish different objects within an image and ultimately extract information from the image. For example, if you are measuring how quickly a city is expanding, you’ll want to be able to classify buildings, or even particular types of buildings.</p>
<p>Classification typically involves the computer (or software tool) automatically distinguishing different types of objects — like water, grass, urban areas, forests, etc. These tools aren’t perfect, not only as a function of pixel size but also because the bands of energy that objects emit may be too similar to properly distinguish them.</p>
<p>For pixels, there’s unsupervised and supervised classification — if you’re familiar with machine learning, you’ll already get what that means. The shortest difference is that unsupervised classification involves examining unknown pixels in an image, while supervised means examining known pixels.</p>
<p>Unsupervised classification first groups pixels purely by their spectral similarity, then compares the resulting groups with reference data to figure out what category each one actually represents. It still involves manual work, with the user having to choose how many clusters, or groups with similar properties, to generate, and then match clusters with classes. It’s arguably more accurate than supervised classification when good training data is scarce, but it’s also more tedious due to that manual cluster-labeling step.</p>
<p>Supervised classification will take the known data in an image, compare it with reference data and use it to extrapolate categories for the unknown parts of the image. The process is typically “training” the classification engine on sample imagery, selecting specific features, applying the right algorithm, then determining how well it worked or not.</p>
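<p>Here’s a toy contrast between the two approaches using scikit-learn; the “pixels” below are random numbers, whereas real work would use actual band values and ground-truthed training samples:</p>
<pre><code># Toy contrast between unsupervised and supervised pixel classification.
# The features are random stand-ins for per-pixel band values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pixels = rng.random((1000, 4))             # 1,000 pixels x 4 spectral bands

# Unsupervised: group pixels into spectral clusters; a human then labels the clusters.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
print(np.bincount(clusters))               # how many pixels fell into each cluster

# Supervised: train on pixels with known labels, then classify the rest.
labels = (pixels[:, 3] > 0.5).astype(int)  # pretend band 4 cleanly separates water from land
model = RandomForestClassifier(random_state=0).fit(pixels[:500], labels[:500])
print(model.score(pixels[500:], labels[500:]))  # accuracy on the held-out pixels
</code></pre>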
<p>Some issues with classification are similar to those in machine learning — supervised classification needs reliable reference data and strong training samples in order to work well, which is why unsupervised is often preferred when those are lacking.</p>
<p>There’s also object-oriented image classification, or “multi-resolution segmentation,” which is a non-traditional approach (meaning it’s only come into use in the past decade or so). As the name suggests, it creates objects by grouping pixels rather than classifying individual pixels. The resulting objects have different shapes and scales, and thus can be classified more flexibly using different image layers (e.g. population density, infrared, elevation, etc.). The user is still doing supervised classification using samples and fancy algorithms, but with more accuracy when dealing with objects vs. individual pixels.</p>
<p><img src="/blog/img/satellite-05.jpg" alt="Example of how object-based image analysis works."><em>Example of how object-based image analysis works.</em></p>
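<p>Here’s a minimal sketch of the “group pixels into objects first” idea, using a generic segmentation algorithm (SLIC) from scikit-image as a stand-in for the fancier multi-resolution segmentation in commercial tools; the image is random noise, purely for illustration:</p>
<pre><code># Object-based analysis starts by segmenting the image into pixel groups ("objects"),
# which then get classified as units. SLIC is a generic stand-in segmentation here,
# and the "scene" is just random noise.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((100, 100, 3))  # pretend 3-band scene

segments = slic(image, n_segments=50, compactness=10, start_label=1)
print(segments.shape, segments.max())  # one label per pixel, grouped into ~50 objects
</code></pre>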
<p>The general rule of thumb is that object-oriented classification is best for higher spatial resolution, since objects might consist of multiple pixels, and the other methods work fine for lower resolution (in which objects are just a pixel). Of course, as spatial resolution improves, this means that object-oriented classification might be increasingly adopted in kind.</p>
<p>The type of algorithm matters, too. For example, a highly tailored algorithm might eliminate any false classifications due to shadows by incorporating into its model the position of the sun and relevant ground elevations in the area based on the image’s location and time.</p>
<p>At the forefront of research are different automation techniques to help extract features. Methods leveraging machine vision are one example, as well as methodologies that allow for more variables for classification while maintaining a high level of accuracy (90%+). It’ll likely take a few years for commercially available products to catch up to the research (along with bugs that come out when scaling to product-level use), but highly accurate automation within 5 years doesn’t seem preposterous.</p>
<p>Once features are classified, information can be extracted for its desired purpose. Which leads to the various applications of geospatial analysis.</p>
<hr>
<h2 id="a-nameapplicationsawhat-are-the-applications"><a name="applications"></a>What are the applications?</h2>
<p>There are a bunch of industries that benefit from using satellite-based imagery — particularly for anything in which physical trends over time are needed, or where you want to see what’s below the Earth’s surface. The number of applications is expanding as imaging capability improves, since higher resolution images provide a more granular view of what’s happening on Earth.</p>
<p>Even though the cost of purchasing multispectral imagery can be high in absolute dollar terms, relative to the cost of physical exploration it is inexpensive. But for non-profit uses, or applications without this high cost of physical capital on the line, the reward isn’t necessarily as high.</p>
<p>Also, assume for any of the following applications, traders can use similar information to inform their financial bets. For example, if satellite imagery suggests that the rate of construction in China is slowing down, they might short construction materials firms or commodities as a result. Of course, this has some intriguing implications for the efficient-market hypothesis, if investors have information on a company’s operations that even the company itself might not possess.</p>
<p>The government has a variety of applications for geospatial imagery, and has been leveraging it as a source of intelligence for half a century. But, I’ll just be focusing on applications within commercial industries.</p>
<h3 id="current-applications">Current Applications</h3>
<h4 id="agriculture">Agriculture</h4>
<p>It can be hard to measure agricultural trends on the ground, so satellite imagery is immensely helpful in assessing crop health and yields, environmental changes and trends pertaining to livestock. Even when planning and maintaining agricultural sites, this imagery can map irrigation and analyze soil — even showing variations in soil’s organic matter.</p>
<p><img src="/blog/img/satellite-06.jpg" alt="Imagery highlighting irrigation"><em>Imagery highlighting irrigation.</em></p>
<p>Aside from optimizing costs and boosting productivity at large agricultural companies, there’s a general global need for improved agricultural production and better utilization of resources. Having a better sense of what and where these resources are to improve their management has significant benefits on a macro scale.</p>
<h4 id="engineering--construction">Engineering &amp; Construction</h4>
<p>Along with companies in the mining and oil &amp; gas industries, engineering &amp; construction companies have high capital costs relating to physical projects. So, geospatial imagery can help these companies visualize their projects, not just for evaluating and planning construction sites, but also for maintaining them. This helps reduce construction costs and also minimize environmental impact.</p>
<p><img src="/blog/img/satellite-07.jpg" alt="Digital elevation model of a construction site"><em>Digital elevation model of a construction site.</em></p>
<p>Being able to model construction sites in 3D is crucial for planning purposes, but also ensuring ongoing safety. And for certain project types, like airstrips, dams, power plants and sewers, you need data beyond just the visual. For example, when building an airport, not only do you need to make sure the terrain is appropriate for an airstrip, but also have 3D models for flight simulation to make sure pilots aren’t going to run into recurring issues.</p>
<h4 id="environmental-monitoring">Environmental Monitoring</h4>
<p>On the “save the world” side of things, environmental monitoring helps assess damage from natural disasters as well as help manage natural resources. Governments can use satellite imagery to help develop disaster response plans, as well as improve environmental planning and conservation.</p>
<p><img src="/blog/img/satellite-08.jpg" alt="Imagery highlighting deforestation"><em>Imagery highlighting deforestation.</em></p>
<p>Being able to see high-level trends, like deforestation, is helpful to monitor local environmental health but even more so to evaluate potential long-term impacts. After all, trees don’t grow back overnight, so excessive “forest farming” can have devastating effects on future generations’ economic wellbeing. Not to mention being a harbinger of global climate change.</p>
<h4 id="logistics-shipping--maritime">Logistics (Shipping &amp; Maritime)</h4>
<p>Logistics and shipping companies, port operators, fishers, trade organizations and governments all have an interest in geospatial imagery relating to maritime and weather patterns. On the pure logistics side, being able to track ships in transit is highly useful, as tracking systems can fail when far enough away from ports. Weather patterns and other spatial data (like terrain mapping) can also help optimize shipping routes.</p>
<p><img src="/blog/img/satellite-09.png" alt="Search for MH370 by satellites"><em>Search for MH370; odd given the number of global recon satellites that it’s still missing.</em></p>
<p>Being able to monitor trading, spot illegal fishing or piracy, and help with search and rescue missions is of particular importance from a global trade perspective. Even the “little guy” can win — local fishers and fisheries are often put out of business by illegal fishing, which is more widespread than you might think.</p>
<h4 id="mining">Mining</h4>
<p>Multispectral satellite imagery has the ability to differentiate between different types of rocks, vegetation and soil, which helps mining and geology projects in a few different ways.</p>
<p><img src="/blog/img/satellite-10.jpg" alt="Imagery optimized to show rare earth elements"><em>Imagery optimized to show rare earth elements.</em></p>
<p>First and most obviously, this imagery can help identify clays, oxides and soils for mineral mapping and exploration. This is in contrast to most humans, who would walk to the location and say, “yep, that looks like ground.” All the different energy bands will show both different types of rocks and elements as well as structural aspects of the Earth’s surface that may influence ease of mining.</p>
<p>Second, it helps plan out mining projects. Digging into the ground isn’t the only challenge; mining companies also have to worry about how to get access to the mine and what infrastructure would be required to support the project. And, they also need to estimate what sort of impact the project will have on the surrounding area from a human and environmental perspective.</p>
<h4 id="oil--gas">Oil &amp; Gas</h4>
<p>Satellite imagery can help oil and gas companies reduce risk in oil exploration as well as monitor ongoing projects. The level of detail is pretty impressive, from generally detecting areas that are most productive down to even detecting seismic lines or offshore oil seepage.</p>
<p><img src="/blog/img/satellite-11.jpg" alt="Deepwater Horizon oil spill by satellite"><em>The Deepwater Horizon spill being just a bit more than seepage.</em></p>
<p>Not only does it help find areas most likely to be rich with oil, but it also helps these companies assess the potential costs and pitfalls associated with drilling in a particular area. For example, satellite imagery shows which areas have rock formations, heavy forest coverage, unfavorable weather conditions and whether they are in more remote or developed locations.</p>
<h3 id="future-applications">Future Applications</h3>
<p>In the next section I’ll talk about some of the challenges that have hindered adoption to date, but if geospatial imagery becomes more widely available and easier to leverage for business and operational intelligence, other industries may become customers in addition to those above.</p>
<p>One potential area is physical retail. A super cool application might be looking at the surrounding area and weather patterns of store locations to see what types of goods might resonate best with local customers. For example, imagery could show the levels and types of vegetation in nearby residential areas to see if stocking more garden supplies makes sense. If imagery can be updated quickly enough, retail companies could see how many cars are at a given location in order to estimate growth or decline. They could also plan new locations based on factors like accessibility or even locations that have lots of cars parked at their competitors’ stores.</p>
<p>In that vein, real estate is another potential application area. Much like for construction projects, real estate developers can improve the planning of their projects by optimizing for residential appeal — whether by accessibility, proximity to natural spaces or avoiding high-risk zones. And the same goes for city and urban planners.</p>
<p>The advertising industry could leverage different types of data towards better ad targeting. Someone like Facebook could use satellite imagery to generate a wealth of data about a user’s specific location, which they can then provide as part of their user targeting suite for their customers. This could include the example above of measuring vegetation in residential areas to advertise garden supplies, or knowing proximity to mountains and trails to advertise hiking gear or mountain bikes.</p>
<p>As I’ll discuss a bit later, there’s also the potential that space data startups generate and sell intelligence directly to end customers, which could open up an even wider set of potential applications.</p>
<hr>
<h2 id="a-nameadoptionawhats-hindering-adoption"><a name="adoption"></a>What&rsquo;s hindering adoption?</h2>
<p>There isn’t necessarily one thing hindering adoption of geospatial imagery and intelligence. It’s a combination of availability, costs, latency, quality and usability. All these issues in conjunction mean there’s a barrier for many commercial enterprises to using geospatial data to their advantage.</p>
<p>Getting satellites into orbit so there is more imagery available is step one. The goal of many of these imagery companies is to have a constellation of satellites in orbit to allow for daily imaging of the whole planet. Launching these satellites into orbit is currently expensive, and ups the cost of the end imagery (which thereby reduces the potential customer set). So, a lot depends on SpaceX’s (and others’) ability to cut down on the cost of satellite launches. The recent successful Falcon 9 launch and landing will very likely pave the way for rocket reuse, which will help bring down these costs substantially.</p>
<p><img src="/blog/img/rocketlaunch.gif" alt="Cinemagraph of the American flag waving while a rocket launches"><em>Obligatory cinemagraph in the name of ‘Murica.</em></p>
<p>The delivery of imagery is historically quite slow as well. Not only do satellites capture a small part of the Earth at a time, but there’s also the issue of sending down large file sizes over transmission speeds that are just in the hundreds of MB per second range. Assuming there’s no pre-processing before the customer receives the image, the customer still has to download the image for themselves, which takes time…and any processing work needed only adds to that time. This is starting to change, as images are increasingly available online and some images are pre-processed, saving customers from having to do the image processing themselves.</p>
<p>Of course, images will only realistically be “near real-time,” given the transmission delay. But getting down to a matter of minutes, or even hours, is an improvement over the traditional daily or longer wait times. Faster transmission speeds could help improve the speed at which images are received as well.</p>
<p>Launching a satellite into space is no cheap feat, not to mention costs of ongoing operations, resulting in imagery pricing that is quite expensive. Pricing can range from $20 to $25 per square km, and there are often minimum order sizes of 25 square km a pop (meaning $500+).</p>
<p>On the satellite design side, more development is needed in the miniaturization of components. For example, Planet Labs’ satellites are cutely described as “baguette”-size, and that’s the general trend — 172 satellites weighing 100kg (~220lbs) or less were launched in 2014. There are also sensor-related challenges, most of which can’t be remediated at the source, putting more onus on the image processing part of the chain. There are multiple tradeoffs within sensors that affect quality: spectral resolution vs. signal to noise ratio (SNR), radiometric resolution vs. SNR, data size vs. spatial resolution, and spatial resolution vs. spectral resolution.</p>
<p>So, there’s a long way to go with image processing software as well, particularly as it pertains to information extraction. Better automation seems to be the path forward towards improving this software, though that isn’t particularly easy, either. It isn’t surprising that automation is perhaps the biggest area of focus among many of the startups in the field. The automation is primarily in the pre-processing (rectification and restoration phase), but also through easier integration (API all the things).</p>
<p>I wasn’t able to find hard data to back this up, but after looking at a bunch of traditional GIS software, it has the GUI sophistication of Minesweeper from Windows 95. While I’m sure for users familiar with these interfaces it makes sense and works fine, I can’t help but imagine that a more intuitive and “typical user”-friendly UX might allow for more widespread adoption.</p>
<hr>
<h2 id="a-namewho-caresawho-cares"><a name="who-cares"></a>Who cares?</h2>
<p>The government has cared a lot for a long time, and I’d have to assume they’d be a little nervous about a bunch of new satellites being sent into orbit that may risk having spy satellites uncovered. But, they would also be able to benefit from innovations, particularly on the software side, that are spurred by greater commercial adoption. Though based on how homely most government-facing software looks, maybe government analysts would disapprove of UI improvements.</p>
<p><img src="/blog/img/satellite-12.jpg" alt="Satellite imagery via the CIA of Osama bin Ladin’s compound."><em>Satellite imagery via the CIA of Osama bin Ladin’s compound.</em></p>
<p>Any of the commercial industries from earlier might care, as it can help them cut costs, curtail risks and arguably even improve revenues. So, they care to the extent that better satellite imagery and analysis can help them optimize their business, but the degree to which it does may vary. I’d imagine it’s a “nice to have,” maybe even “would love to have,” but not a “necessary to have” in most of these cases.</p>
<p>As described above, there are a lot of “save the world” use cases that could legitimately help improve the environment and even potentially human rights. But generally those budgets are much thinner than for-profit industries.</p>
<p>On the darker side of things, there’s the potential for invasion of privacy. Currently this pertains to sub-orbital, high altitude aircraft rather than satellites (as far as we, the unknowing public, know), but it certainly isn’t a stretch to imagine being able to detect individuals by thermal spectrum within specific buildings. Or, to watch their patterns of life via satellite — though that could more easily be done by gaining access to their phone’s GPS and location data.</p>
<p>With the <a href="https://www.washingtonpost.com/news/the-switch/wp/2015/05/22/the-house-just-passed-a-bill-about-space-mining-the-future-is-here/">recent bill passed to allow companies to retain profits from space mining activities</a>, improvements in these technologies could potentially help these companies scout asteroids and other celestial objects containing valuable elements. It might be tricky from the satellite positioning perspective, but would cut down on the exploration costs enormously if companies could make “sure bets.”</p>
<hr>
<h2 id="a-namerisksawhat-are-the-risks"><a name="risks"></a>What are the risks?</h2>
<p><img src="/blog/img/satellite-13.jpg" alt="How a satellite constellation looks"><em>How a satellite constellation looks.</em>
A lot depends on getting satellites into orbit, at least to make this a huge opportunity. The successful Falcon 9 launch and re-landing helps mitigate those risks a bit, but that happened only weeks ago. So, to get more satellites into orbit, thus increasing not only the amount of imagery but also its quality, you have to hope that SpaceX really has their stuff together and in a hurry. You actually probably need to hope that more than just SpaceX does rocket reuse successfully.</p>
<p>Satellite imagery, at least as it stands today, also isn’t that big of an industry. The satellite industry as a whole is a hefty market ($200 billion), particularly because of consumer communications and entertainment. But right now the Earth Observation (EO) market, which includes the satellite imagery portion, is still quite small.</p>
<p>Specifically, the EO market size is just about $2 billion today, which doesn’t leave a lot of room for new players to make a killing. DigitalGlobe and Esri, arguably the largest satellite imagery providers, only made about $650mm and $950mm in revenue in 2014, respectively. Some of the estimates, like from Northern Sky Research, put the EO market hitting $3.5 billion in 2020, and $4.5 billion by 2024.</p>
<p>An alternative is betting that even if the imagery part doesn’t grow that quickly, better software and analytics still has the opportunity for significant growth. After all, these tools would help companies get a better bang for their buck when purchasing satellite imagery. But is it a 10x better bang for the buck than it stands today? That’s up for debate, and largely depends on use case. But that’s not the sort of “sure bet” most VCs like.</p>
<p>On the other hand, if the monetization of satellite imagery isn’t via the imagery or software itself, but via the resulting data streams, then there’s arguably less risk. If you’re just selling what would essentially be business intelligence, but collected from Earth’s orbit, you’d undoubtedly find additional interested customers due to the more immediate value proposition. However, companies pursuing this would probably have to control the whole chain — satellites, imagery, processing, etc. — to have differentiated and high-quality data streams, which requires a ton of capital to pursue. So VCs would need to clutch their talismans and hope the all-in bet pays off.</p>
<hr>
<h2 id="a-namecurrent-sceneawhats-the-current-scene"><a name="current-scene"></a>What&rsquo;s the current scene?</h2>
<p>There are not too many startups specifically in the satellite-imagery arena, though there are a few more in the satellite and space category more generally (most notably SpaceX). The ones who are in what I’d call the “geospatial big data” arena are:</p>
<ul>
<li>Analyze</li>
<li>Aquila Space</li>
<li>BlackSky Global</li>
<li>CartoDB</li>
<li>Descartes Labs</li>
<li>Iceye</li>
<li>MapBox</li>
<li>Planet Labs</li>
<li>Orbital Insight</li>
<li>Skybox Imaging (acquired by Google)</li>
<li>Spire</li>
<li>TellusLabs</li>
<li>UrtheCast</li>
</ul>
<p>There are some sub-categories, like tracking weather and maritime conditions (Analyze, Spire), or mapping services (CartoDB, Mapbox). But for the most part there isn’t much overlap between the companies, other than at the highest level. You’ll see terms like “tracking,” “data streams,” and so forth, but they all self-describe quite differently.</p>
<p>The more notable VC funds that have funded some of these ventures are:</p>
<ul>
<li>Accel Partners</li>
<li>Draper Fisher Jurvetson</li>
<li>Earlybird Venture Capital</li>
<li>Felicis Ventures</li>
<li>Founders Fund</li>
<li>Foundry Group</li>
<li>Lux Capital</li>
<li>Promus Ventures</li>
<li>Razors Edge Ventures</li>
<li>Rothenberg Ventures</li>
<li>RRE Ventures</li>
</ul>
<p>There are also a few larger companies that do provide either satellite imagery, GIS software, or geospatial database management systems, including:</p>
<ul>
<li>Autodesk</li>
<li>Bentley Systems</li>
<li>DigitalGlobe</li>
<li>Esri</li>
<li>Exelis</li>
<li>Hexagon Geospatial</li>
<li>Teradata</li>
</ul>
<p>There are also a number of open source projects, from software to SDKs and libraries, that are released by non-profit organizations and universities. But they rarely have the same breadth of features, or the same number of capabilities, as the paid software.</p>
<hr>
<h2 id="a-nameconclusionaconclusion"><a name="conclusion"></a>Conclusion</h2>
<p><img src="/blog/img/satellite-14.jpg" alt="Satellite looking over Earth at night">
There’s a reason why I like using the term “space data” — this is really cool stuff. But, there are huge capital costs involved for a market that as of yet isn’t very big at all. Or, for companies that are improving just the software part, there’s a lot of reliance on third parties to provide the actual imagery.</p>
<p>Automation does seem like the most legitimate opportunity for a 10x improvement on what is available today, so that companies don’t need GIS experts in-house to still glean intelligence from satellite imagery. It seems like this software vertical is particularly behind in many of the infrastructure developments made in the past decade, so there’s certainly room for disruption just in that regard.</p>
<p>But, what are companies’ ongoing needs for satellite imagery? Many of the applicable industries suggest a per-project need rather than the sort of continuous need best met via SaaS. At the very least, the government is likely willing to throw some money towards better software, but relying on that revenue is unlikely to produce a blockbuster VC return.</p>
<p>The most viable proposition in my eyes is eliminating the need for companies to even touch satellite imagery, instead giving them just the information they need to know, i.e. the data stream approach. It feels like a truly modern way of approaching business and operational intelligence with a large potential audience. And hedge funds would probably eat it up.</p>
<p>My main hesitation here would be in vertical-specific needs, and to a lesser extent, in pricing. My gut feeling is that, at least in early days, the data received by customers would require a heavy level of customization based on their needs, making the business almost like a software and data-enabled consultancy (which is arguably working out for Palantir). And as a result, the pricing might still be prohibitive to many customers — not to mention the initial and ongoing costs of maintaining a constellation of satellites.</p>
<p>My prediction is that many of the software-only companies will remain quite small, while those pursuing the entire chain (like Planet Labs) have a good shot at a big long-term payoff (with bigger capital requirements, of course). DigitalGlobe itself only has a $1 billion market cap, with about $100mm cash, so they can’t just gobble up the new software companies. There’s always the chance a cash-rich tech giant like IBM or Facebook decides they’re interested in the space data game, too. Or perhaps I’m wrong and space mining comes sooner rather than later, with space data crucial for any level of success.</p>
<p><img src="/blog/img/satellite-15.jpg" alt="Rendering of asteroid mining"><em>Asteroid mining — the not so distant future?</em></p>
]]></atom:content>
        </item>
        
        <item>
            <title>WTFunding: Industrial / Manufacturing Analytics</title>
            <link>https://kellyshortridge.com/blog/posts/wtfunding-industrial-manufacturing-analytics/</link>
            <pubDate>Tue, 13 Oct 2015 17:24:07 -0400</pubDate>
            
            <guid>https://kellyshortridge.com/blog/posts/wtfunding-industrial-manufacturing-analytics/</guid>
            <description>
WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things, it’s also pretty useful.
Table of Contents: So what is “industrial / manufacturing analytics”? What are the applications? What’s hindering adoption? Who cares? What are the risks? What’s the current scene? Final thoughts So what is “industrial / manufacturing analytics”? There doesn’t seem to be a great catch-all term for it yet, but there are a few different terms for and related to the sector I’m discussing: smart manufacturing, infrastructure analytics, manufacturing analytics, industrial analytics and, though it’s a broader term, IoT analytics.
My preference is towards “infrastructure analytics,” but when I use those terms interchangeably throughout, know I mean the same thing (and it’s also helpful to keep an eye out for that range of terms in your own reading). The one sub-sector I’ve seen broken out the most is predictive maintenance, but otherwise startups in this space use some selection of the terms above when self-describing.
Gartner’s definition says, “providers that help manufacturers support variable product content through the manufacturing process and improve the visibility and analysis of manufacturing performance.” Other research creates some beautiful imagery like, “removing the barrier between physical and information flows,” and, “translating the physical world into a model accessible by IT.” Or that, you know, this is all about “IoT orchestration” and “big-data-driven manufacturing.”
My ELI5 definition, which makes me want to stab my eyes out less, is, “providers that help manufacturers manufacture better, and for less money, using data from machines.” They listen to all the stuff the manufacturers’ machines have to say, figure out what part of the stuff shows that things are going wrong or things could be done better, and show the manufacturers pretty pictures and clear advice on how to fix machines or make the machines do things better.
Industry people refer to this data as generated “on the edge,” which just means at the sensor or machine level. As I’ll talk about farther down, there are lots of machines that provide data for solutions to leverage. To visualize the different types of machines, here are some pictures along with what they are:
Machine vision systems, RFID/barcode scanners, welding machines, PLCs, robots and plasma cutters.
Oh, and don’t forget “factory-floor software” like MES (manufacturing execution system) and ERP (enterprise resource planning).
Let’s walk through an example manufacturing process to get a sense of what data is generated; for example, putting together the chassis (the car’s frame) on an automobile.
Robots will put together the chassis by welding different parts together. Then, the robots will typically put in the engine, transmission and suspension. Building the rest of the body operates in much the same way — putting in parts and welding them together (some human interaction is still involved).
Tracking temperature abnormalities can help cut down on defects and, once enough data is analyzed, potentially speed up production time as well. But, what is arguably even more valuable is tracking the robots themselves. The data generated from their operations allows better forecasting of production time, maximizing uptime, performing predictive maintenance, spotting quality issues and generating insights into how processes might be improved.
So, this software mines information across the manufacturing floor from all those machines, sensors, devices, etc. But how is that done?
The data is typically either collected via a gateway device that’s located on the factory floor or via a local node that transfers the collected data to a central gateway device. Then, the data needs to be made usable, or “transformed” (as is true for traditional data analytics), so there needs to be cleansing, normalizing and organizing.
This transformed data is then analyzed, either on-prem or cloud-based, using industry-specific data models. These include those tailored towards discrete, batch and process manufacturing. Since VC’s ❤ KPI’s, some of the common ones from these analytics are performance, uptime, quality, cycle time, OEE (overall equipment effectiveness), along with reasons for defects and downtimes.
The results of this analysis and the KPI’s are then shown to the customer in a format that is (ideally) highly understandable and conducive to informing decision-making and action. Which brings us to what sorts of decisions and actions can be informed by this analysis.
What are the applications? I’m not the first to suggest that there’s a big resource allocation problem across a number of industries due to prioritization of “collect all the things” (the NSA approach) rather than finding the “why” first and narrowing down what needs to be collected from there. Perhaps as a result of the jubilant headlines on how revolutionary big data is, it often seems that data collection serves primarily as a signal for organizations to show that they’re doing something about big data.
It’s a similar conundrum to my company’s industry, information security, in which most organizations care foremost about showing they care about security, and vendors are happy to be providing something that sounds really cool but doesn’t solve anything real. But companies showing that they care about security ends up being expensive not just financially, but also time, resource and user experience-wise (something my company is attempting to change) — and the same is largely true for companies attempting to “do something” with the big data they’ve now spent a lot of money to collect and store.
Going into researching this smart manufacturing area, I assumed there would exist roughly the same narrative. Tons of data is given off by sensors, which is being collected and now companies are searching for applications to show that their investments into collecting and storing all this data have not been fruitless.
However, to a large extent this is not the case. As I’ll get to in the next section, a big problem is actually in the data collection itself. And companies have some pretty crystalline use-cases in mind that would materially — perhaps even disruptively — enhance their operations, and the challenge is how to get the right data to do so in a non-cost-prohibitive manner.
At the highest level, the primary application is “optimizing operations.” For a tech startup, that can mean reducing downtime, maximizing processing speed, and so forth. For a manufacturing floor, the implications are enormous. I had the pleasure of seeing a manufacturing floor up-close when I was very young, and it’s one of the clearest of my early memories. But for those who may not have before, here are a few pictures as a guide (in addition to all the lovely machines from earlier):
As you can see, there are lots of machines, and different types of them. This means that there’s a lot of opportunity for malfunctioning, misbehaving, slowing down, speeding up and so forth. Identifying and fixing these issues costs money, and before they’re fixed, there are also potential losses from downtime and quality issues as well.
As a result, there is, in fact, a long list of potential applications of being able to analyze data from these machines, including, but not limited to (and with some overlap):
Improving quality Identifying issues in real-time Minimizing operational variability Reducing waste of raw materials Reducing unplanned downtime Reducing scrap rates Identifying opportunities for process improvements Increasing production speed Predicting supply needs Synthesizing visibility of operations across plants Reducing compliance costs Improving auditability The impact of these applications is huge in an industry with slim margins. Being able to better predict, streamline and “smooth out” operations results in significant savings at scale.
What’s hindering adoption? Maybe one day humans can just think of a bunch of dots and have it turn into a sportscar that shoots out of random screens, disrupting manufacturing entirely.
As foreshadowed above, one of the largest barriers to adoption of these technologies is in data collection. Though it can be treated as a distinct part of the big data chain, the “data cleanup” aspect is extremely challenging, as it is difficult to combine and centralize sensor data emanating from disparate machines and devices.
And that’s not to mention that the sheer amount of data creates some hefty storage requirements, and that the pace at which data is generated means solutions have to be able to keep up. There are also challenges like needing to make sure sensors and communications hardware can survive potentially rough environments, but it seems like people know more or less how to accomplish that.
The most-repeated challenge in realizing the “Fourth Industrial Revolution” is that there needs to be a standard data architecture across vendors. Lacking a global government to decree all vendors must adhere to a certain data architecture or perish, most of the people who cite this challenge have suggestions to ameliorate it along the lines of ¯\_(ツ)_/¯.
As a result, current capabilities are pretty okay at helping solve basic problems but not great at solving tricky, complex and big problems. Most of the time, being able to solve those hard problems comes with a big price tag and requires a lot of human input.
When you visualize the manufacturing floor, it becomes super easy to understand why this is the state of the world. Ignoring the fact that there are lots of different kinds of machines involved in any particular manufacturing process, individual machines themselves will have different types of sensors recording tons of data points on different things involved in their operation. Considering then that an individual machine will have thousands of data points in a single operation, an entire manufacturing floor will have millions — so just imagine the sheer scale of how much data is generated from ongoing operations.
It’s a frustrating realization, and if you’re like me, then your mind begins to furtively think of how to fix it. Starting at a small scale, theoretically vendors could ensure that all sensor data generated from a single machine adhered to the same data architecture. Let’s be bold and optimistic and even say that a vendor could ensure that all their different machines adhered to the same data architecture, so if you had a floor with only their equipment, you could collect and combine all this juicy data with ease.
But in the real world, making sure all the sensor data has the same architecture is decidedly not trivial. What incentives does the vendor have to implement this extra process when designing their products? Do customers really care so much about the data that they’re willing to pay a premium for machines with a common data architecture?
This results in first-mover disadvantages both on the vendor and customer side. For vendors, they don’t want to be the dopes that were first to put all this effort into having this gorgeous, seamlessly combinable spring of sensor data from their machines that customers end up not buying because it’s more expensive than that of their Tower of Babel-esque competitors. For customers, they don’t want to be the dopes that were first to pay a premium for this theoretically awesome tech that turns out not to generate such actionable insights after all, or worse, isn’t as easily collected and synthesized as promised. Not to mention the customers will still need to buy or develop the analytics that leverage this data.
It’s going to take a risky bet on one side or the other. My personal bet is on the customer side, which will (hopefully) put pressure on vendors. Why? Because the customers (the manufacturers) may be able to sell some of the insights they gain to their own customers as part of a value-added offering. For example, they can differentiate from their competition by suggesting data-supported ways for their customers to improve the quality of their products or speed at which their products can get to market. While helping bottom-line by reducing their downtime, use of raw materials, etc. is all well and good, there’s nothing like a boost to the top-line to really cultivate interest.
This still doesn’t solve the problem of how standardization will actually be accomplished. It doesn’t do a ton of good if each vendor standardizes within itself but makes it even harder for manufacturers to combine data from machines across different vendors. Very few vendors will make all of the machines and devices manufacturers need to run their operations, so it is realistic to assume that broader standardization is essential.
There is some evidence of the government looking to help facilitate standardization, but I, for one, have little faith in how quickly any initiative might be accomplished in that manner. Realistically, the GE’s and other titan industrial firms will throw a lot of money at figuring this out and monetize it as something like IoTAaaS, but those will likely be in areas away from the manufacturing part of “industrial” and instead in use-cases where their machine or part can serve as a strong sole-source of data (e.g. sensors on jet engines to help optimize fuel efficiency).
What I’d love to see is an approach similar to what the founders of Flatiron Health performed in its earliest days. The lore is they traveled around the country speaking with cancer specialists and treatment centers to better understand the challenges they faced, what sort of data was relevant, how it was currently being shared, etc. From gathering those insights first-hand, they developed a solution that has been very sincerely described as “fighting cancer with big data.” There are meaty enough sub-sectors within the industrial space that I strongly believe something similar could be accomplished and help leap over the standardization hurdle to the phase where meaningful insights can be generated with ease.
But, I’m only half joking when I think that maybe this whole problem will be solved if Google open sources whatever software it develops as it builds its self-driving cars and/or AI-driven robot and drone army, ahem, logistics network, then executes its Order 66 to have the global leaders it’s funded and seeded mandate Google’s way as the standard.
Who cares? Luckily from a “who cares” perspective, the field of potential customers is actually enormous. Manufacturing spans behemoth industries, including pharmaceuticals, transportation, aerospace and defense, oil and gas, electronics and chemicals. And within each industry is a chunky supply chain full of many vendors, from raw materials to packaging. Being the go-to operational analytics solution provider for any one of these industries alone would very likely result in a hefty amount of revenue.
From the VC perspective, it’s pretty obvious why there’d be interest in investing in this area, particularly if you’re an investor for a corporate VC arm who happens to be a part of the manufacturing or industrial supply chain. It also heavily touches on IoT, which is an area still attracting a considerable amount of interest, but has crossover appeal to enterprise software investors who are familiar with more general analytics and business intelligence companies.
What are the risks? Our lizard brains are programmed to like pictures of sparks.
Being early to the party is no fun, and even though the “Fourth Industrial Revolution” has had its cue music on for a while, it’s still extremely early and somewhat speculative. Nanotechnology is an example industry that’s had an excessively long drumroll, to the point that now legitimately exciting advances are met by many with indifference. And many of the early “plays” in it never took off or lived up to the hype.
To generalize, VC’s like taking “safe risks.” What this means is that they typically like to fund new companies in markets somewhere between totally unproven and super saturated. So where does the infrastructure analytics market fall?
On the one hand, it’s sort of a “no duh” that companies want this operational intelligence, for all the potential applications described earlier. On the other, will these companies buy these sorts of solutions from startups? And can these startups prove that their solutions provide disruptive-level value early enough? Those are questions that are true for a lot of enterprise solution providers, but I feel are magnified given the higher stakes of the physical realm.
There’s also the uncertainty of standardization. Can companies actually extract the value they promise in a world with that many disparate types of data? There’s potentially the question of whether their value is eroded a bit if standardization happens, and thus it becomes substantially easier to use existing data science methods to garner operational intelligence, but I personally think that’s a lesser risk (and definitely cart before horse).
As in my Flatiron Health analogy earlier, a startup that cataloged all the different machines and things involved in the processes of a specific sub-sector in industrial analytics might have the potential to do exceptionally well. That level of granularity may be necessary to truly crack the code on how to most efficiently and meaningfully collect, combine and analyze this data. But it would involve a longer time horizon and doing Paul Graham’s “things that don’t scale,” both of which add execution risk that might be insurmountable in an investor’s eyes.
What’s the current scene? There are a handful of startups that are already operating in this arena (and I have no doubt I’m missing some and quite a few are in stealth). They self-describe in roughly four buckets:
Manufacturing Analytics/Intelligence — Hai, Northwest Analytics, OFS Systems, Optimal&#43;, SightMachine Predictive Maintenance — Augury, DecisionIQ, Maintenel, Predikto Industrial Analytics — Seeq IoT Analytics — Decisyon, mnubo Some of the more notable VC funds that have invested in these companies include:
First Round Formation 8 IA Ventures Lerer Hippeau Madrona Venture Group O’Reilly Alpha Tech Orfin Ventures …and KKR There are a number of industrial conglomerates that have created data science wings, realizing the potential of the space, such as the list below. They tend to self describe as “industrial analytics” or “process analytics.”
ABB Dionex GE (Predix) Genpact Honeywell Siemens Along with analytics and consulting incumbents spreading out into industry-specific solutions, including the list below. In contrast to the industrial conglomerates above, they come up if you search for “manufacturing analytics.”
Ayasdi CSC Datawatch Deloitte Oracle ParStream Predixion Tableau TIBCO Wipro Final Thoughts I think the issue with the industrial analytics sector isn’t level of VC interest, but instead that the pipeline of companies is really small; so even if investors wanted to take a gamble, there hasn’t been much opportunity to do so. After all, the “right” founders will have come from working in the industrial sector, and if they have the level of data science expertise required to create a novel startup in this space, I’d suspect they’re really well paid (especially if in oil and gas). More typical SV data science founder-types may be apprehensive of tackling this problem space as well given the domain expertise required, which seems to be tricky to acquire without access to industrial equipment.
My prediction is we’ll see some of the incumbent analytics players creating industry-specific modules at first along with the big industrial companies creating analytics solutions. And from there, I hope (and I suspect VC’s hope as well) there are defectors that take the risk to create a pure-play company that takes the big leap forward rather than incremental shuffles.
</description>
            <atom:content type="html"><![CDATA[<p><img src="/blog/img/sparks.jpg" alt="Image of Sparks"></p>
<p><em>WTFunding is one of my “spare time” projects to delve into tech sectors attracting VC funding that pique my curiosity. I like connecting dots between disparate things, and it’s also pretty useful.</em></p>
<h3 id="table-of-contents">Table of Contents:</h3>
<ol>
<li><a href="#so-what-is">So what is &ldquo;industrial / manufacturing analytics&rdquo;?</a></li>
<li><a href="#applications">What are the applications?</a></li>
<li><a href="#adoption">What&rsquo;s hindering adoption?</a></li>
<li><a href="#who-cares">Who cares?</a></li>
<li><a href="#risks">What are the risks?</a></li>
<li><a href="#current-scene">What&rsquo;s the current scene?</a></li>
<li><a href="#conclusion">Final thoughts</a></li>
</ol>
<hr>
<h2 id="a-nameso-what-isaso-what-is-industrial--manufacturing-analytics"><a name="so-what-is"></a>So what is “industrial / manufacturing analytics”?</h2>
<p>There doesn’t seem to be a great catch-all term for it yet, but there are a few different terms for and related to the sector I’m discussing: smart manufacturing, infrastructure analytics, manufacturing analytics, industrial analytics and, though it’s a broader term, IoT analytics.</p>
<p>My preference is towards “infrastructure analytics,” but when I use those terms interchangeably throughout, know I mean the same thing (and it’s also helpful to keep an eye out for that range of terms in your own reading). The one sub-sector I’ve seen broken out the most is predictive maintenance, but otherwise startups in this space use some selection of the terms above when self-describing.</p>
<p>Gartner’s definition says, “providers that help manufacturers support variable product content through the manufacturing process and improve the visibility and analysis of manufacturing performance.” Other research creates some beautiful imagery like, “removing the barrier between physical and information flows,” and, “translating the physical world into a model accessible by IT.” Or that, you know, this is all about “IoT orchestration” and “big-data-driven manufacturing.”</p>
<p>My <a href="https://www.urbandictionary.com/define.php?term=ELI5">ELI5</a> definition, which makes me want to stab my eyes out less, is, “providers that help manufacturers manufacture better, and for less money, using data from machines.” They listen to all the stuff the manufacturers’ machines have to say, figure out what part of the stuff shows that things are going wrong or things could be done better, and show the manufacturers pretty pictures and clear advice on how to fix machines or make the machines do things better.</p>
<p>Industry people refer to this data as generated “on the edge,” which just means at the sensor or machine level. As I’ll talk about farther down, there are lots of machines that provide data for solutions to leverage. To visualize the different types of machines, here are some pictures along with what they are:</p>
<table>
<thead>
<tr>
<th style="text-align:center"><img src="/blog/img/industrial-01.png" alt="machine vision systems"></th>
<th style="text-align:center"><img src="/blog/img/industrial-02.png" alt="RFID/barcode scanners"></th>
<th style="text-align:center"><img src="/blog/img/industrial-03.png" alt="welding machines"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Machine vision systems</td>
<td style="text-align:center">RFID/barcode scanners</td>
<td style="text-align:center">Welding machines</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th style="text-align:center"><img src="/blog/img/industrial-04.jpeg" alt="PLCs"></th>
<th style="text-align:center"><img src="/blog/img/industrial-05.jpeg" alt="Robots"></th>
<th style="text-align:center"><img src="/blog/img/industrial-06.jpg" alt="Plasma cutters"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">PLCs</td>
<td style="text-align:center">Robots</td>
<td style="text-align:center">Plasma cutters</td>
</tr>
</tbody>
</table>
<p>Oh, and don’t forget “factory-floor software” like MES (manufacturing execution system) and ERP (enterprise resource planning).</p>
<p>Let’s walk through an example manufacturing process to get a sense of what data is generated; for example, putting together the chassis (the car’s frame) on an automobile.</p>
<img style="float: right; max-width:50%; padding: 5px" src="/blog/img/industrial-07.png">
<p>Robots will put together the chassis by welding different parts together. Then, the robots will typically put in the engine, transmission and suspension. Building the rest of the body operates in much the same way — putting in parts and welding them together (some human interaction is still involved).</p>
<p>Tracking temperature abnormalities can help cut down on defects and, once enough data is analyzed, potentially speed up production time as well. But, what is arguably even more valuable is tracking the robots themselves. The data generated from their operations allows better forecasting of production time, maximizing uptime, performing predictive maintenance, spotting quality issues and generating insights into how processes might be improved.</p>
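<p>To make “tracking temperature abnormalities” a bit more concrete, here is a minimal sketch of the kind of logic involved: flag weld temperatures that drift far from a rolling baseline. The window size, threshold and numbers are mine purely for illustration, not anything a real vendor ships:</p>
<pre><code class="language-python"># Hypothetical sketch: flagging weld-temperature abnormalities against a
# rolling baseline. Field names and thresholds are illustrative only.
from collections import deque
from statistics import mean, stdev

def detect_temperature_anomalies(readings, window=50, sigma=3.0):
    """Yield (index, temp) pairs that deviate more than `sigma` standard
    deviations from the rolling mean of the previous `window` readings."""
    history = deque(maxlen=window)
    for i, temp in enumerate(readings):
        if len(history) == window:
            mu, sd = mean(history), stdev(history)
            if sd and abs(temp - mu) > sigma * sd:
                yield i, temp
        history.append(temp)

# Example: a stream of weld temperatures (°C) with one simulated abnormal weld
temps = [1520 + (i % 7) for i in range(200)]
temps[120] = 1710
print(list(detect_temperature_anomalies(temps)))  # [(120, 1710)]
</code></pre>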
<p>So, this software mines information across the manufacturing floor from all those machines, sensors, devices, etc. But how is that done?</p>
<p>The data is typically either collected via a gateway device that’s located on the factory floor or via a local node that transfers the collected data to a central gateway device. Then, the data needs to be made usable, or “transformed” (as is true for traditional data analytics), so there needs to be cleansing, normalizing and organizing.</p>
<p>This transformed data is then analyzed, either on-prem or cloud-based, using industry-specific data models. These include those tailored towards discrete, batch and process manufacturing. Since VC’s ❤ KPI’s, some of the common ones from these analytics are performance, uptime, quality, cycle time, OEE (overall equipment effectiveness), along with reasons for defects and downtimes.</p>
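<p>For a sense of what one of those KPI’s actually looks like under the hood, here is a toy calculation of OEE, which is conventionally the product of availability, performance and quality. The shift-level counters below are made up purely for illustration:</p>
<pre><code class="language-python"># Illustrative sketch of the OEE (overall equipment effectiveness) KPI,
# computed from hypothetical shift-level counters.
# OEE = Availability x Performance x Quality.

def oee(planned_minutes, downtime_minutes, ideal_cycle_sec, total_count, good_count):
    run_minutes = planned_minutes - downtime_minutes
    availability = run_minutes / planned_minutes
    performance = (ideal_cycle_sec * total_count) / (run_minutes * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Example shift: 480 planned minutes, 47 minutes of downtime,
# 1.2s ideal cycle time, 19,000 parts produced, 18,500 of them good.
print(f"OEE: {oee(480, 47, 1.2, 19_000, 18_500):.1%}")  # roughly 77%
</code></pre>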
<p>The results of this analysis and the KPI’s are then shown to the customer in a format that is (ideally) highly understandable and conducive to informing decision-making and action. Which brings us to what sorts of decisions and actions can be informed by this analysis.</p>
<hr>
<h2 id="a-nameapplicationsawhat-are-the-applications"><a name="applications"></a>What are the applications?</h2>
<p>I’m not the first to suggest that there’s a big resource allocation problem across a number of industries due to prioritization of “collect all the things” (the NSA approach) rather than finding the “why” first and narrowing down what needs to be collected from there. Perhaps as a result of the jubilant headlines on how revolutionary big data is, it often seems that data collection serves primarily as a signal for organizations to show that they’re doing something about big data.</p>
<p>It’s a similar conundrum to my company’s industry, information security, in which most organizations care foremost about showing they care about security, and vendors are happy to be providing something that sounds really cool but doesn’t solve anything real. But companies showing that they care about security ends up being expensive not just financially, but also time, resource and user experience-wise (something my company is attempting to change) — and the same is largely true for companies attempting to “do something” with the big data they’ve now spent a lot of money to collect and store.</p>
<p>Going into researching this smart manufacturing area, I assumed there would exist roughly the same narrative. Tons of data is given off by sensors, which is being collected and now companies are searching for applications to show that their investments into collecting and storing all this data have not been fruitless.</p>
<p>However, to a large extent this is not the case. As I’ll get to in the next section, a big problem is actually in the data collection itself. And companies have some pretty crystalline use-cases in mind that would materially — perhaps even disruptively — enhance their operations, and the challenge is how to get the right data to do so in a non-cost-prohibitive manner.</p>
<p>At the highest level, the primary application is “optimizing operations.” For a tech startup, that can mean reducing downtime, maximizing processing speed, and so forth. For a manufacturing floor, the implications are enormous. I had the pleasure of seeing a manufacturing floor up-close when I was very young, and it’s one of the clearest of my early memories. But for those who may not have before, here are a few pictures as a guide (in addition to all the lovely machines from earlier):</p>
<table>
<thead>
<tr>
<th style="text-align:center"><img src="/blog/img/industrial-08.png" alt="Manufacturing floor"></th>
<th style="text-align:center"><img src="/blog/img/industrial-09.png" alt="Manufacturing floor"></th>
<th style="text-align:center"><img src="/blog/img/industrial-10.png" alt="Manufacturing floor"></th>
</tr>
</thead>
</table>
<p>As you can see, there are lots of machines, and different types of them. This means that there’s a lot of opportunity for malfunctioning, misbehaving, slowing down, speeding up and so forth. Identifying and fixing these issues costs money, and before they’re fixed, there are also potential losses from downtime and quality issues as well.</p>
<p>As a result, there is, in fact, a long list of potential applications of being able to analyze data from these machines, including, but not limited to (and with some overlap):</p>
<ul>
<li>Improving quality</li>
<li>Identifying issues in real-time</li>
<li>Minimizing operational variability</li>
<li>Reducing waste of raw materials</li>
<li>Reducing unplanned downtime</li>
<li>Reducing scrap rates</li>
<li>Identifying opportunities for process improvements</li>
<li>Increasing production speed</li>
<li>Predicting supply needs</li>
<li>Synthesizing visibility of operations across plants</li>
<li>Reducing compliance costs</li>
<li>Improving auditability</li>
</ul>
<p>The impact of these applications is huge in an industry with slim margins. Being able to better predict, streamline and “smooth out” operations results in significant savings at scale.</p>
<hr>
<h2 id="a-nameadoptionawhats-hindering-adoption"><a name="adoption"></a>What&rsquo;s hindering adoption?</h2>
<p><img src="/blog/img/industrial-11.jpeg" alt="Image of a man watching a car fly through a screen"><em>Maybe one day humans can just think of a bunch of dots and have it turn into a sportscar that shoots out of random screens, disrupting manufacturing entirely.</em></p>
<p>As foreshadowed above, one of the largest barriers to adoption of these technologies is in data collection. Though it can be treated as a distinct part of the big data chain, the “data cleanup” aspect is extremely challenging, as it is difficult to combine and centralize sensor data emanating from disparate machines and devices.</p>
<p>And that’s not to mention that the sheer amount of data creates some hefty storage requirements, and that the pace at which data is generated means solutions have to be able to keep up. There are also challenges like needing to make sure sensors and communications hardware can survive potentially rough environments, but it seems like people know more or less how to accomplish that.</p>
<p>The most-repeated challenge in realizing the “Fourth Industrial Revolution” is that there needs to be a standard data architecture across vendors. Lacking a global government to decree all vendors must adhere to a certain data architecture or perish, most of the people who cite this challenge have suggestions to ameliorate it along the lines of ¯\_(ツ)_/¯.</p>
<p>As a result, current capabilities are pretty okay at helping solve basic problems but not great at solving tricky, complex and big problems. Most of the time, being able to solve those hard problems comes with a big price tag and requires a lot of human input.</p>
<p>When you visualize the manufacturing floor, it becomes super easy to understand why this is the state of the world. Ignoring the fact that there are lots of different kinds of machines involved in any particular manufacturing process, individual machines themselves will have different types of sensors recording tons of data points on different things involved in their operation. Considering then that an individual machine will have thousands of data points in a single operation, an entire manufacturing floor will have millions — so just imagine the sheer scale of how much data is generated from ongoing operations.</p>
<p align="center"><img src="/blog/img/industrial-12.jpg" alt="Spock sobbing mathematically"></p>
<p>It’s a frustrating realization, and if you’re like me, then your mind begins to furtively think of how to fix it. Starting at a small scale, theoretically vendors could ensure that all sensor data generated from a single machine adhered to the same data architecture. Let’s be bold and optimistic and even say that a vendor could ensure that all their different machines adhered to the same data architecture, so if you had a floor with only their equipment, you could collect and combine all this juicy data with ease.</p>
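<p>To illustrate what that “same data architecture” could mean in practice, here is a toy sketch of a normalized sensor reading, with two invented vendor payload formats mapped into it. Consider it a thought experiment rather than any actual standard:</p>
<pre><code class="language-python"># Toy sketch of a "common data architecture": each vendor's raw payload gets
# mapped into one normalized reading. The vendor field names are invented.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    machine_id: str
    sensor_type: str       # e.g. "temperature", "vibration"
    value: float
    unit: str              # e.g. "celsius", "mm/s"
    timestamp: datetime

def from_vendor_a(payload: dict) -> SensorReading:
    # Hypothetical Vendor A reports Fahrenheit with epoch-second timestamps
    return SensorReading(
        machine_id=payload["machine"],
        sensor_type="temperature",
        value=(payload["temp_f"] - 32) * 5 / 9,
        unit="celsius",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
    )

def from_vendor_b(payload: dict) -> SensorReading:
    # Hypothetical Vendor B already reports Celsius and ISO-8601 strings
    return SensorReading(
        machine_id=payload["asset_id"],
        sensor_type="temperature",
        value=payload["celsius"],
        unit="celsius",
        timestamp=datetime.fromisoformat(payload["time"]),
    )

print(from_vendor_a({"machine": "weld-03", "temp_f": 2800, "ts": 1444755847}))
</code></pre>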
<p>But in the real world, making sure all the sensor data has the same architecture is decidedly not trivial. What incentives does the vendor have to implement this extra process when designing their products? Do customers really care so much about the data that they’re willing to pay a premium for machines with a common data architecture?</p>
<p>This results in first-mover disadvantages both on the vendor and customer side. For vendors, they don’t want to be the dopes that were first to put all this effort into having this gorgeous, seamlessly combinable spring of sensor data from their machines that customers end up not buying because it’s more expensive than that of their Tower of Babel-esque competitors. For customers, they don’t want to be the dopes that were first to pay a premium for this theoretically awesome tech that turns out not to generate such actionable insights after all, or worse, isn’t as easily collected and synthesized as promised. Not to mention the customers will still need to buy or develop the analytics that leverage this data.</p>
<p>It’s going to take a risky bet on one side or the other. My personal bet is on the customer side, which will (hopefully) put pressure on vendors. Why? Because the customers (the manufacturers) may be able to sell some of the insights they gain to their own customers as part of a value-added offering. For example, they can differentiate from their competition by suggesting data-supported ways for their customers to improve the quality of their products or speed at which their products can get to market. While helping bottom-line by reducing their downtime, use of raw materials, etc. is all well and good, there’s nothing like a boost to the top-line to really cultivate interest.</p>
<p>This still doesn’t solve the problem of how standardization will actually be accomplished. It doesn’t do a ton of good if each vendor standardizes within itself but makes it even harder for manufacturers to combine data from machines across different vendors. Very few vendors will make all of the machines and devices manufacturers need to run their operations, so it is realistic to assume that broader standardization is essential.</p>
<p>There is some evidence of the government looking to help facilitate standardization, but I, for one, have little faith in how quickly any initiative might be accomplished in that manner. Realistically, the GE’s and other titan industrial firms will throw a lot of money at figuring this out and monetize it as something like IoTAaaS, but those will likely be in areas away from the manufacturing part of “industrial” and instead in use-cases where their machine or part can serve as a strong sole-source of data (e.g. sensors on jet engines to help optimize fuel efficiency).</p>
<p>What I’d love to see is an approach similar to what the founders of Flatiron Health performed in its earliest days. The lore is they traveled around the country speaking with cancer specialists and treatment centers to better understand the challenges they faced, what sort of data was relevant, how it was currently being shared, etc. From gathering those insights first-hand, they developed a solution that has been very sincerely described as “fighting cancer with big data.” There are meaty enough sub-sectors within the industrial space that I strongly believe something similar could be accomplished and help leap over the standardization hurdle to the phase where meaningful insights can be generated with ease.</p>
<p>But, I’m only half joking when I think that maybe this whole problem will be solved if Google open sources whatever software it develops as it builds its self-driving cars and/or AI-driven robot and drone army, <em>ahem</em>, logistics network, then executes its Order 66 to have the global leaders it’s funded and seeded mandate Google’s way as the standard.</p>
<hr>
<h2 id="a-namewho-caresawho-cares"><a name="who-cares"></a>Who cares?</h2>
<p>Luckily from a “who cares” perspective, the field of potential customers is actually enormous. Manufacturing spans behemoth industries, including pharmaceuticals, transportation, aerospace and defense, oil and gas, electronics and chemicals. And within each industry is a chunky supply chain full of many vendors, from raw materials to packaging. Being the go-to operational analytics solution provider for any one of these industries alone would very likely result in a hefty amount of revenue.</p>
<p>From the VC perspective, it’s pretty obvious why there’d be interest in investing in this area, particularly if you’re an investor for a corporate VC arm who happens to be a part of the manufacturing or industrial supply chain. It also heavily touches on IoT, which is an area still attracting a considerable amount of interest, but has crossover appeal to enterprise software investors who are familiar with more general analytics and business intelligence companies.</p>
<hr>
<h2 id="a-namerisksawhat-are-the-risks"><a name="risks"></a>What are the risks?</h2>
<p><img src="/blog/img/more-sparks.jpg" alt="more sparks"><em>Our lizard brains are programmed to like pictures of sparks.</em></p>
<p>Being early to the party is no fun, and even though the “Fourth Industrial Revolution” has had its cue music on for a while, it’s still extremely early and somewhat speculative. Nanotechnology is an example industry that’s had an excessively long drumroll, to the point that now legitimately exciting advances are met by many with indifference. And many of the early “plays” in it never took off or lived up to the hype.</p>
<p>To generalize, VC’s like taking “safe risks.” What this means is that they typically like to fund new companies in markets somewhere between totally unproven and super saturated. So where does the infrastructure analytics market fall?</p>
<p>On the one hand, it’s sort of a “no duh” that companies want this operational intelligence, for all the potential applications described earlier. On the other, will these companies buy these sorts of solutions from startups? And can these startups prove that their solutions provide disruptive-level value early enough? Those are questions that are true for a lot of enterprise solution providers, but I feel are magnified given the higher stakes of the physical realm.</p>
<p>There’s also the uncertainty of standardization. Can companies actually extract the value they promise in a world with that many disparate types of data? There’s potentially the question of whether their value is eroded a bit if standardization happens, and thus it becomes substantially easier to use existing data science methods to garner operational intelligence, but I personally think that’s a lesser risk (and definitely cart before horse).</p>
<p>As in my Flatiron Health analogy earlier, a startup that cataloged all the different machines and things involved in the processes of a specific sub-sector in industrial analytics might have the potential to do exceptionally well. That level of granularity may be necessary to truly crack the code on how to most efficiently and meaningfully collect, combine and analyze this data. But it would involve a longer time horizon and doing Paul Graham’s “things that don’t scale,” both of which add execution risk that might be insurmountable in an investor’s eyes.</p>
<hr>
<h2 id="a-namecurrent-sceneawhats-the-current-scene"><a name="current-scene"></a>What&rsquo;s the current scene?</h2>
<p>There are a handful of startups that are already operating in this arena (and I have no doubt I’m missing some and quite a few are in stealth). They self-describe in roughly four buckets:</p>
<ul>
<li><strong>Manufacturing Analytics/Intelligence</strong> — Hai, Northwest Analytics, OFS Systems, Optimal+, SightMachine</li>
<li><strong>Predictive Maintenance</strong> — Augury, DecisionIQ, Maintenel, Predikto</li>
<li><strong>Industrial Analytics</strong> — Seeq</li>
<li><strong>IoT Analytics</strong> — Decisyon, mnubo</li>
</ul>
<p>Some of the more notable VC funds that have invested in these companies include:</p>
<ul>
<li>First Round</li>
<li>Formation 8</li>
<li>IA Ventures</li>
<li>Lerer Hippeau</li>
<li>Madrona Venture Group</li>
<li>O’Reilly Alpha Tech</li>
<li>Orfin Ventures</li>
<li>…and KKR</li>
</ul>
<p>There are a number of industrial conglomerates that have created data science wings, realizing the potential of the space, such as the list below. They tend to self describe as “industrial analytics” or “process analytics.”</p>
<ul>
<li>ABB</li>
<li>Dionex</li>
<li>GE (Predix)</li>
<li>Genpact</li>
<li>Honeywell</li>
<li>Siemens</li>
</ul>
<p>Along with analytics and consulting incumbents spreading out into industry-specific solutions, including the list below. In contrast to the industrial conglomerates above, they come up if you search for “manufacturing analytics.”</p>
<ul>
<li>Ayasdi</li>
<li>CSC</li>
<li>Datawatch</li>
<li>Deloitte</li>
<li>Oracle</li>
<li>ParStream</li>
<li>Predixion</li>
<li>Tableau</li>
<li>TIBCO</li>
<li>Wipro</li>
</ul>
<hr>
<h2 id="a-nameconclusionafinal-thoughts"><a name="conclusion"></a>Final Thoughts</h2>
<p>I think the issue with the industrial analytics sector isn’t level of VC interest, but instead that the pipeline of companies is really small; so even if investors wanted to take a gamble, there hasn’t been much opportunity to do so. After all, the “right” founders will have come from working in the industrial sector, and if they have the level of data science expertise required to create a novel startup in this space, I’d suspect they’re really well paid (especially if in oil and gas). More typical SV data science founder-types may be apprehensive of tackling this problem space as well given the domain expertise required, which seems to be tricky to acquire without access to industrial equipment.</p>
<p>My prediction is we’ll see some of the incumbent analytics players creating industry-specific modules at first along with the big industrial companies creating analytics solutions. And from there, I hope (and I suspect VC’s hope as well) there are defectors that take the risk to create a pure-play company that takes the big leap forward rather than incremental shuffles.</p>
]]></atom:content>
        </item>
        
    </channel>
</rss>
