There’s a philosophy out there that I like to call “Cybersecurity Fatalism”, and it’s bad and wrong. It leads you down a path of buying snake oil from charlatans and failing to implement real security controls. Why would anyone do such a thing, you ask? Well, because it’s so seductive. Downright insidious, in fact, and you should be aware of it.
But first, let’s give the philosophy an honest shake. Its central tenet goes something like this:
“Everything is hackable and it’s simply a matter of time and resources before someone gets in.”
– Life according to the cybersecurity fatalist
This is meant to be taken as a “rhetorical truth”. Whether or not it’s literally true is almost irrelevant, because it’s meant to be accepted in the same way “There’s no such thing as an unloaded gun” is. Obviously, this isn’t literally true: some guns are in fact unloaded. But we ought to treat every gun as if it’s always loaded (so the philosophy goes), or else we’re likely to accidentally shoot ourselves or somebody else through negligence.
So too, says the cybersecurity fatalist, we ought to treat all things as hackable at all times. Nothing is safe, nothing is perfect, and nothing is exempt. The fatalist would have you believe that cybersecurity is akin to hikers running from a bear: you don’t need to outrun the bear, you just need to outrun the other hikers. In cybersecurity fatalist terms, you only need to be more secure than your competitors.
You can see how CISOs and other executives might come to this conclusion. Vulnerabilities are constantly released for basically everything, security doom-and-gloom is all over the news, and no methodology ever seems to eliminate the risk. If you squint really hard and ignore all the details, a fatalist view really does start to seem appropriate.
“Well if it sounds so appropriate, what’s wrong with it?” I can hear you asking through your web browser. The truth is that while it sounds like wisdom, it isn’t. You see, cybersecurity fatalism also comes with a corollary tenet that is not always included on the warning label.
“The sole task of security, therefore, is to increase the time and resources necessary for an attack to succeed.”
This is where cybersecurity fatalism fundamentally leads you astray. In cybersecurity, actual solutions do exist, which will be shocking news to some. It’s not all just shades of grey. If you want to make a web application that isn’t vulnerable to SQL injection, you can use modern frameworks that ensure SQL queries are properly parameterized. These are controls that do more than “add to the time and resources necessary to complete an attack,” they actually stop the attack.
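To make that concrete, here’s a minimal sketch using Python’s built-in sqlite3 module (the names and the in-memory table are illustrative; any database API with parameterized queries works the same way). String concatenation lets attacker input rewrite the query; a placeholder sends it strictly as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload become part of the SQL itself.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the ? placeholder passes the input as a bound value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the OR clause matched every row
print(safe)        # [] -- no user is literally named "alice' OR '1'='1"
```

The parameterized version doesn’t make the attack slower or harder; it makes the attack category impossible, which is the point.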
When it comes to cybersecurity, one good control is better than a hundred bad ones strapped together. Cybersecurity fatalism, however, would have you settle for bad security controls and call it “defense in depth.” Your core business app has cross-site scripting? Don’t rewrite the code to sanitize user input, just put a WAF on it. Important internal servers have known critical CVEs on them? Don’t bother upgrading your software or applying patches, just stick an IDS in front of them. That’s bad advice that won’t stop anything. But when you apply real engineering solutions to problems instead of layering on bad controls, you can implement real security. And think of how rarely you hear that truth.
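The cross-site scripting case works the same way as the SQL injection one: the real fix is escaping untrusted input at output time, not filtering traffic in front of the bug. A minimal sketch using Python’s standard-library html module (the greeting string is illustrative; real apps should lean on their template engine’s auto-escaping):

```python
import html

user_input = '<script>alert("xss")</script>'

# Unsafe: interpolating raw input into HTML lets the script tag execute.
unsafe = "<p>Hello, " + user_input + "</p>"

# Safe: escaping turns the payload into inert text the browser just displays.
safe = "<p>Hello, " + html.escape(user_input) + "</p>"

print(safe)  # <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

A WAF in front of the unsafe version is a guessing game about payload shapes; the escaped version has nothing left to guess about.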
In my experience, any defensive security control that claims to be “part of a defense-in-depth strategy” is in reality just advertising that it doesn’t work. Real security controls prevent attacks. They don’t just aim to “frustrate the attacker.” Furthermore, cybersecurity fatalism can lure you into a state of acceptance of critical issues that really shouldn’t be ignored. Fatalism lets you construct a mental framework for accepting risk instead of solving it. If you believe that all things can be hacked all the time and no real security exists, then why bother patching that critical bug? There’ll just be another one next month anyway, right? This happens far more than you would think.
Call me old-fashioned, but I believe that reality matters. And in reality, security is getting better. I know there’s doom-and-gloom on the news and it might look scary, but things are way more secure than they used to be. Not everything is on fire.
Real security is possible. Not in all places and not all of the time, of course – there are always variables. But security is an achievable engineering goal, and not just a pipe dream. More to the point: if you don’t make real security your objective, and instead settle for something less, you’ll just be left with a bunch of half-working mitigations and a frustrated but successful attacker. Because if cybersecurity really is like hikers trying to run from a bear, every now and then not getting mauled requires more than outrunning the other hikers.
See Dan Petro present on weaponized machine learning at the Black Hat Arsenal this summer.