
Stop Treating Breaches Like Natural Disasters: A New Mindset for Application Security


Application security is a complex topic that can be viewed through many lenses, and each is valuable when used in the right context. There’s one lens I especially evangelize, which I like to call Security Determinism. Its central tenet is the following:

“Security incidents are not things that happen to you at random from an external source, but rather are the necessary and predictable consequences of engineering decisions you’ve made.”

Far too often we treat ourselves as hapless victims in the security field. Even legitimate experts sometimes talk about security issues with the same language we use to describe natural disasters: unpredictable forces of nature cause a breach and all you can do is clean up the mess afterward.

Determinism rejects this idea that we have no agency in security and in fact states the opposite: the issues you’re experiencing are not just within your control but were indeed inevitable and predictable.

The solution, therefore, is sound engineering. It’s not simple and it’s not something you can package up and sell as a product, but it’s the only thing that really strikes at the root of security issues: a failure of engineering.

Let’s talk about the usefulness of adopting a mindset of Security Determinism, how a design based on this thinking would look, the challenges of fully implementing it in your work, and finally, how this mentality compares to the outlook that currently dominates the security industry.

CONSIDERING THE RHETORICAL TRUTH

As I mentioned in a previous blog post, philosophies like this can serve as a kind of rhetorical truth. Whether Security Determinism’s central tenet is literally true is not important; what matters is whether it can be useful and instructive.

It’s “true” in the same sense that we say, “there’s no such thing as an unloaded gun.” Obviously, this isn’t literally true: some guns are in fact unloaded. But we ought to treat each gun as if it’s always loaded (so the philosophy goes), or else we’re likely to accidentally shoot ourselves or somebody else.

The rhetorical truth that Security Determinism can tell us is that security is solely within our control. As a corollary, if security is important and we’re in control of it, that makes it our responsibility. It’s become increasingly clear as each year passes that the consequences of a security breach are no longer “just” a PR blunder for a company. Building resilient software systems is important for critical infrastructure, democracy, and public safety. And it’s up to us as engineers and tech professionals to maintain that.

Let’s walk through a few hypothetical examples that I think will help demonstrate the determinist perspective and show how it can lead you toward making better security decisions.

DESIGNING WITH INTENTION

“Simple is better than complex. Complex is better than complicated.”

- The Zen of Python

Imagine you’re a software engineer jumping onto a new project, and the existing team hands you a (hypothetical) architecture diagram that looks like this:

Figure 1 – A complicated web application architecture

That diagram is complicated. But more specifically, it has a disorganized approach to data storage. Let’s assume that this application (like most web applications today) is data-centric: its APIs are essentially CRUD (create, read, update, delete) operations on data. That data ought to be the focal point of the application, and security efforts should focus on protecting it.

When you see an application architecture like the one above, which fails to have clear defensible lines where authorization can be enforced, it’s not just a pain for onboarding; it’s a recipe for vulnerabilities. Application data in that diagram is spread out across at least four separate locations, and three internal systems (plus one external) have direct database access. A bug in any one of them could mean compromising user data.
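
To make that failure mode concrete, here’s a minimal sketch in Python of what scattered data access tends to look like. The component and table names are entirely hypothetical; the point is that every component talks to the database directly, so every component must independently remember the same authorization check:

    import sqlite3  # stand-in for any database client

    def reporting_get_orders(requesting_user: int, target_user: int):
        # This component remembered to check authorization...
        if requesting_user != target_user:
            raise PermissionError("cannot read another user's orders")
        db = sqlite3.connect("app.db")
        return db.execute(
            "SELECT * FROM orders WHERE user_id = ?", (target_user,)
        ).fetchall()

    def export_get_orders(target_user: int):
        # ...but this one forgot. Nothing in the architecture stops it
        # from reading any user's data; the check lives (or doesn't)
        # in each component separately.
        db = sqlite3.connect("app.db")
        return db.execute(
            "SELECT * FROM orders WHERE user_id = ?", (target_user,)
        ).fetchall()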

Unless something drastically changes in the design of this application, it will consistently fail to have an adequate hold on the security of its data.

So when the project gets hit with a critical vulnerability where an attacker can see everyone’s data, it should really come as no surprise. This wasn’t something that an attacker did to the application; they simply found an existing issue to exploit. It was the inevitable and predictable consequence of poor engineering design decisions.

Let’s see what this application might look like with a design more focused on the data:

Figure 2 – A simple web application architecture

It’s hard to evaluate designs of entirely hypothetical applications, but you can clearly see that this new diagram is simpler around data access. Here, application data is housed in one location, and access is managed by a single component (instead of the four separate locations above). This means that authorization logic can now be centralized and relied upon by other components.
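
Here’s the same access pattern under this centralized design, again as a minimal hypothetical sketch: one component owns the database connection and enforces authorization once, for every caller.

    import sqlite3

    class DataService:
        """The single choke point for application data."""

        def __init__(self, db_path: str = "app.db"):
            self._db = sqlite3.connect(db_path)

        def get_orders(self, requesting_user: int, target_user: int):
            # Authorization is enforced here, once, for every caller.
            if requesting_user != target_user:
                raise PermissionError("cannot read another user's orders")
            return self._db.execute(
                "SELECT * FROM orders WHERE user_id = ?", (target_user,)
            ).fetchall()

Reporting, export, and admin features now call DataService instead of holding database credentials of their own, so a forgotten check in any one of them can no longer expose user data.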

Security Determinism tells you that applying strong engineering discipline early in the process of a project is critically important for security. The foundation that you lay will determine much of the security of the project going forward.

But if you treat security issues like natural disasters, as external forces outside your control, you might conclude that what the application above needs isn’t a better design but rather more and more resources for incident response. A change in mindset can be the difference between responding to security incidents and not needing to.

FULLY IMPLEMENTING SECURITY DETERMINISM

“Dark times lie ahead of us and there will be a time when we must choose between what is easy and what is right.”

- Famous software engineer, Albus Percival Wulfric Brian Dumbledore

As a software-engineer-turned-pen-tester, I can often evaluate a website through what you could call the “broken windows theory” of application security. If you go to a web application and the JavaScript is broken, the CSS is dysfunctional, and most functionality only half-works, what are the chances that the developers of this site have security locked down? Very small is the answer. I may not know how I’m going to compromise the site right away, but I know it’s very likely to happen.

In software engineering, you’re seldom presented with a choice between the right way and the wrong way of doing things. More commonly, the choice is between the right way and the easy way. This is why a lack of engineering discipline and rigor is such a frequent root cause of security issues. Sure, you can point to specific issues (insecure database access, improper contextual output encoding), but that’s not really what went wrong.

When a project has enough bugs, some of them are just bound to be security-relevant.

Security folks don’t talk about code quality enough. If you want to write secure code, start by writing high-quality code. Write test cases, keep clear documentation, and make sure it runs on someone’s computer other than yours.

  • You know that one undocumented API function that all the admins use but nobody else knows about? Well, it’s not just a documentation issue, it’s a security issue.
  • Keep running an ancient internal application on that same outdated server because it breaks if you try to upgrade it? That’s not just a portability issue, it’s a security issue.
  • Updating external dependencies by hand each time there’s a new version? That’s not just a maintenance headache, it’s a security issue.
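
One way to put that into practice is to pin security behavior down with ordinary test cases. Here’s a minimal sketch that reuses the hypothetical DataService from the earlier example and assumes pytest as the test runner; if a future refactor drops the authorization check, this test fails in CI rather than in production:

    import pytest

    def test_data_service_rejects_cross_user_reads():
        # Build an in-memory database with one order belonging to user 1.
        # (Reaching into the private handle is fine for a test sketch.)
        svc = DataService(":memory:")
        svc._db.execute("CREATE TABLE orders (user_id INTEGER, item TEXT)")
        svc._db.execute("INSERT INTO orders VALUES (1, 'widget')")

        # User 2 must never be able to read user 1's orders.
        with pytest.raises(PermissionError):
            svc.get_orders(requesting_user=2, target_user=1)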

If you work hard and invest in sound engineering efforts to improve the overall quality of your applications, you’ll find that many of these pesky security bugs go away. Because they were never really “security bugs” at their core; they were just “regular bugs” all along.

But if you treat security vulnerabilities like natural disasters, you would conclude that there’s little point in secure coding practices. The security typhoon will crash onto your shores eventually anyway, so it would seem wiser to just buy firewalls and antivirus software. This sort of “bolt-on” approach to security is dangerous, and it’s why this mentality must be avoided.

OWNING YOUR MENTALITY

Security Determinism isn’t the only valid way to view cybersecurity. Sometimes things do just happen that are outside your control, and security is hard; it’s impossible for any one person to know all of it. And even if, in theory, you could have predicted some disaster in retrospect, that doesn’t mean it was reasonable in practice to have done so at the time. But as I talked about earlier, the point of this philosophy is to be instructive and useful, not strictly true. You can’t control everything, but everything that you do have control over, you have responsibility for.

The security determinist philosophy is equally about a mentality of responsibility and accountability for the things you build. It’s too easy to play the victim in cybersecurity incidents. We use language like “hackers broke the authorization” of a web application when the reality is that the authorization was always broken; it’s just that you only learned about it because of an attack.

I suspect that this doomed mentality can come from an “attackers vs defenders” perspective, where security is seen as a game between red and blue teams. In this framework, it’s easy to conclude that any red team of sufficient quality will be able to defeat the security of your application. And if that’s true, what’s the point of investing in security beyond the bare minimum? However, this fatalist philosophy can lose sight of the fact that the game takes place on a medium: your application. And you get to make the rules in that medium.

As a penetration tester, I can’t supernaturally conjure up vulnerabilities in systems where they don’t exist. I know it may not feel like it sometimes, but all the bugs pen testers find were in the application all along. And each of those bugs could have been prevented with a sufficient amount of planning and hard work. It’s not easy, and you still won’t be perfect, but approaching the problem the determinist way is the first step to getting in front of security issues rather than simply being reactive.

If you’re on a sinking ship, it’s prudent to bail the water out. But if this keeps happening, what you need isn’t a bigger bucket: it’s to stop making boats with holes in them. Initiatives like vulnerability management systems and bug bounty programs are all fine, but they’re essentially just “bigger buckets.” If you want to finally get ahead of security issues instead of just responding to incidents, then you have to stop treating vulnerabilities like natural disasters. The determinist mindset is a way of thinking about application security that leads you in the direction of strong engineering discipline rather than ad hoc, reactive measures.


About the author, Dan Petro

Senior Security Engineer

As a senior security engineer for the Bishop Fox Capability Development team, Dan builds hacker tools, focusing on attack surface discovery. Dan has extensive experience with application penetration testing (static and dynamic), product security reviews, network penetration testing (external and internal), and cryptographic analysis. He has presented at several Black Hats and DEF CONs on topics such as hacking smart safes, hijacking Google Chromecasts, and weaponizing AI. Dan holds both a Bachelor of Science and a Master of Science in Computer Science from Arizona State University.
