History shows that people are unlikely to develop or purchase secure software by accident. Back in the Dark Ages (think the 1990s), people built software and then tried to add security. This was rarely successful and frequently expensive.
Progress, of a Sort
As an industry, we’ve moved on to more efficient and more effective strategies, like building security in from the beginning. Developers talk to security folks earlier, and many projects identify security requirements before the design is complete.
Progress, yes, but not as much as one would hope, because most of us are just bolting on security earlier.
Security requirements typically cover avoiding well-known vulnerabilities (such as the OWASP Top Ten), using security technologies (such as encryption or authorization) in specific contexts, and complying with regulations (such as HIPAA). If you are in an exceptionally security-conscious organization, they may even include overall security goals (two employees must work together to alter financial records, for example).
Security Is Holistic
There is just one problem: Many security issues — especially those pesky design issues that are so expensive to fix — are caused by defects in other requirements.
As an example, consider the above image of an exterior door. It has a lock, so we will assume at least some intent to secure. It has glass panes, which are a potentially ill-advised feature depending on the surrounding environment. Most suspiciously, it has externally accessible, and easily removable, hinge pins, perhaps because the layout of the room beyond makes it inconvenient for the door to open inward. This door is a classic example of adding a single explicit security requirement for a common security technology (a lock) and then ignoring security entirely while identifying the rest of the requirements (and designing and building the solution).
The idea that you could take a specification written without regard to security, add a few security requirements, and create a secure system is as inaccurate as the idea that you could take an implementation written without regard to security, add a few security features, and create a secure system.
What you need are secure requirements, not just security requirements.
What’s the Difference?
Requirements are statements about a system that establish a clear, common, and coherent understanding of what the system must do.
Security requirements are individual design constraints that explicitly discuss security. Security requirements describe security features that must be included, how security technology the system uses must be configured, security tests that must occur, or severity thresholds for security bugs that must be fixed before launch.
Secure requirements are a collection of requirements that also establish a clear, common, and coherent understanding of what the product must not allow. Specifications that are secure typically include more information than usual about business rules, edge cases, data formats, and error handling, as well as classic security requirements like those previously mentioned.
If your organization has never written secure requirements before, try the following:
1. Document the application’s intended behavior, including the business rules the system must enforce.
For example, Grandma would like to post to her blog. She wants everyone from her grandchildren to her state representative to read and comment on her posts. She would like to be able to edit or delete her own posts, and to approve comments before they are published.
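Business rules like these translate directly into enforceable checks, which is exactly the information developers need written down. As a rough sketch only — the `Post`, `Comment`, and function names below are hypothetical, not from any particular blog platform — Grandma's rules might look like:

```python
# Hypothetical sketch of Grandma's business rules as authorization checks.
# All names (Post, Comment, can_modify_post, approve) are illustrative.

from dataclasses import dataclass

OWNER = "grandma"

@dataclass
class Post:
    author: str
    body: str

@dataclass
class Comment:
    author: str
    body: str
    approved: bool = False  # comments start unapproved

def can_modify_post(user: str, post: Post) -> bool:
    # Only Grandma may edit or delete her own posts.
    return user == OWNER and post.author == OWNER

def is_visible(comment: Comment) -> bool:
    # Anyone may read posts; comments appear only after approval.
    return comment.approved

def approve(user: str, comment: Comment) -> None:
    # Only Grandma may approve comments for publication.
    if user != OWNER:
        raise PermissionError("only the blog owner may approve comments")
    comment.approved = True
```

Writing the rules this explicitly in the requirements is what lets a reviewer later ask, "and what stops an anonymous commenter from calling `approve`?"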
2. Describe the deployment environment and any interfaces the system must use or provide.
Basically, write the requirements as you normally would. Agile development teams may need to create and maintain a library of documentation for external interfaces — and an explicit list of business rules — to store information they would not otherwise document.
Grandma would like to post to her blog via two interfaces: a web form, and email from her phone to a certain address. She is experimenting with Linux, so she would like to host her blog on her own server.
3. Define the system’s security objectives.
Since you probably don’t have the resources to stop every attack, prioritize what you want to protect and set overall security goals that express the threshold for what you plan to defend against.
Grandma #1 is a retired school teacher whose most inflammatory posts concern her neighbors’ recycling habits. A good security objective for this Grandma’s blog would be: The system should thwart attempts by anonymous readers and commenters to create, update, or delete Grandma’s blog posts.
Grandma #2 is a vocal critic of the local police department, which she has publicly accused of links to organized crime. The above security objective still applies, but another appropriate security objective is: The system should thwart attempts by local law enforcement or organized crime to prevent Grandma from posting (at least via technical means; we will assume intimidation is out of scope).
4. Review each requirement to determine whether it is compatible with the security objectives.
Because nothing in the requirements indicates Grandma #2 has attempted to be anonymous, we should assume local law enforcement and organized crime both know where this firecracker of a Grandma lives. Her desire to run her own server could allow her opponents to confiscate or steal her equipment, thereby preventing her from posting. She might do better to host a virtual server in the cloud, where a judge’s consent will be required to shut her server down, and she should certainly create off-site backups in case she needs to migrate her site after the original becomes unavailable.
5. While examining each requirement, add anything else the developer needs to know to meet the security objectives.
So far, our requirements are missing some security-relevant information for either Grandma. For example, because email is easily spoofed, but we don’t want arbitrary people other than Grandma to post, the email interface to post to Grandma’s blog needs authentication. A From: header containing Grandma’s email address is insufficient. If the authentication method is not specified in the requirements, the developers will need to select one. That means they need to know whether Grandma can edit her email headers to include a password, whether she has a PGP or S/MIME key, and whether she is intrigued by DMARC. And whether or not the requirements specify the authentication mechanism, the developers need to know what to do with email that came from Grandma’s email address but shows no other signs of being from Grandma.
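To make the point concrete, here is one illustrative option among those listed: a shared secret that Grandma's phone attaches to each message as an HMAC tag. This is a sketch under assumptions (the secret, and the idea of carrying the tag in a custom header, are inventions for this example, not a recommendation over PGP, S/MIME, or DMARC), but it shows why a bare From: header proves nothing:

```python
# Hypothetical sketch: authenticate a post-by-email message with an HMAC tag
# over the message body, rather than trusting the From: header. The shared
# secret below is an illustrative assumption.

import hashlib
import hmac

SECRET = b"a-long-random-secret-grandma-keeps-on-her-phone"

def sign(body: str) -> str:
    # Tag Grandma's phone would compute and attach to the outgoing email.
    return hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()

def is_authentic(body: str, tag: str) -> bool:
    # Constant-time comparison; a forged From: header alone cannot pass
    # because the forger does not know SECRET.
    return hmac.compare_digest(sign(body), tag)
```

Note that the requirements still have to say what happens when `is_authentic` returns False: bounce the message, quarantine it for Grandma's review, or silently drop it.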
After you’ve extended several sets of requirements to cover security, you will learn to add security-relevant details to all your requirements as you write them, and bolt-on security will be gone for good.
Now, that’s progress!
Stay tuned to find out how application security is like shopping for pants.
Read the next installment, "Security Should Be Application-Specific," here.