The Code Reveals All: Why Secure Code Review Should Be an Integral Part of DevSecOps
As security has become a growing focus within the DevOps community – resulting in the new and expanding field of DevSecOps – developers have been integrating more security testing techniques into the software development lifecycle. One important aspect of security testing that developers should consider is secure code review. Why? Because the source code reveals the entire attack surface, which helps ensure that even edge cases and issues in obscure sections of code are caught. In other words, the code reveals all.
Since developers already know code, they can easily understand and adopt the practice of code review. Developers may already perform some form of code review, such as peer code reviews or pair programming, or use automated code checkers that review code for bad coding practices or code quality issues.
By growing their understanding of what makes software vulnerable and by integrating some simple strategies for secure code review into their DevSecOps lifecycle, developers can continuously deliver more secure software.
Before we begin digging into the components of secure code review, let's review some of the common methods used to understand and test for software vulnerabilities.
Security Testing in DevSecOps
The DevOps lifecycle depicts the iterative nature of modern software development as a sequence of repeating phases. There is a beginning of sorts, but never really an end. And throughout that lifecycle, different activities are performed, including the phases where software is actually designed, created, and tested. It is in these phases that I want to highlight a few important security testing activities. Each is important in its own way and adds different value to the process of creating secure software.
- Threat modeling is a structured technique for identifying, evaluating, and mitigating security risks based on the design or architecture of the software being built. Threat models show the application from the perspective of an attacker with knowledge of the internals (the architecture). The threat model takes into consideration an attacker’s interests, such as the attack surface, attack vectors, and key assets of the application. While threat modeling can be effective and valuable throughout the lifecycle, it is best done during the initial “plan” phase of DevOps, before any code is written, thereby enabling the team to incorporate mitigation strategies for potential vulnerabilities as soon as possible. In iterative agile lifecycles, the threat model can be revisited and updated incrementally with each sprint or iteration.
- Penetration testing is the process of identifying security vulnerabilities by testing a running application remotely. The tester has very limited or no knowledge of the inner workings of the application or its architecture, other than what can be inferred by interacting with the running application. Penetration testing usually incorporates a combination of automated testing using Dynamic Application Security Testing (DAST) tools and manual testing. While penetration testing can be performed any time in the lifecycle that runtime functionality is available, it is typically performed in the “Test” phase, often before a major release of new functionality.
- Secure code review is the process of identifying security vulnerabilities through inspection of an application's source code. The tester will typically use automated code scanning tools, which identify well-known and well-understood code patterns that lead to vulnerabilities, in conjunction with targeted manual code inspection focused on security-critical components. Some form of secure code review can be performed in any phase of the DevOps lifecycle in which functional source code is available, even down to the individual class or method. Developers can leverage manual code review and tools within their IDE to review code for issues as they write it. Additionally, automated code scanning is beginning to be used during continuous integration to perform validation checks before pull requests are merged into the main branch.
There are advantages and disadvantages to each of the above testing techniques. Threat modeling can uncover systemic issues at the design level so they can be mitigated early. Penetration testing is well understood by most security professionals, though not necessarily by developers; its results can be obtained quickly and are usually easy to demonstrate as exploitable.
Secure code review offers one distinct advantage in that the source code reveals all. The entire attack surface is exposed by the source code, making it possible to identify issues in edge cases and hard-to-reach states.
So, What Really Is Secure Code Review?
Put simply, secure code review is reviewing code for security issues. Okay, I know that's unfair – enough with the circular definitions already. Let's first talk about "code review" in the more general sense.
Most developers should be familiar with some form of code review, whether through formal code inspections (rare), "desk checking" (a form of self-review), pair programming, peer reviews, or something similar. These techniques are typically intended to find code correctness issues (aka bugs) or code quality issues, or to ensure adherence to organizational coding style standards – potentially all of the above.
Secure code review has similar purposes. Its primary goal is to identify security bugs; it may also include ensuring adherence to organizational secure coding standards, assuming such standards exist.
But how do you start such a review? Do you just start reading the source code line by line, hoping you'll come across something that looks suspicious? You could, but for most people that quickly leads to exhaustion and a lack of focus. It also isn't very effective, especially if you don't know what counts as "suspicious" or how to determine whether something merely suspicious is actually vulnerable.
Going into great depth on performing a secure code review is beyond the scope of this article. Instead, I'll start by defining a few terms and then discuss a commonly accepted strategy for approaching a secure code review. First, some definitions.
Static code analysis is analysis of software without executing it. It is commonly performed on source code, though sometimes on object code, and is usually automated. It has many uses: type checking, style checking, program understanding, program verification, bug finding, and more.
Static analysis security testing (SAST) is static code analysis that is applied specifically to the problem of identifying conditions within software that indicate a security vulnerability. The acronym SAST is commonly used when referring specifically to automated security “code scanning” tools.
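To make the idea of automated "code scanning" concrete, here is a minimal sketch of static analysis in Python: it parses source into an abstract syntax tree and flags calls to eval() and exec() without ever executing the program. This is a toy illustration of the principle, not a stand-in for a real SAST tool, and the two-function "rule set" is my own assumption chosen for simplicity.

```python
import ast
import sys

# Illustrative "rule set": function calls we consider suspicious.
DANGEROUS_CALLS = {"eval", "exec"}

def find_dangerous_calls(source: str, filename: str = "<input>"):
    """Walk the parsed syntax tree and report calls to the functions above."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((filename, node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path, encoding="utf-8") as f:
        for fname, line, call in find_dangerous_calls(f.read(), path):
            print(f"{fname}:{line}: suspicious call to {call}()")
```

Real SAST tools apply far richer rules, taint tracking, and language-specific analysis, but the underlying principle is the same: reason about the code without running it.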
Secure code review typically involves a combination of automated SAST tools and manual code inspection focused on high-risk or security-critical components of the source code. It is not an exhaustive, manual, line-by-line review of every single line of code in the application – not all code "matters," or at least not much from a security perspective. Nor is it just running an automated SAST tool to scan the code and generate a report of findings.
With that out of the way, let’s review at a high level a commonly accepted methodology for performing a secure code review in three “easy” steps.
- Run an automated scan of the codebase with a SAST tool to identify the "low-hanging fruit": issues that are well known and well understood. Triage the results of this scan by manually inspecting the code to determine whether the reported issues are valid and exploitable. This is easier said than done, and it involves applying the next two techniques to judge the validity of the SAST findings.
- Perform a bottom-up analysis, sometimes known as back-tracing or data flow analysis. This usually starts by identifying a set of potential vulnerabilities, or "candidate points": code that matches a well-known, often-vulnerable pattern, such as a SQL statement execution in the case of SQL injection. These candidate points might be identified by a SAST tool, through pattern matching (think grep), or by manual inspection looking for dangerous function or API calls, stored secrets, and the like. Once a candidate point is identified, the analyst traces backward, through one or more code modules, to find data flows into it from untrusted input. If no intermediate code along that data flow path mitigates the issue, the candidate point likely represents a vulnerability. (See the first sketch following this list.)
- Conduct a top-down analysis, also known as forward-tracing or "code comprehension." That last term is important. This part of secure code review requires manual review: a human who can read and understand complex control and logic flows in the application, and who can puzzle through what the code is doing and not just what it looks like. These are things an automated SAST tool simply cannot do effectively. In this approach, the analyst starts with specific entry points, usually in security-critical code modules or sections: authentication and authorization code, data validation code, user account management, data encryption, and high-risk business logic and flows. Starting at the entry to these components, the analyst drills down into the functionality of the code to identify control or logic flows that an attacker could circumvent or manipulate to their advantage. This aspect of code review takes a significant amount of time, typically reveals some of the highest-risk issues, and requires the greatest amount of expertise in both security and code understanding. No bots allowed. (See the second sketch following this list.)
- Repeat until done. (Okay, that was four steps.)
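To make the bottom-up approach concrete, here is a small Python sketch. The function and table names are hypothetical, and sqlite3 simply stands in for whatever database API the application actually uses. The string-concatenated query in the first function is the candidate point; the reviewer's job is to trace username backward to see whether it can arrive from an untrusted input without being validated or otherwise mitigated along the way.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Candidate point: a SQL statement built by string concatenation.
    # Back-tracing asks: can `username` reach this line from an untrusted
    # source (an HTTP parameter, a file, a message queue) unmitigated?
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # Same data flow, but the sink treats `username` strictly as data,
    # mitigating SQL injection regardless of where the value came from.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

If every path to the candidate point looks like the second function, the finding can be closed; if any path looks like the first, there is likely a real vulnerability to report.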
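And as a sketch of what top-down analysis catches, consider this hypothetical account-management function. No dangerous API is called and no tainted data reaches a risky sink, so an automated scanner has little to flag, yet the control flow lets any caller change any user's email address.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    role: str
    email: str = ""

def change_email(current_user: User, target: User, new_email: str) -> None:
    if current_user.role == "admin":
        target.email = new_email
        return
    # Logic flaw: non-admin callers fall through to the same update with no
    # check that they actually own the target account. Only a reviewer who
    # understands the intended authorization flow will spot this.
    target.email = new_email
```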
But when is secure code review done? There's no easy answer to that, but if you stick to a determined, methodical, and focused approach using the above techniques – rather than randomly paging through code hoping you'll find something interesting – you'll know what "done" looks like.
Next, let’s move on to discussing some strategies for how secure code review can be adopted in DevSecOps, or really any development methodology.
Strategies for Adopting Secure Code Review in DevSecOps
I’m not going to be able to turn everyone reading this article into a skilled secure code review specialist in a few pages. Nor can I give you a stepwise checklist of things to do to integrate SAST tooling into your CI/CD pipeline, thereby enabling you to instantly achieve DevSecOps nirvana. I wish it were that simple, but one size doesn’t fit all and your mileage may vary.
Rather, I'll present a few strategies that I believe can help you adopt secure code review in DevSecOps (or really whatever methodology you use in your SDLC). I'll leave the tactical implementation of these strategies to you. These strategies, like the phases of DevOps, should be thought of as a lifecycle: there may be a beginning, but there should not be an end. They should be repeated, enhanced, and refined as more is learned and as development processes and developers themselves change. One strategy often builds on and informs the others. So here we go…
Develop Secure Coding Standards
Secure coding standards (or guidelines) are meant to provide developers with a baseline they can use to guide the development of more secure software – the rules of the road, if you will. There are many challenges to this. Most organizations use multiple programming languages and frameworks, so it can be difficult to develop secure coding standards for each, and because those languages and frameworks change, the standards must be regularly updated. A number of resources can support this effort, such as the OWASP Code Review Guide, the OWASP Top 10 list, the Common Weakness Enumeration (CWE) Top 25, the CERT Coding Standards, and more. The development of secure coding standards should be driven and owned by development, with input and guidance from security.
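As an illustration of what a standard entry might look like, the sketch below pairs a rule with noncompliant and compliant Python code. The rule wording and examples are my own, not taken verbatim from any of the resources above.

```python
import random
import secrets

# Illustrative rule: "Use a cryptographically secure source of randomness
# for security-sensitive values such as password-reset tokens."

def reset_token_noncompliant() -> str:
    # random is a predictable PRNG and unsuitable for security tokens.
    return str(random.random())

def reset_token_compliant() -> str:
    # secrets draws from the operating system's CSPRNG.
    return secrets.token_urlsafe(32)
```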
Education
Developers should be trained on secure coding practices through a combination of regular formal training, self-study, and research. Performing secure code review, with some expert guidance, can itself be an effective training tool; learning by doing is often the best way. Again, use the wealth of material from OWASP as a resource, and seek out a training partner that can keep your development teams fresh with formal secure coding training.
Threat Model
Although we have touched only briefly on threat modeling so far, it is not only an important part of ensuring that software is designed and developed securely but also a great way to achieve several other goals. Developers who are involved in the threat modeling process will better understand the threats to their application, which – when coupled with an understanding of secure coding practices, vulnerabilities, and anti-patterns – will help them develop a "security mindset" and think about how their software might be compromised. Threat modeling is also an effective tool for focusing a secure code review on the riskiest parts of an application.
Socialize Secure Code Review with Developers
Use “checklists” or guidelines for what code needs to be reviewed. Not all code matters as much from a security perspective. Remind developers that they should be looking for things beyond just code quality, clarity, or correctness. They should look for security issues, often in non-obvious places, using resources like the organization’s secure coding standards and the OWASP Code Review Guide.
Developers should incorporate secure code review into their development routine. Encourage them to review a little bit of code at a time, as it is being developed, by incorporating security checks into "desk checking" (self-review), pair programming, or other code review approaches already in place. This is also a good time to introduce SAST tools, using the IDE plug-ins for those tools within the developer's existing environment where they are most comfortable – but only after some education has taken place and, preferably, secure coding standards are available. Developers need to know what they are looking for before they turn any automated tool loose to look for it.
Automate Slowly and Cautiously
Ultimately, DevOps is all about tooling and automation. But tools, especially tools that automate security testing, are not a panacea and can in fact be a bit of a Pandora's box. Automated SAST tools, especially without any "tuning" or customization of their rulesets, err on the side of reporting everything that looks suspicious according to their rules and heuristics. For this reason, expect a lot of false positives. Weeding through those and validating them is time-consuming and aggravating, and it may lead to SAST tools becoming shelfware.
Don't think that you can just throw a SAST tool into your CI/CD pipeline and you're good – you will end up blocking a lot of code commits due to false positives. When carefully tuned for specific, well-known, and well-understood issues, however, SAST tools can be applied effectively in this context, as sketched below.
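One way to apply that tuning is to fail the build only on findings for rules the team has already validated and treat everything else as a warning. The sketch below assumes a hypothetical JSON report format and rule names; real tools have their own output schemas, and many offer baseline or suppression features that can make a custom gate like this unnecessary.

```python
import json
import sys

# Rules the team has validated as reliable enough to block a merge.
BLOCKING_RULES = {"sql-injection", "command-injection", "hardcoded-secret"}

def gate(report_path: str) -> int:
    with open(report_path, encoding="utf-8") as f:
        # Assumed format: a list of {"rule", "confidence", "file", "line"}.
        findings = json.load(f)
    blocking = [
        finding for finding in findings
        if finding["rule"] in BLOCKING_RULES and finding.get("confidence") == "high"
    ]
    for finding in findings:
        status = "BLOCK" if finding in blocking else "WARN"
        print(f'{status}: {finding["rule"]} at {finding["file"]}:{finding["line"]}')
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```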
Remember that there’s only one way to eat an elephant: one bite at a time. Start slowly with automation, understand the strengths and limitations of the tools, and accumulate and build on small successes to increase the usage and value of the chosen tools.
Tracking and Measurement
If you don’t track it, you can’t control it. I encourage you to track vulnerabilities like any other bugs. Your security team may want insight and tracking specifically on vulnerabilities, but the development team should also track and backlog them like any other bug, as that team is ultimately going to be responsible for fixing them. Use risk analysis to prioritize security bugs in the backlog. Critical- or high-severity security bugs should never be left there for very long.
I also believe that an effective strategy is to develop your own "Top 10" (or eleven, or fifteen, or twenty) security bugs. These are the issues you believe are most prevalent across your application portfolio – based on previous testing or other information – or that pose the highest severity or risk. Use whatever measure fits. Tune your tooling toward these issues, and use them to inform and adjust your secure coding standards, training, and checklists – literally everything.
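Deriving that list can be as simple as counting historical findings by weakness category. Here is a minimal sketch; the finding records are made up, standing in for whatever your bug tracker or scanning tools actually export.

```python
from collections import Counter

# Made-up records; in practice these would come from your bug tracker
# or SAST tool exports.
findings = [
    {"category": "CWE-79: Cross-site Scripting", "severity": "high"},
    {"category": "CWE-89: SQL Injection", "severity": "critical"},
    {"category": "CWE-79: Cross-site Scripting", "severity": "medium"},
    {"category": "CWE-798: Hard-coded Credentials", "severity": "high"},
]

def top_n(records, n=10):
    """Rank weakness categories by how often they appear."""
    return Counter(r["category"] for r in records).most_common(n)

for category, count in top_n(findings):
    print(f"{count:3d}  {category}")
```

Weighting by severity or risk rather than raw counts is just as valid; as noted above, use whatever measure fits.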
Leverage Outside Expertise
Use an expert – there’s always a need for an in-depth, point-in-time, external secure code review by an independent expert. Developers can eventually make good secure code reviewers; after all, it’s their code. But they are also much more invested in that code and may have a lot of preconceptions and misconceptions. A second or third set of eyes can make a huge difference.
Wash, Rinse, Repeat
And oh yeah, lest we forget, security is a journey, not a destination. All of these strategies should be continuously revisited, revised, and enhanced. Learn from mistakes, celebrate successes, and deliver more secure code.
Conclusion
I’m a big fan of code review, whether for identifying security bugs or “regular” bugs. When I was in college, I worked in the computer center as a student “consultant,” where my job was to help other students figure out bugs in their code – usually just syntax errors and the like when their code wouldn’t compile. I wasn’t supposed to help them with “logic flaws”; that would be like cheating on their programming assignment. I realized then, as I do now, that that’s the best way to fix bugs: read the code, understand the code, and fix the code. After all, the code reveals all!
Join author Chris Bush as he goes a little deeper into the secure code review process in his webinar, “Cracking the Code: Secure Code Review in DevSecOps.”