
Continuous Testing Finds Major Risks Under the Surface



Given my experience as a pen tester, bug bounty hunter, and now as a Cosmos Operator, I’m always fascinated by how much focus our industry puts on emerging threats like zero days and new CVEs. While we absolutely need to be aware of those new threats, there are many other easily exploitable vulnerabilities testers can focus on. When we spend most of our time and resources simply determining whether the latest threat affects our client or target, we risk missing the more widespread but less newsworthy issues that could do far more damage to the organization.

Of course, we can’t ignore those threats, but there are only 24 hours in a day, and security budgets are more strapped than ever. To solve this problem, most organizations employ tools or services that automatically identify these emerging threats on their known perimeter, or, if they’ve gone down the attack surface monitoring route, across the entire attack surface, including assets the client may not have previously known about.

In theory, this automation should solve the problem of the constant barrage of emerging threats, freeing up resources of security teams to focus on other areas of risk, but it doesn’t always work out that way.

Anyone who’s worked in security knows that tools can add noise: more information that distracts a security team from what really matters. If you have various tools scanning your attack surface and a bug bounty program hacking away at your perimeter, your security team may be stuck manually sifting through reports to figure out which threats to prioritize for the biggest impact on the security of the company. TL;DR: more tools doesn’t mean better intel.

What’s the solution? You’ve got the automation, tooling, and the internal staff, but they’re often overwhelmed or maybe just can’t keep up. To solve that issue, we created a service that combines an automation platform with a team of experts to investigate threats for each client and analyze the real impact within the context of their business.


The context of their business is important. Since our Cosmos service is discovering new assets and risks on our clients’ perimeters 24/7, we get the historical knowledge of previous issues and can track changes in the environment. That means we often see relationships between discovered vulnerabilities that help us correlate related issues that might be hidden within the attack surface.

Here’s an example of a recent critical finding in a client’s attack surface that was only loosely related to an issue flagged by our automation platform. In other words, while most tools or services may have found the initial vulnerability that our platform found, they likely wouldn’t have pushed the finding to maximum impact without the unique knowledge that comes with continuously testing the client’s perimeter.


While continuously testing client perimeters, we often obtain what we call “loot” during investigations – data such as usernames, email addresses, credentials, internal DNS information, or anything that may help us understand the client’s environment and attack surface better. This data can be obtained during different stages of our investigations, but it is most often acquired during post-exploitation activities: enumeration performed after we gain unauthorized access to client infrastructure. This loot, combined with tribal knowledge gained from continuously testing client assets, gives us unique insight into common issues and exposures that may exist within various client environments.

During a recent investigation, a Cosmos Operator accessed a WordPress backup for a subsidiary blog of a large financial services company. Within the recovered backup was a database dump containing plaintext credentials for an employee. Given our previous experience with credential reuse, we saw the employee’s credentials as a potential entry point to other systems the company is using or has used in the past, meaning even more sensitive data could be within our reach. Using the recovered credentials, the Operator identified several instances of credential reuse to increase the impact of the finding and was able to access several web applications owned by the subsidiary.
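The credential-reuse check described above can be sketched as a simple harness. Everything here is hypothetical: the service names, the `authenticate()` stub, and the account data are illustrative stand-ins; in a real investigation each check would be a careful, authorized login attempt against the application itself.

```python
# Hypothetical sketch of checking one recovered credential pair against a
# list of candidate services. authenticate() is a stand-in for a real login
# attempt; here it consults a simulated account table instead of the network.

def authenticate(service: str, username: str, password: str,
                 known_accounts: dict) -> bool:
    """Return True if the (service, username) pair accepts this password."""
    return known_accounts.get((service, username)) == password

def find_credential_reuse(username: str, password: str,
                          services: list, known_accounts: dict) -> list:
    """Return every service that accepts the recovered credential pair."""
    return [svc for svc in services
            if authenticate(svc, username, password, known_accounts)]

# Simulated environment: the same password was reused on two of three apps.
accounts = {
    ("blog.example.com", "jdoe"): "Winter2023!",
    ("mail-marketing.example.com", "jdoe"): "Winter2023!",
    ("hr.example.com", "jdoe"): "different-password",
}
hits = find_credential_reuse(
    "jdoe", "Winter2023!",
    ["blog.example.com", "mail-marketing.example.com", "hr.example.com"],
    accounts)
print(hits)  # the two services where the password was reused
```

Each hit widens the blast radius of the original leak, which is exactly why a single plaintext password in an old backup can matter so much.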

Of these web applications, one was an email marketing platform that allowed for the creation of custom emails that could be sent to all of the company’s subscribers and clients, totaling 1.7 million subscriptions. An attacker with access to this application could create mass email campaigns to perform a variety of attacks, including credential phishing or fraud schemes against the company’s subscribers and clients. Because the emails would be sent from the company’s email servers, using a legitimate company address that recipients had previously subscribed to, the likelihood of success would increase substantially.

We immediately informed the affected client of our findings and walked through the proof-of-concept attacks with them so they understood why we prioritized removing the backup, updating passwords, and restricting password reuse.

Several weeks later, while enumerating client-owned cloud ticketing applications, an instance from the same subsidiary caught the Operator’s eye. The Operator recalled the prior issue of credential reuse and confirmed that the application could be accessed with the same credentials retrieved during the previous investigation, as the employee had not changed their password for this account. The reuse turned out to be broader than originally known, which got the Operator wondering just how much more could be accessed with those same credentials.

With access to the company’s ticketing system, the Operator began searching for sensitive data that could provide additional access, and they identified a ticket related to auditing the company’s firewall rules. Attached was the firewall configuration file, which included the firewall’s IP address, administrator password hashes, and encrypted VPN user passwords. Research into this specific firewall revealed that it was vulnerable to an issue where the encrypted passwords could be decrypted, as the same hard-coded encryption key was used in all models of the firewall prior to a certain version.

The Operator:

  • Obtained a virtual machine of the same model firewall,
  • Added the encrypted passwords to the local configuration, and
  • Successfully decrypted the passwords for all the administrator and VPN users recovered from the configuration file.
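The steps above hinge on the hard-coded-key flaw: because every device ships with the same embedded encryption key, ciphertext lifted from one firewall can be decrypted on any other instance, including a local VM. The sketch below is a deliberately simplified illustration of that flaw, not the actual firewall’s algorithm (the vendor and cipher are not named here); the key value and the toy XOR construction are assumptions for demonstration only.

```python
# Simplified illustration of the hard-coded-key flaw. The key and the toy
# XOR "cipher" are illustrative, not the real firewall's scheme.
import hashlib
from itertools import cycle

HARDCODED_KEY = b"same-key-on-every-device"  # illustrative value

def keystream(key: bytes, length: int) -> bytes:
    # Derive a repeating keystream from the embedded key (toy construction).
    digest = hashlib.sha256(key).digest()
    return bytes(b for b, _ in zip(cycle(digest), range(length)))

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# "Encrypt" a VPN password on the victim's firewall...
ciphertext = xor_crypt(b"vpn-user-password", HARDCODED_KEY)
# ...then decrypt it on a completely different machine with the same key,
# as the Operator did inside a local VM of the same firewall model.
recovered = xor_crypt(ciphertext, HARDCODED_KEY)
print(recovered.decode())  # vpn-user-password
```

The fix in the real world is per-device keys (or proper password hashing); the moment a single shared key leaks or is extracted from firmware, every deployment is exposed at once.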

The credential sets were confirmed to be valid: one granted full administrator-level access to the firewall, and all of them granted VPN access to the company’s internal network. The company’s security team was briefed on our findings and level of unauthorized access, and the Operator was asked not to push the internal network access any further. However, an attacker with this level of access could have compromised hosts via services only accessible internally and possibly even pivoted into the parent company’s internal network.

As the Operator continued to review the company’s ticketing system, they noticed the LastPass password manager mentioned in several tickets and attempted to authenticate with the same recovered credentials used previously. The attempt was successful, and the assessment team was able to access the employee’s LastPass vault, which included all of the employee’s personal and work account passwords.

The Operator quickly identified credentials for the company’s Amazon Web Services (AWS) account and confirmed that authentication could be achieved with root or administrator-level permissions. This allowed access to create, delete, and modify all resources within the environment. While most of the resources were used for development, the Operator identified a storage bucket containing a production database for the company’s main website, which held PII and plaintext passwords for thousands of the company’s customers. Within the same storage bucket, the Operator identified configuration files containing various keys, secrets, and credentials. Having sufficiently demonstrated the impact of the credential reuse, the Operator concluded the investigation and reported all findings to the client.


The access acquired and the critical findings reported in this article would likely have flown entirely under the radar if the client had been relying on most automated attack surface management solutions. While automation is absolutely required these days to keep up with modern attack surfaces, you can’t rely on those findings alone to protect your perimeter. And your internal team is often time-restrained and overburdened, so taking on the role of blue team on top of everything else is usually not ideal.

Fortunately, our Cosmos team acts as an extension of your security team. Working together, we can identify emerging threats that affect your perimeter while also finding the issues that may not be in the news cycle, protecting everything that drives your business forward.

Learn more about what Cosmos has found for existing clients and the impact those findings have had on their businesses.



About the author, Nate Robb


Nate Robb is a Security Associate at Bishop Fox, where he works as an Operator for Cosmos (formerly CAST). Prior to coming to Bishop Fox, he held roles as a security consultant and spent time as a full-time bug bounty hunter, where he worked to secure Fortune 500 companies, state and federal agencies, and small and medium-sized businesses.

