
How a Common Misconfiguration Led to Over 30 Critical Findings



In working with clients to continuously test their attack surfaces, I encounter high- and critical-risk vulnerabilities on a daily basis, many of which would be common to any penetration test. Because our continuous testing window extends for months or even years, however, every new exposure adds to the breadth of knowledge about each client's unique attack surface. For Continuous Attack Surface Testing (Cosmos) clients, a newly identified vulnerability is treated as a potential pivot point to additional data or knowledge that could lead to the identification of even more critical or widespread risks.

IDENTIFYING AN EXPOSED DIRECTORY DUE TO MISCONFIGURATION

During a recent investigation, the CAST team identified a directory listing on a misconfigured web server owned by the subsidiary of a large financial services company. While directory listings usually expose only public web application files, developers occasionally leave behind artifacts when setting up an application or configuring a web server, and those artifacts can give attackers a head start.
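
For context, an exposed listing like this is straightforward to flag automatically. The sketch below, written against a hypothetical URL, checks a response for the index-page markers that common web servers emit; it is an illustration of the check, not the tooling we actually run.

```python
# Minimal sketch: flag a likely directory listing by checking for the
# index-page markers that common web servers emit (Apache, nginx, IIS).
# The URL below is hypothetical; the misconfigured host is not named here.
import requests

LISTING_MARKERS = ("Index of /", "Parent Directory", "[To Parent Directory]")

def looks_like_directory_listing(url: str) -> bool:
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200 and any(m in resp.text for m in LISTING_MARKERS)

if __name__ == "__main__":
    print(looks_like_directory_listing("https://app.example.com/static/"))
```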

In this case, the exposed directory contained source code for web applications belonging to the organization. The team retrieved the exposed source code and identified a trove of interesting data, including internal database credentials, API keys, access tokens, and references to additional company-owned hosts and their corresponding application paths. This impact-driven approach lets us leverage discovered vulnerabilities and the resulting post-exploitation activity to build intimate knowledge of the target's attack surface.
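
To give a sense of what that triage can look like, here is a rough sketch of searching a recovered source tree for secret-shaped strings. The directory name and regexes are illustrative assumptions only; a real engagement relies on purpose-built secret-scanning tooling with far broader rulesets.

```python
# Rough sketch of triaging recovered source for secrets with a few
# illustrative regexes. The directory name and patterns are examples,
# not the client's actual layout or our production ruleset.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|access[_-]?token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "db_connection_string": re.compile(r"(?i)(mysql|postgres(?:ql)?|mssql)://[^\s'\"]+"),
}

def scan_source_tree(root: str):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                yield path, name, match.group(0)

if __name__ == "__main__":
    for path, name, value in scan_source_tree("./recovered-source"):
        print(f"{path}: {name}: {value[:40]}")
```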

INVESTIGATION UNCOVERS ADMIN CREDENTIALS AND ACCESS TOKEN

At this point, the investigation split into multiple directions, as investigations often do. The graphic below provides a simplified view of our workflow, and the accompanying text goes into more detail:

[Image: The investigation splits into multiple directions]



Immediately, Operator A:

  • Correlated the exposed source to a live web application, created a wordlist from the source files, and through directory enumeration, identified an administrator endpoint that could be accessed without authentication (a sketch of this step follows this list).

  • With the source code for reference, the Operator identified several endpoint parameters with insufficient input filtering and found that the endpoint was vulnerable to SQL injection.

  • With the identified injection, the Operator extracted various data, including user password hashes and authentication keys that allowed for additional web application access.

  • In addition to obtaining database records and compromising access keys for additional access, an attacker could compromise the integrity of the database via modification or deletion of data as the privileged database user.
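
The sketch below illustrates the first bullet above: harvesting candidate paths from recovered source files and probing the live application for endpoints that respond without authentication. The base URL, path regex, and response handling are assumptions made for illustration rather than the client's actual details or our production tooling.

```python
# Sketch of building a wordlist from recovered source and probing a live
# application for unauthenticated endpoints. The base URL and path regex
# are hypothetical stand-ins.
import re
from pathlib import Path

import requests

PATH_PATTERN = re.compile(r"['\"](/[A-Za-z0-9_\-/.]{2,})['\"]")

def build_wordlist(source_root: str) -> set:
    paths = set()
    for f in Path(source_root).rglob("*"):
        if not f.is_file():
            continue
        try:
            paths.update(PATH_PATTERN.findall(f.read_text(errors="ignore")))
        except OSError:
            continue
    return paths

def probe(base_url: str, paths: set):
    for path in sorted(paths):
        resp = requests.get(base_url + path, timeout=10, allow_redirects=False)
        # A 200 with no redirect to a login page is worth a manual look
        if resp.status_code == 200:
            yield path, len(resp.content)

if __name__ == "__main__":
    wordlist = build_wordlist("./recovered-source")
    for path, size in probe("https://app.example.com", wordlist):
        print(f"200 {size:>8} {path}")
```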

Simultaneously, another Operator (Operator B):

  • Identified an access token from the exposed source code that granted administrator privileges to a separate web application.

  • With administrator access, the Operator found that the information for over 2,000 users could be exported, including password hashes, some of which were recovered via a hash-cracking attack (a sketch of the dictionary-attack idea follows this list). An attacker with access to user password hashes could likely recover additional cleartext passwords and take advantage of credential reuse to pivot to other applications and services.

  • Having demonstrated the impact of the directory listing and exposed source code, the team reported the identified findings to the client.
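
For readers unfamiliar with hash cracking, the following sketch shows the dictionary-attack idea in miniature. The post does not disclose the hash format involved, so unsalted SHA-256 is assumed purely for illustration; in practice this is done with dedicated tools such as hashcat and much larger wordlists.

```python
# Illustrative-only dictionary attack against recovered hashes. Unsalted
# SHA-256 is an assumption for the example; the real hash format on this
# engagement is not disclosed in the post.
import hashlib

def crack(hashes: set, wordlist_path: str) -> dict:
    recovered = {}
    with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.rstrip("\n")
            digest = hashlib.sha256(candidate.encode()).hexdigest()
            if digest in hashes:
                recovered[digest] = candidate
    return recovered

if __name__ == "__main__":
    exported = {hashlib.sha256(b"Winter2021!").hexdigest()}  # stand-in data
    print(crack(exported, "wordlist.txt"))
```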

CONTINUOUSLY LEARNING ABOUT NEW ATTACK VECTORS AND POTENTIAL RISKS

At this point, the team’s knowledge of the attack surface had grown with every new exposure. Additional subdomains were extracted from the previously exposed data and fed back through the CAST process, as depicted in the following image:

[Image: Subdomains extracted from the previously exposed data and fed back through the CAST process]
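
As a rough illustration of that feedback step, a sketch like the following could pull candidate subdomains out of recovered source and configuration files so they can be fed back into discovery; example.com stands in for the client's real domains.

```python
# Sketch of extracting candidate subdomains from recovered files so they
# can be fed back into attack surface discovery. The base domain and
# directory are hypothetical.
import re
from pathlib import Path

def extract_subdomains(source_root: str, base_domain: str) -> set:
    pattern = re.compile(r"[A-Za-z0-9][A-Za-z0-9\-.]*\." + re.escape(base_domain))
    found = set()
    for f in Path(source_root).rglob("*"):
        if not f.is_file():
            continue
        try:
            found.update(pattern.findall(f.read_text(errors="ignore")))
        except OSError:
            continue
    return found

if __name__ == "__main__":
    for host in sorted(extract_subdomains("./recovered-source", "example.com")):
        print(host)
```
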
  • Operator A began open-source information gathering and discovered an authentication key for one of the company’s web applications exposed within search engine results. With authenticated access to the application, the Operator tested and identified functionality that was vulnerable to SQL injection; however, data could not be extracted due to complications within the injected SQL query.

  • Meanwhile, Operator B, through review of the initially recovered source code, identified yet another SQL injection vector and gained remote code execution via a database function.

  • With access to the server, the Operator began post-exploitation enumeration and identified that the host was a development server containing source code for many of the company’s web applications and services.

  • The Operator located the source for the web application Operator A was working on, analyzed the troublesome endpoint, and identified a separate vulnerable parameter that allowed for SQL injection, which eventually led to remote command execution and a remote shell on a different internal database server (an illustration of the kind of code pattern that source review surfaces follows this list).
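
To show why source review is so productive for finding injection points, here is an entirely hypothetical example of the pattern the team looks for: user input concatenated directly into a SQL statement, shown next to the parameterized form that would not be injectable.

```python
# Hypothetical example of the code pattern source review flags as
# SQL-injectable, next to the parameterized form. sqlite3 is only a
# stand-in driver; the client's stack is not named in the post.
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # User input is concatenated straight into the statement -> injectable
    query = "SELECT id, role FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # Driver-level parameter binding keeps the input out of the SQL grammar
    return conn.execute(
        "SELECT id, role FROM users WHERE username = ?", (username,)
    ).fetchall()
```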

UNCOVERING OVER 30 CRITICAL ISSUES AND GAINING ACCESS TO AWS ENVIRONMENT

Over the next several weeks, the team used their growing knowledge of the company’s environment and their post-exploitation activities to identify the following findings across the company’s perimeter:

  • 30 SQL injection vulnerabilities, most of which led to remote command execution.

  • 1 command injection vulnerability.

  • 2 sensitive information disclosures.

Using the vulnerabilities and subsequent remote command shells outlined above, the team confirmed that they had access to the company’s internal AWS environment and collaborated to identify the impact of their access. The team leveraged the original source code disclosure which contained internal database server connection details and found they could pivot to three database servers with administrator-level permissions to access hundreds of databases, including the company’s production data.

The team accessed the data, including hundreds of credential sets for various web applications, some of which provided administrator access and the ability to create, modify, and delete data. They also extracted hundreds of thousands of records containing personally identifiable information (PII) for the company’s clients. Additionally, the team accessed Amazon Relational Database Service (RDS) data and 74 S3 buckets containing millions of files used for business operations. Having established significant access to the company’s web applications, databases, and AWS environment, the team concluded the investigation and reported the findings to the client.
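
To give a flavor of that enumeration step, the sketch below uses boto3's documented S3 calls to inventory the buckets an obtained credential can reach. It assumes access equivalent to what the team had and only counts objects; it is an illustration, not the exact process used on the engagement.

```python
# Hedged sketch: inventory reachable S3 buckets and count (not download)
# their objects using boto3's list_buckets and list_objects_v2 calls.
# Assumes AWS credentials equivalent to the access described above.
import boto3

def summarize_buckets(max_objects_per_bucket: int = 1000):
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        count = 0
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=name):
            count += page.get("KeyCount", 0)
            if count >= max_objects_per_bucket:
                break
        suffix = "+" if count >= max_objects_per_bucket else ""
        print(f"{name}: {count}{suffix} objects")

if __name__ == "__main__":
    summarize_buckets()
```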

Following reporting of the vulnerabilities, the CAST team worked in real time with the client to retest findings as they were remediated.

CONCLUSION

Since the CAST team is continuously mapping, monitoring, and learning about your attack surface in real time, we develop a better understanding of the attacks your organization could face. You get the visibility you’d expect from a quality penetration test, but we’re always on rather than serving as a single point-in-time assessment. Alongside that visibility, we investigate indicators as your own personal red team to ensure not only that critical vulnerabilities are identified, but also that the information acquired through analysis of those vulnerabilities is leveraged to maximize impact across the attack surface.




About the author, Nate Robb

Operator

Nate Robb is a Security Associate at Bishop Fox, where he works as an Operator for Cosmos (formerly CAST). Prior to coming to Bishop Fox, he held roles as a security consultant and spent time as a full-time bug bounty hunter, where he worked to secure Fortune 500 companies, state and federal agencies, and small and medium-sized businesses.

