Google Partner Program – GPP Top 10

Bishop Fox works closely with Google to design the Partner security program, so we know what’s needed for you to pass the testing requirements. Through our 200+ assessments of applications in this ecosystem, we have found that certain classes of vulnerabilities are more common than others.

Inspired by the OWASP Top 10, we’ve created a prioritized list of the 10 most common and high-risk bugs we find on these assessments. I hope to share some insight into our process and the common trouble spots we see across assessments. This guide should also benefit anyone interested in improving the security of a modern web application.


APPLICATION ASSESSMENTS

In the list below, we’ve broken down the most common vulnerabilities we encounter in our assessments of Google Partner Program applications. In contrast to the general set of web applications we see at Bishop Fox, Google Partner Program (GPP) applications are often built with more recent web technologies and share specific core functionality (e.g., Google Drive and productivity suite integration). These characteristics produce a set of common findings that differs from the OWASP Top 10. The list is ordered by severity and prevalence:

  1. Insufficient Authorization Controls
  2. Cross-site Scripting
  3. Cross-site Request Forgery
  4. Unsafe Document Processing
  5. Content Injection
  6. Insecure File Upload
  7. Server-side Request Forgery
  8. Insecure Credential Storage
  9. Overly Permissive CORS Configuration
  10. Missing Best Practices

Let’s review these 10 different vulnerability classes in more detail:

1. Insufficient Authorization Controls

Inadequate security controls around varying user privileges (vertical access) and missing segmentation between users (horizontal access) were the most common high-risk issues across our assessments.

In nearly every report, insufficient authorization controls were the primary weak point that allowed an attacker to gain access to another user’s OAuth token (or its associated privileges). These vulnerabilities are a dangerous combination: easy to find and high in business risk.

Things to keep an eye out for:

  • Consistent data and behavior entitlements: Every endpoint should uniformly restrict access to given data. Often there may be a single endpoint that’s more permissive or includes the restricted data as a part of another data record. Similarly, every endpoint action should be uniformly restricted based on the role of the authenticated user.
  • Verifying access based on the session: Ensure that every request is authorized based on the user’s session object, not on arbitrary user-controlled parameters in the request. Users should not be able to identify themselves with a user ID value in place of a cookie or JWT session identifier (see the sketch after this list).
  • Secure handling of world-readable resources: If a resource is world-readable, assume it will be cached by search engines. Sensitive customer information should never be made world-readable. If business requirements necessitate sharing access to a sensitive resource to individuals outside of your platform, consider other security controls such as easy sign-up for read-only external accounts (paired with an email), document expiration, and temporary passwords.
  • Security through obscurity: Difficult-to-guess UUIDs are not a form of access control. Neither are hidden endpoints, secret HTTP parameters or headers, or unadvertised GraphQL actions. Finally, access controls that exist solely on the client are not true access controls.
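
To illustrate the session-based check referenced above, here is a minimal sketch using Flask. The route, field names, and in-memory store are illustrative assumptions, not a prescription: the acting user is derived from the server-side session, and ownership is verified before any data is returned.

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # load from a secrets manager in practice

# Hypothetical in-memory store standing in for a database.
DOCUMENTS = {1: {"owner_id": 42, "title": "Q3 report"}}

@app.route("/api/documents/<int:doc_id>")
def get_document(doc_id):
    # Identity comes from the server-side session, never from a request parameter.
    user_id = session.get("user_id")
    if user_id is None:
        abort(401)

    doc = DOCUMENTS.get(doc_id)
    # Horizontal access control: the caller must own the record.
    if doc is None or doc["owner_id"] != user_id:
        abort(404)  # 404 avoids confirming that the record exists

    return jsonify(doc)
```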

2. Cross-site Scripting (XSS)

The most frequent XSS vectors we encounter can be exploited with common payloads. Most often, we achieve JavaScript execution via attribute handlers (e.g., <img src=x onerror=alert(1)>), and we also have quite a bit of success with DOM-based XSS (javascript:alert(1) or unsanitized user input taken from the URL).

Stored XSS may require minimal user interaction, but often leads to some degree of, if not complete, account compromise.

Things to look out for:

  • Client and server-side user input sanitization: Is all user input processed and sanitized consistently? All input should be sanitized by centralized filtering both on the client and server sides. For stored data, ensure consistent filtering between frontend and backend components. Inconsistencies could create opportunities for exploitation.
  • DOM-based XSS vectors: Don’t forget about javascript:, data:, and other malicious URIs in user-controlled URLs. Clicking these malicious URIs can trigger DOM-based XSS (see the sketch after this list).
  • Blind XSS test cases: Blind XSS may be challenging to test. Consider using a tool like XSSHunter and its associated payloads to detect callbacks as you browse through the application.
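
As a rough server-side illustration of the URI filtering and output escaping discussed above, the following Python sketch (standard library only; the function names are illustrative) escapes user text at render time and rejects non-http(s) schemes:

```python
import html
from urllib.parse import urlparse

def render_comment(comment_text):
    # Escape on output so the browser treats user text as data, not markup.
    return "<p>{}</p>".format(html.escape(comment_text))

def safe_link(user_url):
    # Allow only http(s) schemes to block javascript: and data: URIs.
    parsed = urlparse(user_url)
    if parsed.scheme not in ("http", "https"):
        return None
    return html.escape(user_url, quote=True)

print(render_comment("<img src=x onerror=alert(1)>"))  # rendered inert
print(safe_link("javascript:alert(1)"))                # None
```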

3. Cross-site Request Forgery (CSRF)

Cross-site request forgery is straightforward to remediate, and CSRF protection is an essential security control. When it is misconfigured, that is often indicative of further issues.

Things to look out for:

  • CSRF smoke test: It should not be possible to perform state-changing account actions simply because an authenticated user clicks an arbitrary link on a third-party site.
  • Don’t DIY a solution: Use standardized, community-supported CSRF protection implementations. Most security mechanisms offer enough opportunities for misconfiguration already; there is no need to compound that risk by also writing your own implementation (see the sketch after this list).
  • SameSite is not a panacea: Understand the benefits and limitations of protections like the SameSite cookie attribute. Check out this excellent article by PortSwigger.
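
As one example of leaning on a community-supported implementation rather than rolling your own, a Flask application can enable Flask-WTF’s CSRF protection in a couple of lines (assuming Flask-WTF is in use; most frameworks have an equivalent):

```python
from flask import Flask
from flask_wtf import CSRFProtect

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

# Every state-changing request (POST, PUT, PATCH, DELETE) now requires a valid CSRF token.
csrf = CSRFProtect(app)

# In a Jinja2 template, the token is emitted with:
#   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
```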

4. Unsafe Document Processing

This category of vulnerability is a common path to remote code execution or server-side request forgery across applications assessed through the GPP.

Many GPP apps integrate with office productivity suites to consume and produce documents. In most cases, processing these documents is a core business requirement.

Things to look out for:

  • Use the minimum level of parsing required: How are Microsoft Office documents, XML, or PDFs processed? Instrumenting Microsoft Office (or an open-source equivalent) is the highest-risk approach; it creates opportunities for malicious user-supplied formulas, macros, or document structure (DOCX is just a zip bundle of XML). Take care to avoid zip path traversal issues when unzipping, and XML vulnerabilities (e.g., XXE) when processing XML (see the sketch after this list). Document processing can be a minefield of potential security issues, so look for simplified frameworks and standardized solutions, and avoid rolling your own.
  • Beware of user input in document generation: How does your application generate these documents for export? If you’re exporting CSV, Excel, or spreadsheet-like formats, you may be susceptible to CSV injection. Improperly escaped user input can also lead to issues in PDF generation or other formats where user input is mistaken for code.
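
The sketch below illustrates two of the hardening steps mentioned above, assuming a Python backend: parsing XML with defusedxml to block XXE, and rejecting zip members that would escape the extraction directory. Treat it as a starting point, not a complete document-processing pipeline.

```python
import os
import zipfile

import defusedxml.ElementTree as ET  # drop-in ElementTree replacement that rejects XXE payloads

def parse_document_xml(xml_bytes):
    # defusedxml refuses external entities, entity-expansion bombs, and similar constructs.
    return ET.fromstring(xml_bytes)

def safe_extract(archive_path, dest_dir):
    # DOCX/XLSX files are zip bundles; reject members that would escape the destination directory.
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest_dir, member))
            if not target.startswith(dest_dir + os.sep):
                raise ValueError("blocked path traversal attempt: " + member)
        zf.extractall(dest_dir)
```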

5. Content Injection

So, if it’s not XSS, what else can go wrong with rendered user input?

Content injection can be thought of as the superset of all forms of injection. Sometimes it’s severe and leads to a form of code execution; other times it’s simply the user’s text displayed in an unexpected way.

Things to look out for:

  • Context-aware escaping: All data should be escaped when it is moved to a new context. For example, a first name submitted by a user will need to be escaped for use in a SQL query; next, it will need to be escaped for HTML markup when rendered on a profile page; finally, that same first name will need to be escaped again for use in an exported spreadsheet to avoid risks of formula markup. Context-aware escaping is crucial to safe input processing (see the sketch after this list).
  • Indirect risks of rendering user input: Through a combination of new lines and the location in the content where the user input is displayed, even arbitrary plaintext could be misinterpreted as coming from the provider. Content injection is commonly used in phishing through email content injection and web page injection.
  • Using user input in HTML: Consider the <a> tag. Although the application may restrict user-supplied URIs to a proper http(s) pattern, does it also defend against threats such as reverse tabnabbing? If your <a> tag does not use rel="noopener", the newly opened page can redirect the parent page via the cross-origin window.opener property. Every HTML tag can have nuances; be sure to properly sandbox any tag that may include user input.
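
The following Python sketch (standard library only; the sample value and helper name are illustrative) shows the same user-supplied first name handled in three contexts: parameterized SQL, HTML escaping, and spreadsheet formula neutralization.

```python
import csv
import html
import io
import sqlite3

first_name = '=HYPERLINK("http://attacker.example","click")<script>x</script>'

# SQL context: use parameterized queries rather than string concatenation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT)")
conn.execute("INSERT INTO users (first_name) VALUES (?)", (first_name,))

# HTML context: escape before embedding the value in markup.
profile_html = "<td>{}</td>".format(html.escape(first_name))

# Spreadsheet context: prefix a leading =, +, -, or @ so the cell is treated as text.
def neutralize_formula(value):
    return "'" + value if value.startswith(("=", "+", "-", "@")) else value

buffer = io.StringIO()
csv.writer(buffer).writerow([neutralize_formula(first_name)])
print(profile_html)
print(buffer.getvalue())
```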

6. Insecure File Upload

Although file upload to a web root is less common these days, there are still significant ways that file upload can lead to application compromise.

Things to look out for:

  • File storage: Is the file rendered on a meaningful domain? If JavaScript can execute from that domain, does its location allow it to abuse CORS configurations? Are there path traversal flaws when storing users’ files at the intended server path (see the sketch after this list)?
  • Upload restrictions: Is there a file size, file number, or upload quota limit?
  • File content: For languages like PHP, is the file uploaded to a webroot where it may be processed? Does the file contain illegal, malicious, or inappropriate content?
  • File processing: Do you decompress archives? Are user-controlled file names used in shell commands where they may be interpreted as command arguments (e.g., file names that include ‘-’ characters)? Can unexpected file formats be processed?
  • Cloud upload risks: Does your application properly protect against path traversal in S3 upload policies or pre-signed requests?
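
Here is a minimal Flask upload handler illustrating a few of these checks (the upload directory, size limit, and extension allowlist are illustrative assumptions): filenames are normalized to prevent path traversal, oversized requests are rejected, and only expected extensions are accepted.

```python
import os

from flask import Flask, abort, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 5 * 1024 * 1024  # reject request bodies over 5 MB

UPLOAD_DIR = "/srv/uploads"                   # stored outside the web root
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}

@app.route("/upload", methods=["POST"])
def upload():
    uploaded = request.files.get("file")
    if uploaded is None or not uploaded.filename:
        abort(400)
    # secure_filename strips path separators and other dangerous characters.
    name = secure_filename(uploaded.filename)
    if os.path.splitext(name)[1].lower() not in ALLOWED_EXTENSIONS:
        abort(400)
    uploaded.save(os.path.join(UPLOAD_DIR, name))
    return "", 204
```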

7. Server-side Request Forgery (SSRF)

Functionality that makes server-side requests to user-supplied URLs is often obvious to an attacker when they see it. It is also the most common and lowest-effort path to cloud account takeover.

Speaking of which: if an attacker were to gain privileged access to your application’s cloud account and delete everything, do you have a disaster recovery plan and backups stored outside of your cloud provider account? We frequently obtain the level of privilege needed to do that kind of damage, and a lack of distributed backups often presents significant risk.

Things to look out for:

  • Egress controls to internal servers: What protections do you have in place to restrict access to internal network assets, instance metadata URLs, and other services running local to the requesting host (see the sketch after this list)?
  • Service segmentation: Are you delegating these requests to a minimally privileged server to reduce opportunities for things to go wrong?
  • Egress controls to prohibited external servers: Are you restricting the possible outgoing domains to prevent violations of export controls or other policies?
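
A simple application-level egress check might look like the sketch below (Python standard library; the function name is illustrative). Note that a check like this is only one layer: DNS rebinding and redirects can still bypass it, so pair it with network-level egress controls.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url):
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        results = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for result in results:
        addr = ipaddress.ip_address(result[4][0])
        # Block loopback, RFC 1918, and link-local targets such as 169.254.169.254 (instance metadata).
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True

print(is_safe_outbound_url("http://169.254.169.254/latest/meta-data/"))  # False
```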

8. Insecure Credential Storage

We often see promo codes or API keys hardcoded into client-side JS or mobile apps. Plan for any data stored in client-side resources to be fully accessible to users. Also, ensure your application is following best practices for secret storage and secure password storage.

Things to look out for:

  • Application secret storage: Secrets like JWT signing keys, AWS/GCP access tokens, and Google OAuth tokens should never be exposed to the user (it happens more often than you would expect). Use cloud-provided key management solutions to store all secret keys.
  • Don’t use client-side storage for secrets: Client-side code is not a safe place for secrets. Do not store access tokens, promotional codes, or other secrets in client-side JavaScript or as part of a mobile application.
  • Implement strong, modern password storage: See the OWASP guidelines on secure password storage; a minimal example follows this list.
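
For example, with the bcrypt library in Python (one reasonable option among modern algorithms such as Argon2 or scrypt), hashing and verification look roughly like this:

```python
import bcrypt

def hash_password(password):
    # bcrypt generates a per-password salt and is deliberately slow to brute-force.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password, stored_hash):
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
```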

9. Overly Permissive CORS Configuration

Cross-origin resource sharing (CORS) policies allow APIs to grant cross-origin access to data. Although this might facilitate access to data across subdomains, common misconfigurations allow for access across far more.

Things to look out for:

  • User input should not be used in a CORS response: This design fails constantly in practice. The best solution is a lookup table: reflect the requested Origin only if it appears in the allowlist (see the sketch after this list).
  • If user input must be used in a CORS response: If business requirements force you to reflect input, ensure you are using a restrictive, correct regex pattern. Most often we see a match like “^[A-z]+.example.com$”, which does not escape the first dot (.), allowing matches such as wwwxexample.com, a domain an attacker could register.
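
A minimal Flask sketch of the lookup-table approach (the origins and hook are illustrative) reflects the Origin header only when it exactly matches an allowlisted value:

```python
from flask import Flask, request

app = Flask(__name__)

# Exact-match allowlist; the Origin header is reflected only when it is a known value.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

@app.after_request
def add_cors_headers(response):
    origin = request.headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Vary"] = "Origin"
    return response

# If subdomain matching is truly required, escape the dots in the pattern, e.g.:
#   re.fullmatch(r"https://[a-z0-9-]+\.example\.com", origin)
```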

10. Missing Best Practices

The following findings may feel like mere noise in an assessment report, but many of them have allowed us to escalate a medium-severity attack into a high-severity one.

From an attacker’s perspective, every misconfiguration and information disclosure grants additional access. Additionally, when many of these easy-to-fix issues are present, it often indicates more systemic security problems throughout the application.

Things to look out for:

  • Banner information disclosure: Version numbers or product names disclosed in HTTP headers or service responses can aid attackers in identifying vulnerable software. Strip these from outgoing responses.
  • Error message info disclosure: Errors often disclose SQL syntax, file paths, software name or versions, and sometimes credentials. Replace these with generic error pages.
  • Logging disclosure: Gaining access to logs can often be an indirect way to account or application takeover. Logs containing sensitive information such as credentials or URLs containing private information can be a shortcut to compromise.
  • Missing security headers: A Content Security Policy may be a challenge to implement, but many other headers (e.g., X-Frame-Options) and cookie attributes (e.g., HttpOnly, Secure, SameSite) are not. These small additions let your application take advantage of existing browser security controls (see the sketch after this list).
  • Username enumeration: Mechanisms that verify the existence of usernames or email addresses simplify an attacker’s strategy. Whether it enables targeted credential stuffing, identification of administrative accounts, or identification of internal employee emails, enumerating users can be a useful disclosure for an attacker. In addition to endpoints that return explicit true/false responses about user existence, be aware of differences in server response time that may indicate a successful database lookup.
  • Weak password requirements: Users being compromised because of weak passwords is still bad optics for your product, even though the mistake was theirs. Help reduce the likelihood of compromise by enforcing strong password requirements, and consider offering or requiring multi-factor authentication. Better yet, check out this guide by Troy Hunt, creator of haveibeenpwned.com.
  • Insecure cookie transmission: On a modern application, the Secure and HttpOnly cookie attributes should be used wherever possible and appropriate.
  • Insecure SSL/TLS ciphers enabled: This finding is often more about optics than real security risk, and it is usually trivial to fix; your users’ browsers almost certainly support modern ciphers. A quick scan with a free SSL scanning tool (e.g., sslyze, sslscan, or testssl.sh) will evaluate your application’s current configuration.
  • Session termination and timeout: Sessions should be invalidated on logout and expire after a reasonable period. For those who are skeptical of the risk, there are stories of session cookies showing up in Stack Overflow posts or HackerOne bug reports and being used to take over accounts, due to a combination of a user’s mistake and an excessive session lifetime.
  • Requiring the current password when changing the password: Consider the discussion of session termination above and imagine if the current password weren’t required to change the account password. This is a great example of how small bugs cascade into a more significant security issue.
  • Password/credential storage: MD5 is still the most common password hashing solution we encounter, and it is inadequate for the modern era. There are many great guides and resources on this topic that provide modern recommendations (e.g., the OWASP guide). bcrypt is a good choice, but find a solution that best fits your needs and is appropriate for the modern web.
  • Insufficient anti-automation: Spammers, attackers, and individuals looking to capitalize on introductory offers can cause a serious detriment to your platform. Finding the right blend of using CAPTCHAs, rate-limiting, user verification, privacy, and usability is a complex decision. Take the time to weigh the pros and cons and find a solution that works right for your product.
  • Debug vs. production: Staging environments should not be exposed to the internet. Often, compromise of a staging environment enables compromise of the production environment. Ensure proper segmentation between your environments and restrict access to any non-production assets from the public.
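
As a small example of the low-effort wins above, a Flask application can set hardened cookie attributes and common security headers in a few lines (the header values shown are reasonable defaults, not a one-size-fits-all policy):

```python
from flask import Flask

app = Flask(__name__)

# Cookie hardening: HttpOnly keeps the session cookie away from JavaScript,
# Secure restricts it to HTTPS, and SameSite limits cross-site sends.
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SECURE=True,
    SESSION_COOKIE_SAMESITE="Lax",
)

@app.after_request
def add_security_headers(response):
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```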

If you'd like our help on your Google Partner Program security assessment, head over to our GPP Testing page to get in touch with our security experts.


About the author, Jake Miller

Security Researcher

Jake Miller (OSCE, OSCP) is a Bishop Fox alumnus and former lead researcher. While at Bishop Fox, Jake was responsible for overseeing firm-wide research initiatives. He also produced award-winning research in addition to several popular hacking tools like RMIScout and GitGot.

