Architecting Cloud Security Testing in the GenAI Era
Join Steven Smiley and Jessica Stinson for a deep dive into how early-stage architectural reviews can transform the effectiveness of your testing. Whether you're navigating IAM setups or preparing to tackle GenAI risks in cloud environments, this session has the clarity and direction you need to test smarter—not just harder.
Security testing in the cloud isn't just about launching attacks; it's about creating the right conditions to uncover what truly matters. In this session, Steven Smiley (ScaleSec) and Jessica Stinson (Bishop Fox) break down how a well-executed architecture review lays the foundation for more impactful red team and cloud penetration testing results. They cover how to properly configure access for a cloud penetration test and how these tests are performed, what red teamers look for during an engagement and how engagements are conducted, and how to clearly distinguish red teaming from traditional cloud pentesting. Finally, as GenAI models become increasingly embedded in cloud ecosystems, understanding the unique risks they introduce is more important than ever. The team outlines key considerations for integrating GenAI into your testing strategy to ensure complete, relevant coverage.
What You’ll Learn:
- How an upfront architecture review can elevate your test
- Key differences between cloud red teaming and cloud pen tests
- How to provision access and permissions for security testing
- How to approach security testing for cloud environments with integrated GenAI technologies
- How to make the most of findings and harden your environment
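Provisioning access for a cloud penetration test typically means creating a dedicated role the testing firm can assume, scoped to read-only visibility. Below is a minimal sketch of this pattern for AWS; the account ID and external ID are placeholders, and the helper function is illustrative rather than part of any tool the speakers describe. In practice the role would also have AWS-managed read-only policies (such as `SecurityAudit` or `ViewOnlyAccess`) attached.

```python
import json

# Placeholder values agreed upon at engagement scoping (illustrative only).
TESTER_ACCOUNT = "111111111111"   # the testing firm's AWS account ID
EXTERNAL_ID = "engagement-2024"   # shared secret to prevent confused-deputy abuse

def build_pentest_role_trust_policy(tester_account: str, external_id: str) -> dict:
    """Trust policy letting the tester's account assume the role,
    gated on an external ID so only the scoped engagement can use it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{tester_account}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }

trust_policy = build_pentest_role_trust_policy(TESTER_ACCOUNT, EXTERNAL_ID)
print(json.dumps(trust_policy, indent=2))
```

Scoping the role this way gives testers the broad visibility a time-boxed assessment needs without handing out long-lived credentials.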
Who Should Watch:
- Cloud Security Architects seeking to integrate security considerations from the design phase of cloud-native architectures
- DevSecOps Teams responsible for implementing security controls in AI-augmented cloud environments
- Data Scientists and AI Engineers who need to understand the security implications of their AI implementations
- CISOs and Security Directors evaluating how to adapt security testing strategies for environments with GenAI components, and making decisions about penetration testing vs. red teaming approaches for cloud environments
Session Summary
The discussion begins by emphasizing the critical role of upfront architecture reviews in effective security testing, with both experts highlighting how comprehensive diagrams help security teams identify key targets like AWS IAM Identity Center, logging buckets, and containerized workloads. Smiley notes that while approximately half of organizations have architecture documentation, it's often outdated or incomplete, making it difficult to understand complex cloud environments with multiple interconnected accounts.
The conversation then shifts to distinguishing between cloud penetration testing and red team approaches. Stinson explains that cloud penetration tests are ideal for organizations that have recently migrated to the cloud or undergone significant architectural changes, providing broad visibility into misconfigurations within a time-boxed period. In contrast, red team engagements are better suited for more mature organizations looking to simulate real-world adversary techniques against their cloud defenses. The speakers debate the merits of black box versus white box testing, with both agreeing that assumed-breach scenarios typically deliver more value by focusing security resources on realistic attack paths rather than spending time attempting to gain initial access.
A substantial portion of the session addresses how GenAI is transforming cloud architectures and introducing new attack vectors. Smiley identifies four emerging patterns: applications calling external AI APIs (like OpenAI), applications using cloud provider AI services (like AWS Bedrock), self-hosted AI models, and agentic AI with permissions to perform actions within applications. Each pattern introduces distinct security challenges, from credential exposure to insufficiently hardened infrastructure. Stinson shares two compelling client stories illustrating these risks: one where red teamers accessed proprietary AI models by exploiting hard-coded credentials in a Git repository accessible to contractors, and another where they manipulated an AI helpdesk agent through prompt injection to exfiltrate sensitive customer data from Salesforce.
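The helpdesk story above points at one mitigation for agentic AI: authorizing each tool call against the authenticated session rather than trusting the model's output. The sketch below is a hypothetical deny-by-default guard; the tool names, argument shapes, and policy are invented for illustration and are not from any vendor API or from the session itself.

```python
# Hypothetical deny-by-default guard for an AI helpdesk agent's tool calls.
# Tool names and argument shapes are illustrative assumptions.

ALLOWED_TOOLS = {
    "lookup_ticket": {"max_results": 1},  # agent may read one ticket at a time
    "post_reply": {},                     # agent may reply to the current ticket
}

def authorize_tool_call(tool: str, args: dict, session_customer_id: str) -> bool:
    """Permit a tool call only if it stays inside the authenticated
    customer's session scope, regardless of what the prompt asked for."""
    if tool not in ALLOWED_TOOLS:
        return False  # unknown tools are blocked outright
    if args.get("customer_id") != session_customer_id:
        return False  # cross-customer access is blocked
    limit = ALLOWED_TOOLS[tool].get("max_results")
    if limit is not None and args.get("max_results", 1) > limit:
        return False  # block bulk reads that suggest exfiltration
    return True

# A prompt-injected attempt to dump another customer's records is denied:
print(authorize_tool_call("lookup_ticket",
                          {"customer_id": "C-999", "max_results": 500},
                          session_customer_id="C-123"))  # → False
```

The key design choice is that the guard runs outside the model: even a fully compromised prompt cannot widen the agent's effective permissions.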
The session concludes with practical advice for making the most of security assessment findings, emphasizing the importance of timely remediation while the information is still relevant. Both experts advocate for a collaborative approach between offensive security testers and cloud architects, viewing security testing as part of a continuous lifecycle rather than a compliance checkbox. They acknowledge that while GenAI introduces new risks, organizations shouldn't avoid these technologies but should instead implement appropriate guardrails, sandboxing, and monitoring to safely capture their benefits.
Key Takeaways
- Architecture diagrams accelerate security testing - Comprehensive documentation of cloud environments helps testers quickly identify critical assets, authentication flows, and network interconnections, leading to more strategic and efficient security assessments.
- Testing approach should match security maturity - Cloud penetration tests provide broad visibility into misconfigurations for less mature organizations, while red team engagements better serve mature organizations looking to test their detection capabilities against realistic adversary techniques.
- GenAI introduces four key architectural patterns with unique risks - Applications calling external AI APIs, cloud provider AI services, self-hosted AI models, and AI agents with system permissions each require specific security controls and testing approaches.
- AI agents represent a significant new attack surface - Without proper isolation, monitoring and data flow controls, AI agents with system permissions can be manipulated through techniques like prompt injection to access sensitive data across interconnected cloud services.
- Effective remediation requires collaboration between offensive and defensive teams - Organizations get the most value from security tests when they plan for immediate remediation while information is fresh, treating testing and fixing as a single collaborative effort rather than separate activities.
- Permission issues remain the most exploited cloud misconfiguration - Even with the rise of AI-related vulnerabilities, IAM misconfigurations continue to be the primary entry point that red teamers exploit to gain unauthorized access to sensitive cloud resources.
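Since IAM misconfigurations are cited above as the most exploited entry point, a simple static check can catch the most obvious class before testers do. The sketch below is a minimal, assumed approach (not a tool from the session); the example policy is invented for illustration.

```python
# Minimal static check for overly broad IAM policy statements, the
# misconfiguration class the speakers cite as most commonly exploited.

def find_wildcard_statements(policy: dict) -> list:
    """Flag Allow statements that grant every action or every resource."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM permits a single statement object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

# Invented example policy: one scoped statement, one admin-equivalent one.
risky_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}
print(len(find_wildcard_statements(risky_policy)))  # → 1
```

Checks like this complement, rather than replace, a red team engagement: they remove the low-hanging fruit so testers can spend their time on realistic attack paths.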