AI War Stories: Silent Failures, Real Consequences
- Date:
- Wednesday, August 20th
- Time:
- 2 p.m. ET / 7 p.m. BST

AI systems rarely crash when compromised – they comply. Whether it’s a helpful browser extension leaking emails or a chatbot escalating privileges via persistent memory, AI failures don’t announce themselves. They quietly obey malicious instructions, often with no alerts, no errors, and no oversight.
In this webcast, Jessica Stinson, Senior Solutions Engineer, shares AI war stories drawn from real-world security assessments, revealing how trusted AI tools in production were silently manipulated to leak data, trigger unauthorized actions, and undermine system integrity. These aren’t theoretical risks; they’re failures happening in environments just like yours.
You’ll learn:
- How indirect prompt injection, memory manipulation, and AI-driven automation are being used in the wild
- Why architectural blind spots, not bad prompts, are the root cause of many AI security incidents
- How to apply structural defenses like modular execution layers, policy enforcement gates, and output sanitization
Who Should Watch:
- AI/ML Engineers and Developers
- Security Architects
- Application Security Teams
- DevSecOps Engineers
- Cloud Security Professionals
- Risk Management Teams
- Product Security Managers
- System Architects
If you’re building, deploying, or securing AI in your environment, this session will challenge your assumptions and equip you with a blueprint to prevent silent failures from becoming real-world breaches.