AI & Security Risks: A Cyber Leadership Panel
Watch a fireside chat with cybersecurity and AI leaders on today’s real AI security risks. Learn where risk is emerging, how leaders set ownership, the true cost of securing AI, and practical steps teams use to protect AI systems and data.
Artificial Intelligence is moving fast inside every organization, faster than governance, security, or even budgets can keep up. Leaders are dealing with real, day-to-day challenges: figuring out who owns what, managing shadow AI, keeping costs under control, and securing systems that change constantly. And with attackers already probing AI environments, the pressure is only increasing.
This is a chance for cybersecurity and AI leaders to step back and talk openly about what’s actually happening. What’s working? What isn’t? What are people trying as their teams scramble to keep pace?
Session Summary:
This panel digs into the real-world tension leaders face: intense pressure to “do AI” fast, but uncertainty about where it truly drives efficiency, how to measure success, and how to report results credibly to boards and investors. Panelists share pragmatic lessons from AI-native and enterprise environments, including what’s working, what’s hard, and how governance, data discipline, and identity controls must evolve as AI becomes embedded infrastructure. The throughline: move quickly, but intentionally. Ground decisions in specific risks, clear ownership, measurable outcomes, and repeatable guardrails.
Key Takeaways:
- AI is collapsing time and cost, but it changes how work gets done. Teams are shifting effort upstream with better specs and requirements so AI can execute faster with fewer iterations, shrinking release cycles from months to weeks or less.
- ROI is tricky because productivity gains aren’t always hours saved. Some orgs see clearer quality, consistency, and speed-to-draft improvements than measurable time reductions, so you need metrics that capture both efficiency and quality outcomes.
- Governance has to cut through hype and fear. Effective AI risk management starts by defining specific threats and exposures, not headlines, then aligning controls to material business and reputational impacts.
- Data governance becomes a force multiplier, good or bad. If your data is messy, overly permissive, or poorly classified, AI amplifies the problem. Foundational data hygiene and clarity on what data goes where are now table stakes.
- Agentic workflows create identity and privilege escalation risks. As tools like MCP expand what models can do, leaders must treat non-human identities, least privilege, and user-delegated access as first-class security requirements. Human-heavy processes will not scale.
- Supply chain risk is a major blind spot. AI introduces new dependencies such as models, tooling layers, plugins, connectors, and vendors that can expand blast radius quickly. Vendor due diligence needs to include AI governance posture and shared-responsibility clarity.
- The most valuable human skill is shifting to evaluation and judgment. Speed is commoditized. The differentiator is the ability to validate outputs, spot failure modes, manage technical debt, and apply ethical decision-making: treat output as untrusted until proven otherwise.
- Start where you are: lead with high-value use cases, then build repeatable guardrails. Focus first on the next meaningful business use case, define success metrics early, assign accountability, and iterate while improving org-wide AI literacy so decisions do not bottleneck in a few experts.
Host:
- Nick Selby, Managing Partner, EPSD, Inc.
Speakers:
- Christie Terrill, Chief Information Security Officer, Bishop Fox
- Justin Greis, Chief Executive Officer & Board Member, acceligence
- Andy Chou, Founder & CEO, Ventrilo
- Kris Kimmerle, VP AI and Risk & Governance, RealPage