A Dark Reading Panel - "The Promise and Perils of AI: Navigating Emerging Cyber Threats"
This video showcases leading voices in cybersecurity explaining their examinations into how AI is simultaneously transforming cyber defense and supercharging attacker capabilities. Together, they explored how GenAI is reshaping the threat landscape and what security leaders must do to adapt.
This panel, hosted by Dark Reading, brought together leading experts to explore the dual-edged nature of AI in cybersecurity—how it's empowering defenders and attackers alike.
Key Themes Discussed:
1. AI as a Double-Edged Sword
- Offensive Use by Threat Actors: AI is lowering the barrier for cybercriminals by enabling more personalized, scalable, and sophisticated attacks. Social engineering, phishing, and data exploitation (especially using breached data and deepfakes) are major concerns.
- Quiet Before the Storm: Caleb Sima noted that while AI threats are real, current attacker adoption is in a “quiet period” as they figure out how to scale AI effectively—true productization is expected within a year.
2. Emerging Threats
- Deep Research Tools: Tools that combine breached data, OSINT, image analysis, and LLMs can instantly build rich attacker profiles—dramatically increasing the effectiveness of social engineering.
- AI Agents and Prompt Injections: Rob Ragan highlighted that 2025 will be the year of agentic AI systems. Attackers will exploit these multi-step agent workflows using indirect prompt injection to gain access to sensitive data or execute malicious actions.
- Legacy Infrastructure at Risk: AI is particularly good at analyzing and reverse engineering old, obscure systems, which could lead to an increase in zero-day exploits in critical infrastructure.
3. AI in Defensive Security
- Enhanced Testing and Automation: AI is already improving vulnerability scanning, code review, and prioritization of security issues. Bishop Fox’s team, for example, used AI to expedite firmware vulnerability discovery.
- Code Security: While AI-generated code can increase productivity, it may lack best practices or have hidden flaws. Organizations must double-check LLM-generated code using traditional and AI-enhanced SAST tools.
4. Human Factor and Trust
- Social Engineering Evolution: With convincing deepfakes and AI-generated pretexts, traditional cues for detecting fraud are eroding. Both Eric Kruse and Rob Ragan emphasized technical controls, behavioral analysis, and user education as key defenses.
- Supply Chain and Trust Challenges: AI expands the attack surface across digital supply chains. Knowing who and what to trust is becoming increasingly difficult.
5. Practical Recommendations
- Defense-in-Depth Is Critical: Technical mitigations (e.g., session security, access control for agents, proper validation) are essential.
- Test Models Like Code: Just like traditional code, models will need versioning, testing, and security evaluation. Trust in model integrity and origin will be a new frontier.
- Prepare for AI-Generated Exploits: AI will soon enable junior attackers to find zero-days. Organizations should rethink vulnerability management and threat modeling.
6. Optimism and Opportunities
- Despite concerns, panelists expressed hope. AI can help defenders become faster, more efficient, and creative—especially in threat detection and remediation. Rob Ragan described it as a “superpower” that enables security teams to do more with less.
Panelists:
- Caleb Sima – Chair, CSA AI Safety Initiative
- Rob Ragan – Principal Technology Strategist, Bishop Fox
- Stephen Thoemmes – Developer Advocate, Snyk
- Erich Kron – Security Awareness Advocate, KnowBe4