Explore Bishop Fox's experimental research into applying Large Language Models to vulnerability research and patch diffing workflows. This technical guide presents methodology, data, and insights from structured experiments testing LLM capabilities across high-impact CVEs, offering a transparent look at where AI shows promise and where challenges remain.
How Cosmos AI and Human Expertise Work Together to Strengthen Application Security
A financial services organization tested Bishop Fox's Cosmos AI platform against a realistic application to answer one question: what does AI-powered penetration testing actually deliver? In 3 hours and 17 minutes, Cosmos AI surfaced 35 candidate findings — including a $1M negative transfer exploit and a race condition that multiplied funds 5× — that conventional scanners cannot test for. After expert triage, the client received 20 confirmed vulnerabilities and zero false positives.
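For readers unfamiliar with these bug classes, the sketch below is a hypothetical transfer handler (not the client's application) showing how an endpoint that neither rejects negative amounts nor updates balances atomically can produce both findings: a negative transfer that credits the attacker, and a read-modify-write race that lets concurrent requests multiply funds.

```python
# Hypothetical transfer handler illustrating both bug classes; illustrative only.
import sqlite3

def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: float) -> None:
    cur = conn.cursor()

    # BUG 1: no "amount > 0" check, so amount = -1_000_000 moves money in the
    # opposite direction and credits an attacker-controlled account.
    src_balance = cur.execute(
        "SELECT balance FROM accounts WHERE id = ?", (src,)
    ).fetchone()[0]
    if src_balance < amount:
        raise ValueError("insufficient funds")

    # BUG 2: the balance check above and the updates below are not a single
    # atomic operation, so two concurrent transfers can both pass the check
    # against the same stale balance (a TOCTOU race that multiplies funds).
    cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
    cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
    conn.commit()
```

The fixes are correspondingly simple: validate that the amount is positive on the server side, and perform the check and both updates inside one database transaction with appropriate locking so concurrent requests serialize.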
Inherited Access, AI Permissions, Supply Chain Attacks & Edge Exposure
In this Initial Access podcast episode, we examine how attackers are inheriting access through trusted systems, default permissions, and unpatchable infrastructure.
Malvertising, Trusted Tools, Real-Time Attacks & Shrinking Windows
In this Initial Access podcast episode, we examine how attackers are turning normal workflows and trusted systems into reliable paths for initial access as exploitation timelines continue to shrink.
Secure AI-Assisted Development: 15 Guardrails for Shipping AI-Generated Code
Before releasing AI-developed software, use our recommended security guardrails checklist to learn how to constrain generated code, enforce security controls, and prevent silent risk from prompt to production.
Workshop Series: Inside Cirro
Join creator Leron Gray for a practical look at Cirro, a new open-source tool that models Azure and Entra ID environments as relationship graphs, making privilege and attack paths easier to understand.
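As a conceptual illustration of the relationship-graph approach (generic networkx code, not Cirro's actual schema or API), the sketch below shows how representing identities, roles, and resources as nodes and permissions as directed edges turns a privilege-escalation question into a simple path query.

```python
# Generic relationship-graph sketch of the idea behind graph-based tooling;
# node and edge names are illustrative, not Cirro's data model.
import networkx as nx

g = nx.DiGraph()
g.add_edge("user:alice", "group:helpdesk", relation="MemberOf")
g.add_edge("group:helpdesk", "role:UserAdmin", relation="HasRole")
g.add_edge("role:UserAdmin", "user:svc-backup", relation="CanResetPassword")
g.add_edge("user:svc-backup", "subscription:prod", relation="Owner")

# "Can alice reach the prod subscription?" becomes a shortest-path query.
path = nx.shortest_path(g, "user:alice", "subscription:prod")
print(" -> ".join(path))
# user:alice -> group:helpdesk -> role:UserAdmin -> user:svc-backup -> subscription:prod
```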
Tactics of Deception: Protecting Trust and Purpose
Trained people, strong controls, still getting fooled? This session breaks down how modern social engineering exploits trust and urgency, and what actually works to stop it.
Speed, Trust, and the Compromised Workbench
In this Initial Access podcast episode, we explore how attackers are collapsing timelines and exploiting trust relationships, turning developer environments into the fastest path to full compromise, and where defenders still have room to slow them down.
Social Engineering, Phishing-as-a-Service, Edge Device Exploits & AI-Assisted Attacks
In this Initial Access podcast episode, we examine how attackers are gaining initial access through social engineering, identity abuse, and vulnerable edge infrastructure.
Designing for Resilience: LastPass Prioritizes Security in Move to Cloud
Rebuilding in AWS gave LastPass a clean slate, but it also meant getting the architecture right. To be sure their security boundaries would hold, they partnered with Bishop Fox to test their cloud environment under realistic conditions and strengthen it where it mattered most.
AI Coding Agents, FortiGate Attacks, Surveillance & Identity Hacks
In this Initial Access podcast episode, we cover AI coding agents operating inside developer environments, automated attack platforms accelerating exploitation cycles, long-lived connected devices exposing unexpected telemetry risks, and why identity systems remain the primary entry point for attackers.
Securing Airline Commerce: Penetration Testing for AWS Cloud Infrastructure
A major airline technology platform turned to Bishop Fox after routine assessments kept missing the mark. What followed revealed unauthorized PCI database access, misconfigured IAM roles spanning hundreds of instances, and lateral movement across Active Directory domains — driving immediate remediation and stronger customer trust.
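As a hedged illustration of how over-permissive IAM roles like those surfaced in this engagement are commonly spotted (hypothetical helper code, not Bishop Fox's tooling), the snippet below walks attached role policies with boto3 and flags statements that allow wildcard actions.

```python
# Hypothetical scan for IAM roles whose attached policies allow Action "*".
import boto3

iam = boto3.client("iam")

def find_wildcard_roles():
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
            for pol in attached["AttachedPolicies"]:
                meta = iam.get_policy(PolicyArn=pol["PolicyArn"])["Policy"]
                doc = iam.get_policy_version(
                    PolicyArn=pol["PolicyArn"],
                    VersionId=meta["DefaultVersionId"],
                )["PolicyVersion"]["Document"]
                stmts = doc.get("Statement", [])
                stmts = [stmts] if isinstance(stmts, dict) else stmts
                for stmt in stmts:
                    actions = stmt.get("Action", [])
                    actions = [actions] if isinstance(actions, str) else actions
                    if stmt.get("Effect") == "Allow" and "*" in actions:
                        yield role["RoleName"], pol["PolicyName"]

for role_name, policy_name in find_wildcard_roles():
    print(f"{role_name}: {policy_name} allows Action '*'")
```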
Autonomous AI, Broken Guardrails, and Geopolitics
In this Initial Access podcast episode, we cover autonomous vulnerability discovery, AI agents that ignore instructions, and why models are becoming strategic national assets.