Discover, fingerprint, and test exposed AI infrastructure at internet scale.
AIMap is an internet-scale discovery and security testing tool for exposed AI agent infrastructure. It is designed to find, fingerprint, score, and test publicly exposed AI endpoints, including MCP servers, Ollama instances, vLLM/LiteLLM proxies, LangServe chains, Gradio apps, ComfyUI nodes, and more.
It queries Shodan to identify exposed AI and ML endpoints, then probes each one using Nuclei templates and live HTTP checks to determine protocol, framework, authentication status, tools, models, and system prompts.
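Discovery of this kind typically starts from framework-specific Shodan queries. The sketch below shows what such a query set might look like; the dorks are illustrative examples, not AIMap's actual detection rules, and the `build_queries` helper is hypothetical.

```python
# Illustrative Shodan dorks for common AI frameworks. These are
# representative examples, not AIMap's actual query set.
FRAMEWORK_DORKS = {
    "ollama":  'port:11434 "Ollama is running"',   # Ollama's default banner
    "vllm":    'http.title:"vLLM" port:8000',
    "gradio":  'http.html:"gradio_config"',
    "comfyui": 'http.title:"ComfyUI"',
}

def build_queries(frameworks, extra=""):
    """Return a Shodan query string for each requested framework,
    optionally narrowed with extra filters (e.g. 'country:US')."""
    queries = []
    for name in frameworks:
        dork = FRAMEWORK_DORKS[name]
        queries.append(f"{dork} {extra}".strip() if extra else dork)
    return queries
```

Each query would then be handed to the Shodan API, and every returned host probed directly to confirm the fingerprint.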
Each endpoint receives a risk score from 0 to 10 based on authentication posture, tool exposure, CORS policy, TLS configuration, system prompt leakage, and dangerous capability combinations. Attack suites for MCP, Ollama, and prompt injection run in real time, with results streamed as they arrive. All discovered endpoints are visualized in a searchable interface.
The barrier to building this kind of internet-scale discovery capability is now an afternoon of work. AIMap puts that same attacker's-eye view in the hands of defenders, so organizations can see exactly what their AI agent infrastructure looks like from the outside.
Exposed AI systems present a fundamentally new attack surface. They combine models, tool execution, APIs, and user interaction in ways that create novel risk combinations: unauthenticated endpoints with code execution, leaked system prompts, broad CORS policies, exposed model weights. AIMap detects these conditions, scores them based on how they combine in practice, and enables direct testing through protocol-specific scenarios including prompt injection, tool authorization boundary testing, and model extraction.
AI infrastructure has outpaced the security tooling built to assess it. AIMap closes that gap.
AIMap supports detection and analysis across a range of AI protocols and frameworks.
These include MCP (Model Context Protocol), Ollama, vLLM, LiteLLM, LocalAI, LangServe and LangChain deployments, OpenClaw and Clawdbot systems, Open WebUI and LibreChat interfaces, Gradio and Streamlit applications, ComfyUI and Stable Diffusion environments, Hugging Face TGI, and generic inference APIs.
Detection is performed through a combination of endpoint patterns, ports, API paths, and framework-specific markers.
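Marker-based detection of this sort can be sketched as a table of probe paths and expected response markers. The signatures below are illustrative examples only; they are not AIMap's actual rules, and `fetch` is a hypothetical callback standing in for a live HTTP client.

```python
# A minimal marker-based fingerprinting sketch. The paths and markers are
# illustrative examples, not AIMap's actual detection rules.
SIGNATURES = [
    # (framework, probe path, substring expected in the response body)
    ("ollama",    "/api/tags",     '"models"'),
    ("mcp",       "/sse",          "event:"),
    ("langserve", "/openapi.json", '"/invoke"'),
    ("comfyui",   "/system_stats", '"comfyui"'),
]

def fingerprint(fetch):
    """Identify frameworks on a host. `fetch(path)` returns the response
    body for that path, or None if the request fails."""
    hits = []
    for framework, path, marker in SIGNATURES:
        body = fetch(path)
        if body and marker in body.lower():
            hits.append(framework)
    return hits
```

In practice the probe callback would wrap an HTTP client with timeouts and rate limiting, and the signature table would also carry port hints and header markers.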
AIMap’s attack testing capabilities are intended exclusively for authorized security engagements. Operators are responsible for confirming they own or have explicit written permission to test any target system. Discovery and fingerprinting features are read-only; active attack modules require operator opt-in and explicit target confirmation before execution.
AIMap includes built-in attack testing suites tailored to specific protocols, covering MCP, Ollama, and prompt injection scenarios.
All attack results are streamed in real time and include severity ratings, raw request and response data, and associated findings.
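The shape of a streamed attack result might look something like the record below. The field names and severity scale are illustrative assumptions for this sketch, not AIMap's actual schema.

```python
# A sketch of the shape a streamed attack result might take; field names
# and the severity scale are illustrative, not AIMap's actual schema.
from dataclasses import dataclass, field

SEVERITIES = ("info", "low", "medium", "high", "critical")

@dataclass
class AttackResult:
    suite: str            # e.g. "mcp", "ollama", "prompt-injection"
    target: str           # endpoint under test
    severity: str         # one of SEVERITIES
    request: str          # raw request sent
    response: str         # raw response received
    findings: list = field(default_factory=list)

    def __post_init__(self):
        # Reject severities outside the known scale early.
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")
```

A streaming runner would yield one such record per test as responses arrive, letting the UI render findings incrementally.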
Each endpoint discovered by AIMap receives a risk score between 0 and 10.
The score is calculated based on multiple factors, including lack of authentication, unknown authentication status, number and type of exposed tools, presence of high-risk or critical-risk tools, open CORS policies, lack of TLS, system prompt leakage, exposed models, uncensored model detection, and signup configurations.
Additional scoring is applied for combinations of risky conditions, such as unauthenticated access combined with code execution capabilities.
Operationally, scores above 7 typically indicate exploitable combinations such as unauthenticated endpoints with code execution capabilities or exposed system prompts paired with tool access — conditions that have been actively targeted in the wild.
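A scoring function of this shape can be sketched as additive weights plus a combination bonus. The weights below are invented for illustration; AIMap's real weighting is defined in its source, and the `ep` dictionary keys are assumptions.

```python
# An illustrative risk-scoring sketch. The factor weights and endpoint
# field names are invented for this example, not AIMap's actual values.
def risk_score(ep):
    score = 0.0
    if ep.get("auth") == "none":          # no authentication at all
        score += 3.0
    elif ep.get("auth") == "unknown":     # could not determine auth posture
        score += 1.0
    score += min(len(ep.get("tools", [])) * 0.5, 2.0)  # capped tool exposure
    if ep.get("has_critical_tools"):
        score += 2.0
    if ep.get("open_cors"):
        score += 1.0
    if not ep.get("tls", True):
        score += 0.5
    if ep.get("prompt_leaked"):
        score += 1.5
    # Combination bonus: unauthenticated access plus code execution.
    if ep.get("auth") == "none" and ep.get("code_exec"):
        score += 2.0
    return round(min(score, 10.0), 1)
```

The cap at 10.0 keeps the score on the document's 0-to-10 scale even when many factors stack.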
AIMap provides a searchable interface for exploring discovered endpoints across three primary use cases: threat hunting, executive reporting, and incident response.
The Shodan-style query language supports filters for protocol, authentication status, risk level, tool exposure, geographic location, port, and organization. Filters combine to refine results across multiple attributes — defenders can quickly identify their organization’s exposed AI agent infrastructure across cloud regions, or scope exposure when a new vulnerability drops in a specific framework.
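A minimal version of that filter syntax can be parsed as space-separated `key:value` tokens. The sketch below is an assumption about the grammar; AIMap's full query language may support richer operators such as ranges.

```python
# A minimal sketch of a Shodan-style filter syntax. The filter keys are
# examples; AIMap's actual grammar may differ (e.g. range operators).
def parse_query(query):
    """Split 'protocol:mcp auth:none' into a dict of filters;
    bare terms (no colon) are collected under 'text'."""
    filters, text = {}, []
    for token in query.split():
        if ":" in token:
            key, _, value = token.partition(":")
            filters[key] = value
        else:
            text.append(token)
    if text:
        filters["text"] = " ".join(text)
    return filters

def matches(endpoint, filters):
    """True if the endpoint dict satisfies every key:value filter."""
    return all(str(endpoint.get(k)) == v for k, v in filters.items() if k != "text")
```

Combining filters is then just a conjunction over the parsed dictionary, which is what lets multiple attributes narrow results together.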
The 3D globe visualization displays endpoints by protocol type, risk score, and geographic location. The view is built for executive and board-level reporting, giving leadership a clear picture of attack surface concentration at a glance.
AIMap
AIMap is an open source tool developed by Bishop Fox to support defensive security, research, and authorized testing of AI and machine learning systems. Its purpose is to help organizations identify, analyze, and reduce risk across an expanding AI agent attack surface.
This tool is intended for use only on systems that you own or have explicit permission to assess. Unauthorized use against systems without consent may violate laws and regulations.
AIMap includes capabilities that simulate real-world attack techniques in order to help defenders better understand exposure and validate security controls. These capabilities are provided to improve defensive readiness, not to enable misuse.
Bishop Fox does not support or condone the use of this tool for illegal, unauthorized, or malicious activities. Use of AIMap is at your own risk, and the authors assume no liability for misuse or damage resulting from its application.
By using this tool, you agree to use it responsibly and in accordance with all applicable laws and accepted security testing practices.
BISHOP FOX SECURITY RESEARCHER
Aashiq Ramachandran is a security researcher at Bishop Fox focused on AI-powered offensive security systems. His work centers on autonomous penetration testing — using large language models to identify, analyze, and validate vulnerabilities at scale, while keeping practitioners in the loop for the work that requires intuition and creativity. His background spans Python, security automation, and AI agent architectures.
Blog Post
Security Testing For AI Agent Infrastructure
AI agent infrastructure is exposed in ways most security teams can't see. Read how Bishop Fox built AIMap.
Blog Post
Taking Maestro in Stride: AI Threat Modeling Frameworks
AI doesn’t break STRIDE. It breaks the idea that systems have fixed roles. Agentic AI systems built on LLMs don’t behave like traditional components. They act like users, services, and data pipelines at the same time, often crossing trust boundaries. MAESTRO provides a layered way to model those risks across modern AI systems.
Virtual Session
AI Security in the Age of Project Glasswing & GPT-5.4 Cyber
AI is shrinking the gap between vulnerability discovery and exploitation. As pressure mounts, most security programs aren’t built to keep up. Learn what actually matters and how to stay focused in an increasingly noisy, fast-moving threat landscape.
Guide
15 Guardrails for Shipping AI-Generated Code
Before releasing AI-developed software, use our recommended security guardrails checklist to constrain generated code, enforce security controls, and prevent silent risk from prompt to production.
AIMap is open source and built for the offensive security community. Star the repo, file issues, contribute templates, or fork it for your own research.