AI-Powered Application Penetration Testing—Scale Security Without Compromise Learn More

AI & Security Risks: Reviewing Governance and Guardrails

Blog header graphic reading “AI & Security Risks: Reviewing Governance and Guardrails” focused on AI security risk management and governance best practices.

Share

TL;DR Boards are pushing hard on AI. Security teams are trying to keep up. AI risk isn’t the same as traditional software risk. It’s probabilistic, dependent on third parties, and easy to over-permission when integrated poorly. Governance has to be built in early and treated as ongoing oversight, not a one-time review. Start with inventory. Apply least privilege. Define guardrails before you scale.

AI is moving faster than most governance programs. Boards want value, and security teams see risk. In our most recent virtual session, AI & Security Risks, A Cyber Leadership Panel, host Nick Selby, Managing Partner at EPSD, Inc., led a candid discussion with leaders spanning enterprise security, AI governance, SaaS, and AI-native product development.

The panel featured:

Together, they shared real-world perspectives from consulting, enterprise operations, startup innovation, and board-level strategy. Here’s how the conversation unfolded.

What are you seeing from boards and executive leadership when it comes to AI adoption and expectations?

Terrill: What we’re seeing is real pressure to move quickly. There’s an expectation that we’re adopting AI and that we’re not falling behind competitors. At the same time, there’s concern about risk, so security teams are being brought into conversations earlier than before, which is a good thing. But the pace of change makes it challenging, because the technology is evolving quickly, and the risks are evolving right along with it.

Greis: Boards are asking pretty direct questions: What is our AI strategy? How are we using it? What are we getting out of it? There’s a strong desire to see measurable impact. The reality though is that a lot of organizations are still experimenting, so it can be difficult to quantify value in a way that really satisfies board-level scrutiny.

Chou: From a startup and builder perspective, there’s a lot of excitement around what AI enables. It lets teams move faster. It changes what small teams can accomplish. But leadership is also asking about guardrails. They want to understand how we’re thinking about data security, access control, and responsible deployment as we integrate AI into products.

Kimmerle: There’s recognition that AI isn’t just a feature you bolt on. It cuts across products and operations. That means governance can’t be isolated to one team. We’ve had to formalize how we assess risk, how we document use cases, and how we define oversight structures that involve multiple stakeholders. 

How is AI risk different from the kinds of software risk we’ve managed in the past?

Terrill: Traditional software behaves in a more deterministic way. With AI systems, you’re dealing with probabilistic outputs. That changes how you test and validate systems. You’re not just looking for code vulnerabilities. You’re thinking about how a model might behave under different prompts or inputs, and that introduces new testing requirements.

Greis: There’s also the ecosystem risk. A lot of AI implementations rely on third-party models or APIs. That introduces dependencies you don’t fully control. It’s not just about securing your own codebase anymore. It’s about understanding the broader supply chain.

Chou: When you connect AI systems to internal data or workflows, you’re creating new pathways for information flow. If permissions aren’t scoped correctly, the system can surface data in ways that weren’t originally intended. That’s a different class of integration risk.

Kimmerle: You also have to consider where data comes from and how it’s used. Data provenance matters. Training data exposure matters. Governance has to address how outputs are monitored over time. It’s not something you review once and move on from. It’s ongoing oversight. 

What does good governance look like in practice right now?

Kimmerle: For us, it’s meant building a structured intake process for AI use cases. Teams document the purpose, data sources, risk profile, and intended controls they plan to put in place. That allows us to apply consistent review criteria. Governance needs to be repeatable.

Terrill: From a security perspective, we’re defining baseline guardrails. Things like data handling requirements, access controls, and acceptable use policies. When those are clearly articulated, teams don’t have to reinvent the process each time.

Greis: There needs to be alignment between business value and risk tolerance. Governance shouldn’t just exist to block innovation. It should help prioritize efforts that deliver measurable impact while staying within defined risk parameters.

Chou: We’re paying attention to feedback loops. How are models performing in production? Where are they failing? Where do we need human intervention? Governance isn’t static, it evolves as the system evolves. 

How are you thinking about measuring success and risk?

Greis: You really have to tie AI initiatives to specific business metrics. What’s the outcome you’re trying to drive? Whether that’s operational efficiency, cost reduction, or revenue generation, there needs to be something concrete. Otherwise, it becomes very hard to assess what’s working.

Kimmerle: On the risk side, it starts with visibility. How many AI systems are we actually using? How many have gone through a review process? Are controls documented, and implemented? If you don’t have that visibility, you can’t really measure risk in a meaningful way.

Terrill: There’s also a cultural component to this. If teams view governance as purely restrictive, they’ll find ways around it. Measurement should reinforce shared objectives. It shouldn’t just feel like compliance for compliance’s sake.

Chou: We look at where human intervention is required and how often outputs need correction. That gives you a sense of where models are reliable and where additional controls may be necessary.

Where are organizations still struggling?

Terrill: Scale is a big one. It’s manageable when you have a small number of AI initiatives. It becomes much harder when experimentation is happening across the enterprise. Automating parts of the review process is something we’re still working through, so it doesn’t become a bottleneck.

Greis: Communicating complexity is another challenge. AI risk isn’t always binary. Explaining residual risk in a way that’s clear and concise for non-technical stakeholders can be difficult.

Kimmerle: Keeping governance frameworks adaptable is hard. The regulatory and technological landscape is evolving quickly, so policies need to be flexible enough to adjust but not so vague that they lose meaning.

Chou: Maintaining reliability as models change is also a challenge. When models are updated, behavior can shift. That means validation isn’t a one-time exercise, it has to be ongoing.

If you could give one piece of practical advice to leaders navigating AI adoption, what would it be?

Kimmerle: Start with inventory. Understand where and how AI is being used across your organization. You can’t govern what you don’t know exists.

Greis: Anchor AI efforts to measurable business outcomes from the beginning. Be clear about what success looks like.

Terrill: Establish clear baseline security and data guardrails before scaling deployment. It’s much harder to retrofit those controls later.

Chou: Design integration points carefully. A lot of the risk shows up where AI systems connect to other systems and data. Being intentional there makes a big difference.

For the full conversation, including deeper examples and audience Q&A, watch the complete virtual session.

Subscribe to our blog

Be first to learn about latest tools, advisories, and findings.


Default fox headshot purple

About the author, Bishop Fox

Security Researchers

This represents research and content from the Bishop Fox consulting team.

More by Bishop

This site uses cookies to provide you with a great user experience. By continuing to use our website, you consent to the use of cookies. To find out more about the cookies we use, please see our Privacy Policy.