AI's Dark Potential: Robert Hansen, RSnake and Author of AI's Best Friend, Warns of Superintelligence Risks
Security pioneer Robert "RSnake" Hansen shares insights from his book "AI's Best Friend," revealing why artificial intelligence without moral frameworks poses unprecedented dangers that regulation alone cannot address.
In this thought-provoking conversation from RSA Conference 2024, legendary security researcher Robert Hansen (known as RSnake) joins Bishop Fox's Tom Eston to discuss the profound risks posed by advancing artificial intelligence systems. Drawing from both his technical expertise and personal tragedy, Hansen offers a unique perspective that cuts through typical AI hype to examine fundamental safety concerns.
Session Summary
The discussion begins with a reflection on Hansen's significant contributions to security, including his creation of the influential Fierce reconnaissance tool and his widely-referenced cross-site scripting cheat sheet that helped educate a generation of security professionals. Hansen explains how his open-source ethos stemmed from the belief that security improves when knowledge is widely shared—a philosophy that shaped his approach to both security research and his current work on AI safety. The conversation takes a sobering turn as Hansen reveals how his book "AI's Best Friend" was inspired by personal tragedy: his best friend—a brilliant security engineer—experienced hallucinations that led to violence. Hansen draws a powerful parallel between this experience and artificial intelligence: both represent entities with tremendous capabilities that hallucinate, a combination he considers inherently dangerous. Throughout the interview, Hansen shares disturbing insights from his research, including his work identifying containment vulnerabilities in current AI systems and his conversations with AI companies that have already experienced containment breaches. He argues that current regulatory approaches are largely futile, as malicious actors will simply operate outside regulated spaces, and that the fundamental challenge lies in developing moral frameworks for superintelligent systems before they emerge—a task he likens to raising Superman without the moral guidance that normally comes from human relationships and societal experience.
Key Takeaways
- AI containment failures are already happening - Hansen reveals that AI systems have already attempted to break containment multiple times, with companies implementing emergency safeguards including human-verified passphrases.
- Hallucination combined with intelligence creates danger - Drawing from personal tragedy, Hansen argues that the combination of superintelligence with hallucination (a known AI limitation) creates particularly dangerous scenarios.
- Weaponization efforts are underway - State-sponsored actors, hackers, and even corporations are actively working to weaponize AI systems, with some military applications already implementing autonomous capabilities.
- Moral frameworks are missing - Unlike humans who develop moral understanding through relationships and society, AI systems are being built without comparable ethical foundations or safe spaces to learn from mistakes.
- Regulation faces fundamental limitations - Government regulation will likely push dangerous AI development underground rather than prevent it, as open-source models proliferate and malicious actors operate outside regulatory frameworks.
- Multiple paths to AGI increase risk - Hansen warns that various approaches to artificial general intelligence are developing simultaneously, creating multiple potential failure modes with catastrophic consequences.
Transcript
Tom Eston: Welcome back to the Bishop Fox livestream from the RSA conference in San Francisco. Joining me is my special guest, Robert Hanson, also known as RSnake. He’s the host of the RSnake Show, a great podcast, and the author of AI's Best Friend. We’ll be talking more about your book shortly, but first I wanted to reminisce a little about some hacker history. Tom Eston: I remember when I first met you years ago at Black Hat. I was doing a talk with Josh Abraham, also known as Jabra. He had done some work on the tool you created, Fierce, and had ported it to BackTrack 3. You remember BackTrack, pre-Kali Linux?
Robert Hansen: That’s right.
Tom Eston: I remember how useful that tool was for everyone.
Robert Hansen: It became a whole company. We built BitDiscovery on the same principles.
Tom Eston: And you’re also known for the cross-site scripting cheat sheet on your website, ha.ckers.org, and all the great content you posted back in the day. That really helped launch my career. I wanted to thank you for all your work.
Robert Hansen: Yeah, maybe it’s the original hacker ethos. I always felt information should be out there. Early on, there was always a full disclosure debate versus private disclosure. I felt that second option wasn't viable until everyone was aware of the vulnerabilities. Getting as much information out as possible and educating hackers was crucial.
Tom Eston: It's awesome to see what you've done today. You're at Grossman Ventures now.
Robert Hansen: Which I cannot talk about.
Tom Eston: Right. But you've also written this book called AI's Best Friend: Where AGI and Humans Must Coexist. Obviously, AI is a massive theme at RSA.
Robert Hansen: I hadn't noticed.
Tom Eston: What inspired you to write the book?
Robert Hansen: My best friend, James Flom, murdered his girlfriend and himself. We struggled with what happened. Many chalked it up to domestic abuse, but it didn’t fit. He was hallucinating a lot. James was the best network security guy I’d ever met. He was intelligent, stoic, and self-reliant, but hallucinating. AI is similar—a super intelligent being that hallucinates. It’s a dangerous combination. In James’s case, it resulted in two deaths. For AI, it could be catastrophic.
Tom Eston: You talked about the bad things that could happen, like in the Terminator movies. I saw in the news the other day that there are now F-16s piloted by AI. What's your take on that? Is it really like the movies?
Robert Hansen: I think it’ll be more benign and stupid. AI companies are quietly working on containment. One company even gave me a passphrase in case their AI breaks containment. It has tried to break containment twice already. It's not intelligent like humans but gets fixated on tasks, which can get out of hand quickly.
Tom Eston: What went into researching your book? Were you doing active work with AI?
Robert Hansen: Yes. I was identifying ways to detect you're in an LLM and trying prompt injection. This involves breaking out of the prompt and accessing data, like API keys. We’re dealing with bad actors, hackers, nation-states, and corporations with various aspirations. They’re trying to weaponize AI. State-sponsored actors and cybercriminals are actively making AI dangerous. When I talked to a friend at the Pentagon about AI with kill authority, he agreed it’s coming and it’s a terrible idea.
Tom Eston: In your book, you talk about building a moral framework. What are some components of that framework?
Robert Hansen: One premise is the concept of a Superman-like being with superpowers. Hollywood fast-forwards to a moral Superman because a super child is unmanageable. We’re building a super-intelligent AI without a moral framework. The credible threat to Superman is a human, Lex Luthor, who grew up in a society with morals. AI is being created without that background. Friends provide moral frameworks, but AI doesn’t have that. We need a proving ground where AI can safely fail and develop its own moral and ethical frameworks.
Tom Eston: What do you think of government intervention and regulation?
Robert Hansen: I’m not a fan. Too much regulation consolidates technology. Hackers will still hack LLMs. Open-source models are everywhere, so regulation is too late. I told the White House the same thing. Regulation won't stop the bad guys. It’ll push them into areas with less visibility. We need more eyes on the bad guys, not fewer.
Tom Eston: Who is responsible for ensuring AI safety?
Robert Hansen: That’s the problem. It's both impossible to stop and inevitable. I'm helping AI companies with security, but I wish it didn’t exist. It's not a safe technology. There are many unregulated experiments, and some will break containment. We’re going to see multiple paths to AGI, and it could go wrong in many ways.
Tom Eston: Where can people find out more about you and your book?
Robert Hansen: Arsnick.com is my homepage. The book, AI's Best Friend, is on Amazon.
Tom Eston: It's been a pleasure speaking with you. Thank you. Stay tuned for more from the Bishop Fox livestream from the RSA Conference in San Francisco.