Testing LLM Algorithms While AI Tests Us

The presentation delves into securing AI & LLMs, covering threat modeling, API testing, red teaming, emphasizing robustness & reliability, sparking conversation on our interactions with GenAi.

In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability is paramount. This presentation embarks on a journey through the compelling intersection of these two pivotal domains within the automation landscape. The discourse unfolds cutting-edge methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming, all aimed at fortifying security measures within these artificial narrow intelligent systems. Engage in a thought-provoking exploration of how we, as users and developers, can strategically plan and implement tests for GenAi & LLM systems, ensuring their robustness and reliability. 

The presentation not only demystifies the complexities of security testing in LLMs but also sparks a conversation about our daily interactions with GenAi, prompting us to ponder our conscious and subconscious engagements with these technologies.


Rob Ragan

About the speaker, Rob Ragan

Principal Researcher

Rob Ragan is a Principal Researcher at Bishop Fox. Rob focuses on pragmatic solutions for clients and technology. He oversees strategy for continuous security automation. Rob has presented at Black Hat, DEF CON, and RSA. He is also a contributing author to Hacking Exposed Web Applications 3rd Edition. His writing has appeared in Dark Reading and he has been quoted in publications such as Wired.

Rob has more than a decade of security experience and once worked as a Software Engineer at Hewlett-Packard's Application Security Center. Rob was also with SPI Dynamics where he was a software engineer on the dynamic analysis engine for WebInspect and the static analysis engine for DevInspect.

More by Rob

This site uses cookies to provide you with a great user experience. By continuing to use our website, you consent to the use of cookies. To find out more about the cookies we use, please see our Privacy Policy.