Testing LLM Algorithms While AI Tests Us
- Date:
- Wednesday, June 19, 2024
- Time:
- 11 a.m. PT / 2 p.m. ET
In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability is paramount. This presentation embarks on a journey through the compelling intersection of these two pivotal domains within the automation landscape. The discourse unfolds cutting-edge methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming, all aimed at fortifying security measures within these artificial narrow intelligent systems. Engage in a thought-provoking exploration of how we, as users and developers, can strategically plan and implement tests for GenAi & LLM systems, ensuring their robustness and reliability.
The presentation not only demystifies the complexities of security testing in LLMs but also sparks a conversation about our daily interactions with GenAi, prompting us to ponder our conscious and subconscious engagements with these technologies.