Loading…
Tuesday February 11, 2025 3:00pm - 4:50pm PST
Aruneesh Salhotra, Seasoned technologist
 OWASP Certified

This learning lab addresses the growing security challenges of prompt injection attacks in generative AI models. As AI evolves, threat actors find new ways to exploit vulnerabilities in how models process prompts. Participants will explore the risks to data integrity, trustworthiness, and operational security through hands-on activities and real-world attack scenarios. You'll learn how these attacks impact AI-driven decisions and critical applications, while also discovering how models can be compromised.

On the defensive side, practical strategies for identifying and mitigating these vulnerabilities will be covered, using tools like NVIDIA NeMo to build more resilient models. Offensive techniques will also be applied to test and secure AI systems, with Google Colab serving as a platform for experimentation.

This interactive lab includes live demonstrations, coding exercises, and research insights. By the end, you'll have the skills to implement both immediate and long-term countermeasures. It’s ideal for security professionals, AI developers, and anyone keen on safeguarding AI systems, offering valuable insights and tools for emerging threats.


Speakers
avatar for Aruneesh Salhotra

Aruneesh Salhotra

Seasoned Technologist
Aruneesh Salhotra is a seasoned technologist and servant leader, renowned for his extensive expertise across cybersecurity, DevSecOps, AI, Business Continuity, Audit, Sales. His impactful presence as an industry thought leader is underscored by his contributions as a speaker and panelist... Read More →
Tuesday February 11, 2025 3:00pm - 4:50pm PST
DevExec World Stage
  AI DevWorld

Sign up or log in to save this to your schedule, view media, leave feedback and see who's attending!

Share Modal

Share this link via

Or copy link