The Principles of AI/ML Security course equips participants with a baseline understanding of how to secure this emerging technology, focusing on generative AI and its applications, industry trends, governance standards, and the risks that affect both the business and the technology itself. Participants will learn about secure integration, model explainability, and verification techniques, and will explore common AI/ML architectures and their specific vulnerabilities. Through penetration testing and threat modeling labs, including hands-on exercises in a prebuilt sandbox environment and a hybrid exercise on retrieval-augmented generation/large language model (RAG/LLM) architecture, attendees gain practical experience exploiting and mitigating risks in AI/ML projects while applying comprehensive threat modeling methodologies to secure AI/ML applications.
At the end of this course, you will be able to:
- Describe the key risks, industry trends, and governance standards affecting generative AI and its applications
- Apply secure integration, model explainability, and verification techniques to AI/ML projects
- Identify common AI/ML architectures and their specific vulnerabilities
- Conduct penetration testing and threat modeling against AI/ML applications, including RAG/LLM architectures
Delivery Format:
Duration: 8 Hours
Level: Intermediate
Intended Audience:
Development teams seeking the skills and education to write secure code and fix issues faster