The Synopsys Software Integrity Group is now Black Duck®. Learn More

close search bar

Sorry, not available in this language yet

close language selection

Principles of AI/ML Security

Course Description

The Principles of AI/ML Security course aims to equip participants with an understanding of the baseline ideas for securing this greenfield technology, focusing on generative AI and its applications, industry trends, governance standards, and the risks that affect both the business and the technology itself. Participants will learn about secure integration, model explainability, and verification techniques, and they will explore common AI/ML architectures and their specific vulnerabilities. Through labs on penetration testing and threat modeling, including hands-on exercises in a prebuilt sandbox environment and a hybrid exercise focusing on retrieval augmented generation/large language model (RAG/LLM) architecture, attendees will gain practical experience with exploiting and mitigating risks in AI/ML projects while applying comprehensive threat modeling methodologies to secure AI/ML applications effectively.

Learning Objectives

At the end of this course, you will be able to

  • Recognize industry standards and governance
  • Identify AI/ML security risks at each development stage
  • Apply threat modeling techniques to analyze security of AI/ML projects
  • Apply threat modeling techniques to mitigate risks in AI/ML projects

Details

Delivery Format

  • Traditional Classroom
  • Virtual Classroom

Duration:  8 Hours

Level: Intermediate

Intended Audience

  • Architects
  • DevSecOps
  • Developers
  • Security Practitioners

 

Training

Developer Security Training

Equip development teams with the skills and education to write secure code and fix issues faster