Securing Generative AI: Introduction

Learning objectives
1.1 Understanding the Significance of LLMs in the AI Landscape
1.2 Exploring the Resources for This Course: GitHub Repositories and Others
1.3 Introducing Retrieval-Augmented Generation (RAG)
1.4 Understanding the OWASP Top 10 Risks for LLMs
1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework

Learning objectives
2.1 Defining Prompt Injection Attacks
2.2 Exploring Real-life Prompt Injection Attacks
2.3 Using ChatML in OpenAI API Calls to Indicate the Source of Prompt Input to the LLM
2.4 Enforcing Privilege Control on LLM Access to Backend Systems
2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
2.6 Understanding Insecure Output Handling Attacks
2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling

Learning objectives
3.1 Understanding Training Data Poisoning Attacks
3.2 Exploring Model Denial of Service Attacks
3.3 Understanding the Risks of the AI and ML Supply Chain
3.4 Best Practices When Using Open-Source Models from Hugging Face and Other Sources
3.5 Securing Amazon Bedrock, Amazon SageMaker, Microsoft Azure AI Services, and Other Environments

Learning objectives
4.1 Understanding Sensitive Information Disclosure
4.2 Exploiting Insecure Plugin Design
4.3 Avoiding Excessive Agency

Learning objectives
5.1 Understanding Overreliance
5.2 Exploring Model Theft Attacks
5.3 Understanding Red Teaming of AI Models

Learning objectives
6.1 Understanding RAG, LangChain, LlamaIndex, and AI Orchestration
6.2 Securing Embedding Models
6.3 Securing Vector Databases
6.4 Monitoring and Incident Response