1. Introduction to Large Language Models
2. How Large Language Models (LLMs) are trained
3. Capabilities of LLMs
4. Challenges of LLMs
5. Introduction to Transformers - Attention Is All You Need
6. Positional Encodings
7. Self-Attention & Multi-Head Attention
8. Self-Attention & Multi-Head Attention - Deep Dive
9. Understanding Masked Multi-Head Attention
10. Masked Multi-Head Attention - Deep Dive
11. Encoder-Decoder Architecture
12. Customization of LLMs - Prompt Engineering
13. Customization of LLMs - Prompt Learning - Prompt Tuning & P-Tuning
14. Difference between Prompt Tuning and P-Tuning
15. PEFT - Parameter-Efficient Fine-Tuning
16. Training Data for LLMs
17. Pillars of LLM Training Data: Quality, Diversity, and Ethics
18. Data Cleaning for LLMs
19. Biases in Large Language Models
20. Loss Functions for LLMs