1 - Introduction
2 - Understanding How to Fine-tune a Model
3 - Understanding the Large Dataset Used to Train the Model
4 - Setting Up the Environment with a Jupyter Notebook
5 - Step 1 Importing the Dataset from HuggingFace (sketch below)
6 - Analyzing the Dataset for Better Understanding
7 - Step 2 Applying the Tokenizer to Convert Text to Tokens (sketch below)
8 - Step 3 Splitting Train, Test, and Validation Data (sketch below)
9 - Creating a Dataset in DatasetDict Format with the Training Data
10 - Step 4 Tokenizing All Data in the Dataset (sketch below)
11 - Understanding the Base Model and Using It with AutoModel
12 - Step 5 Using AutoModelForSequenceClassification for the Model Head (sketch below)
13 - Step 6 Creating Training Arguments (sketch below)
14 - Step 7 Creating a compute_metrics Function for Model Evaluation (sketch below)
15 - Step 8 Creating and Running the Trainer to Train the Model (sketch below)
16 - Understanding Trainer Metrics After Training Is Complete
17 - Using Our Fine-Tuned Model from the Local Machine and Running Inference via Pipeline (sketch below)
18 - Uploading Our Fine-Tuned Model to HuggingFace and Using It from HuggingFace (sketch below)
19 - TrainingModel.zip
20 - Complete Source.html
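The sketches below illustrate the hands-on steps with the HuggingFace `datasets` and `transformers` libraries. They form one continuous script (later sketches reuse names defined earlier), and every concrete choice in them is an assumption standing in for whatever the course actually uses: the imdb dataset, the distilbert-base-uncased checkpoint, the finetuned-model output directory, and all hyperparameters.

Step 1 sketch: loading a dataset from the HuggingFace Hub and taking a first look at its splits, columns, and rows (items 5 and 6 above).

```python
from datasets import load_dataset

# "imdb" is an assumed stand-in for the course's dataset.
dataset = load_dataset("imdb")

# Quick analysis: available splits, row counts, column types, one example.
print(dataset)
print(dataset["train"].features)
print(dataset["train"][0])
```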
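Step 2 sketch: converting text to tokens with AutoTokenizer (item 7). The checkpoint is an assumption; the tokenizer must match the base model you fine-tune.

```python
from transformers import AutoTokenizer

# Checkpoint is an assumption; use the same one as the base model.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

encoded = tokenizer("Fine-tuning adapts a pretrained model to one task.")
print(encoded["input_ids"])                                   # numeric token ids
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # readable subwords
```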
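Step 3 sketch: carving a validation split out of the training data and repackaging everything as a DatasetDict (items 8 and 9). The 90/10 split ratio is an assumption.

```python
from datasets import DatasetDict

# Hold out 10% of the training split for validation (ratio assumed).
split = dataset["train"].train_test_split(test_size=0.1, seed=42)

dataset = DatasetDict({
    "train": split["train"],
    "validation": split["test"],   # the held-out 10%
    "test": dataset["test"],       # the dataset's original test split
})
print(dataset)
```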
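Step 4 sketch: tokenizing every split in one pass with `Dataset.map` (item 10). The "text" column name is an assumption taken from the imdb stand-in.

```python
# batched=True passes chunks of rows to the tokenizer at once, for speed.
def tokenize_function(batch):
    return tokenizer(batch["text"], truncation=True)  # "text" column assumed

tokenized_dataset = dataset.map(tokenize_function, batched=True)
```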
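Step 5 sketch: AutoModel loads the bare base model, which outputs hidden states but has no task head; AutoModelForSequenceClassification stacks a freshly initialized classification head on top, and that head is what fine-tuning trains (items 11 and 12). Two labels are assumed.

```python
from transformers import AutoModel, AutoModelForSequenceClassification

# Bare base model: hidden states only, no task head.
base_model = AutoModel.from_pretrained("distilbert-base-uncased")

# Base model plus a new, randomly initialized classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # 2 labels assumed
)
```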
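Step 6 sketch: the TrainingArguments object that configures the run (item 13). Every value shown is an assumption, not the course's exact setting.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-model",      # checkpoint directory (name assumed)
    num_train_epochs=3,                # all hyperparameters assumed
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    evaluation_strategy="epoch",       # renamed eval_strategy in newer releases
)
```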
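Step 7 sketch: a compute_metrics function the Trainer calls at evaluation time with the model's logits and the true labels (item 14).

```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return {"accuracy": float((predictions == labels).mean())}
```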
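Step 8 sketch: wiring everything into a Trainer, training, and reading the metrics afterwards (items 15 and 16). `trainer.evaluate()` returns a dict with the evaluation loss, the accuracy from compute_metrics, and runtime figures.

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    tokenizer=tokenizer,               # enables dynamic padding per batch
    compute_metrics=compute_metrics,
)

trainer.train()                        # fine-tune; loss is logged as it runs
print(trainer.evaluate())              # eval_loss, eval_accuracy, runtime, ...
trainer.save_model("finetuned-model")  # write model and tokenizer files locally
```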
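Local inference sketch: loading the saved model back from disk through pipeline (item 17). The directory name matches the one saved above.

```python
from transformers import pipeline

# Load the fine-tuned model from the local directory saved above.
classifier = pipeline("text-classification", model="finetuned-model")
print(classifier("This course made fine-tuning click for me."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]; label names depend on the config
```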
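Publishing sketch: pushing the model and tokenizer to the HuggingFace Hub and loading them back by repo id (item 18). The repo name and the your-username placeholder are assumptions.

```python
from huggingface_hub import login
from transformers import pipeline

login()  # paste a HuggingFace access token with write permission

model.push_to_hub("finetuned-model")      # repo name assumed
tokenizer.push_to_hub("finetuned-model")

# Load it straight from the Hub ("your-username" is a placeholder).
classifier = pipeline("text-classification",
                      model="your-username/finetuned-model")
```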