
Machine Learning Essentials (2023)

Course curriculum

Kickstart machine learning, understand the maths behind essential algorithms, implement them in Python, and build 8+ projects!


1. Introduction
  • 1. Course Overview
  • 2. Artificial Intelligence
  • 3. Machine Learning
  • 4. Deep Learning
  • 5. Computer Vision
  • 6. Natural Language Processing
  • 7. Automatic Speech Recognition
  • 8. Reinforcement Learning
  • 9. Pre-requisites.html
  • 10. Code Repository.html

2. Supervised vs Unsupervised Learning
  • 1. Supervised Learning Introduction
  • 2. Supervised Learning Example
  • 3. Unsupervised Learning

3. Linear Regression (sketch below)
  • 1. Introduction to Linear Regression
  • 2. Notation
  • 3. Hypothesis
  • 4. Loss / Error Function
  • 5. Training Idea
  • 6. Gradient Descent Optimisation
  • 7. Gradient Descent Code
  • 8. Gradient Descent - for Linear Regression
  • 9. The Math of Training
  • 10. Code 01 - Data Generation
  • 11. Code 02 - Data Normalisation
  • 12. Code 03 - Train Test Split
  • 13. Code 04 - Modelling
  • 14. Code 05 - Predictions
  • 15. R2 Score
  • 16. Code 06 - Evaluation
  • 17. Code 07 - Visualisation
  • 18. Code 08 - Trajectory [Optional]
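
As a companion to the Linear Regression section above, here is a minimal sketch, not the course's own code, of the full workflow: the hypothesis y_hat = w*x + b, a mean squared error loss, gradient descent updates, and the R2 score used for evaluation. The synthetic data and variable names are illustrative assumptions.

    import numpy as np

    # Synthetic data: y = 3x + 2 + noise (a stand-in for the course's generated dataset)
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, 200)
    y = 3 * X + 2 + rng.normal(0, 0.2, 200)

    # Gradient descent on the MSE loss for the hypothesis y_hat = w*x + b
    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        y_hat = w * X + b
        grad_w = np.mean((y_hat - y) * X)   # dL/dw
        grad_b = np.mean(y_hat - y)         # dL/db
        w -= lr * grad_w
        b -= lr * grad_b

    # R2 score: 1 - (residual sum of squares / total sum of squares)
    y_hat = w * X + b
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    print(f"w={w:.2f}, b={b:.2f}, R2={r2:.3f}")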

4. Linear Regression - Multiple Features (sketch below)
  • 1. Introduction
  • 2. Hypothesis
  • 3. Loss Function
  • 4. Training & Gradient Updates
  • 5. Code 01 - Data Prep
  • 6. Code 02 - Hypothesis
  • 7. Code 03 - Loss Function
  • 8. Code 04 - Gradient Computation
  • 9. Code 05 - Training Loop
  • 10. A Note about Shapes
  • 11. Code 06 - Evaluation
  • 12. Linear Regression using Sk-Learn
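
The multi-feature case vectorises the same idea; the sketch below (an illustration on assumed synthetic data, not the course code) shows the array shapes that the "Note about Shapes" lecture is concerned with, plus the one-line Sk-Learn equivalent.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    m, n = 200, 3
    rng = np.random.default_rng(1)
    X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])   # (m, n+1), bias column of ones
    true_w = np.array([2.0, -1.0, 0.5, 3.0])
    y = X @ true_w + rng.normal(0, 0.1, m)                       # (m,)

    w = np.zeros(n + 1)                                          # (n+1,)
    for _ in range(2000):
        err = X @ w - y                                          # (m,)
        grad = X.T @ err / m                                     # (n+1,)
        w -= 0.1 * grad
    print(w)                                                     # approaches true_w

    # Same model via Sk-Learn (it adds its own intercept, so skip the bias column)
    print(LinearRegression().fit(X[:, 1:], y).coef_)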

5. Logistic Regression (sketch below)
  • 1. Binary Classification Introduction
  • 2. Notation
  • 3. Hypothesis Function
  • 4. Binary Cross-Entropy Loss Function
  • 5. Gradient Update Rule
  • 6. Code 01 - Data Prep
  • 7. Code 02 - Hypothesis Logit Model
  • 8. Code 03 - Binary Cross Entropy Loss
  • 9. Code 04 - Gradient Computation
  • 10. Code 05 - Training Loop
  • 11. Code 06 - Visualise Decision Boundary
  • 12. Code 07 - Predictions & Accuracy
  • 13. Logistic Regression using Sk-Learn
  • 14. Multiclass Classification One Vs Rest
  • 15. Multiclass Classification One Vs One
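
A compact sketch of binary logistic regression as covered above: sigmoid hypothesis, binary cross-entropy loss, and the gradient update (which has the same (p - y) form as linear regression). The data and names are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Two toy Gaussian blobs labelled 0 and 1
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(1000):
        p = sigmoid(X @ w + b)                  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient of BCE w.r.t. w
        b -= lr * np.mean(p - y)                # gradient of BCE w.r.t. b

    p = sigmoid(X @ w + b)
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    print(f"BCE={bce:.3f}, accuracy={np.mean((p > 0.5) == y):.2f}")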

6. Dimensionality Reduction & Feature Selection (sketch below)
  • 1. Curse of Dimensionality
  • 2. Feature Selection Vs. Feature Extraction
  • 3. Filter Method
  • 4. Wrapper Method
  • 5. Embedded Method
  • 6.1 train.csv
  • 6. Feature Selection - Code
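
A sketch of the filter and wrapper methods named above, using Sk-Learn on a built-in dataset as a stand-in for the attached train.csv (the dataset choice and k=10 are assumptions for illustration).

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import RFE, SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)

    # Filter method: rank features with a univariate statistic (ANOVA F-test), keep the top k
    X_filtered = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)
    print(X.shape, "->", X_filtered.shape)

    # Wrapper method: recursively drop the weakest features according to a fitted model
    rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)
    print(rfe.support_.sum(), "features kept")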

7. Principal Component Analysis (PCA) (sketch below)
  • 1. Introduction to PCA
  • 2. Conceptual Overview of PCA
  • 3. Maximising Variance
  • 4. Minimising Distances
  • 5. Eigen Values & Eigen Vectors
  • 6. PCA Summary
  • 7. Understanding Eigen Values
  • 8. PCA Code
  • 9. Choosing the right dimensions
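
PCA as summarised in this section can be written directly from the eigenvalues and eigenvectors of the covariance matrix; the sketch below (illustrative, on random correlated data) also shows the "choosing the right dimensions" step of keeping enough components for 95% of the variance.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))    # correlated features

    Xc = X - X.mean(axis=0)                                    # centre the data
    cov = np.cov(Xc, rowvar=False)                             # (5, 5) covariance matrix
    eig_vals, eig_vecs = np.linalg.eigh(cov)                   # eigh: for symmetric matrices
    order = np.argsort(eig_vals)[::-1]                         # largest variance first
    eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]

    explained = np.cumsum(eig_vals) / eig_vals.sum()
    k = int(np.searchsorted(explained, 0.95)) + 1              # smallest k covering 95% variance
    X_reduced = Xc @ eig_vecs[:, :k]                           # project onto the top-k components
    print(k, X_reduced.shape)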

8. K-Nearest Neighbours (sketch below)
  • 1. Introduction
  • 2. KNN Idea
  • 3. KNN Data Prep
  • 4. KNN Algorithm Code
  • 5. Euclidean and Manhattan Distance
  • 6. Deciding value of K
  • 7. KNN and Data Standardisation
  • 8. KNN Pros and Cons
  • 9. KNN using Sk-Learn.html
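
A minimal from-scratch KNN classifier matching the idea in this section: Euclidean distances, the k nearest neighbours, and a majority vote (standardise features first in practice, as the section notes). Data and names are illustrative.

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x_query, k=5):
        dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))   # Euclidean distances
        nearest = np.argsort(dists)[:k]                           # indices of the k closest points
        return Counter(y_train[nearest]).most_common(1)[0][0]     # majority vote

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(knn_predict(X, y, np.array([3.5, 3.5]), k=5))           # expected label: 1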

9. PROJECT - Face Recognition (sketch below)
  • 1. OpenCV - Working with Images
  • 2. OpenCV - Video Input from WebCam
  • 3. Object Detection using Haarcascades
  • 4. Face Detection in Images
  • 5. Face Detection in Live Video
  • 6. Face Recognition Project Intro
  • 7. Face Recognition 01 - Data Collection
  • 8. Face Recognition 02 - Loading Data
  • 9. Face Recognition 03 - Predictions using KNN
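
The detection half of this project can be sketched with OpenCV's bundled Haar cascade; this is a hedged outline of the webcam loop (the recognition step, KNN over collected face crops, is left out), with no claim of matching the course's exact code.

    import cv2

    # Pre-trained frontal-face Haar cascade shipped with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)                      # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()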

10. K-Means (sketch below)
  • 1. K-Means Algorithm
  • 2. Code 01 - Data Prep
  • 3. Code 02 - Init Centers
  • 4. Code 03 - Assigning Points
  • 5. Code 04 - Updating Centroids
  • 6. Code 05 - Visualizing K-Means & Results
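
The K-Means loop from this section, assign points to the nearest centre and then move each centre to the mean of its points, in a short illustrative sketch on synthetic blobs.

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]        # init: k random points
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)                        # assign to the nearest centre
            new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):                # converged
                break
            centers = new_centers                                # update centroids
        return centers, labels

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in [(0, 0), (4, 0), (2, 4)]])
    centers, labels = kmeans(X, k=3)
    print(centers)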

11. Project - Dominant Color Extraction (sketch below)
  • 1. Introduction
  • 2. Reading Images
  • 3. Finding Clusters
  • 4. Dominant Color Swatches
  • 5. Image in K-Colors
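
The dominant colour idea reduces to running K-Means on the image's pixels and reading off the cluster centres; a sketch under assumptions (the file name "image.jpg" is a placeholder, and Sk-Learn's KMeans stands in for whichever implementation the project uses).

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    img = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
    pixels = img.reshape(-1, 3).astype(np.float64)      # every pixel is a point in RGB space

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
    swatches = km.cluster_centers_.astype(np.uint8)     # the 4 dominant colours
    print(swatches)

    # "Image in K colours": repaint every pixel with its cluster's centre colour
    quantised = swatches[km.labels_].reshape(img.shape)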

12. Naive Bayes Algorithm (sketch below)
  • 1. Bayes Theorem
  • 2. Derivation of Bayes Theorem
  • 3. Bayes Theorem Question
  • 4. Naive Bayes Algorithm
  • 5. Naive Bayes for Text Classification
  • 6. Computing Likelihood
  • 7.1 golf.csv
  • 7. Understanding Golf Dataset
  • 8. CODE - Prior Probability
  • 9. CODE - Conditional Probability
  • 10. CODE - Likelihood
  • 11. CODE - Prediction
  • 12. Implementing Naive Bayes - Sklearn
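
A sketch of the prior, conditional probability, and likelihood computations from this section on a tiny golf-style table; the column names only loosely mirror the attached golf.csv and are assumptions here.

    import pandas as pd
    from math import prod

    df = pd.DataFrame({
        "Outlook": ["Sunny", "Sunny", "Overcast", "Rain", "Rain", "Overcast", "Sunny"],
        "Windy":   [False, True, False, False, True, True, False],
        "Play":    ["No", "No", "Yes", "Yes", "No", "Yes", "Yes"],
    })

    def prior(label):
        return (df["Play"] == label).mean()                       # P(Play = label)

    def conditional(feature, value, label):
        subset = df[df["Play"] == label]
        return (subset[feature] == value).mean()                  # P(feature = value | Play = label)

    # Likelihood of each class for Outlook=Sunny, Windy=False (naive independence assumption)
    x = {"Outlook": "Sunny", "Windy": False}
    scores = {label: prior(label) * prod(conditional(f, v, label) for f, v in x.items())
              for label in ["Yes", "No"]}
    print(max(scores, key=scores.get), scores)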

13. Multinomial Naive Bayes (sketch below)
  • 1. Multinomial Naive Bayes
  • 2. Laplace Smoothing
  • 3. Multinomial Naive Bayes Example
  • 4. Bernoulli Naive Bayes
  • 5. Bernoulli Naive Bayes Example
  • 6. Bias Variance Tradeoff
  • 7. Gaussian Naive Bayes
  • 8. CODE - Variants of Naive Bayes
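
The Naive Bayes variants compared in this section share one Sk-Learn interface; the rough sketch below uses the iris dataset purely as a stand-in (an assumption, not the course's data) and shows where Laplace smoothing (alpha=1.0) enters the count-based variants.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # alpha=1.0 is Laplace (add-one) smoothing; GaussianNB models continuous features directly
    for model in [GaussianNB(), MultinomialNB(alpha=1.0), BernoulliNB(alpha=1.0)]:
        print(type(model).__name__, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))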

14. PROJECT Spam Classifier (sketch below)
  • 1. Project Overview
  • 2. Data Cleaning
  • 3. WordCloud
  • 4. Text Featurization
  • 5. Model Building
  • 6. Model Evaluation
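
The project's text featurisation and model building steps collapse to a bag-of-words vectoriser feeding Multinomial Naive Bayes; the five-message corpus below is a made-up stand-in for the real spam dataset.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["win a free prize now", "lowest price meds here", "meeting at 10am tomorrow",
             "free entry to win cash", "lunch with the team today"]
    labels = [1, 1, 0, 1, 0]                                   # 1 = spam, 0 = ham

    clf = make_pipeline(CountVectorizer(), MultinomialNB())    # featurise, then classify
    clf.fit(texts, labels)
    print(clf.predict(["free cash prize", "see you at the meeting"]))   # likely [1 0]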

15. Decision Trees (sketch below)
  • 1. Decision Trees Introduction
  • 2. Decision Trees Example
  • 3. Entropy
  • 4. CODE Entropy
  • 5. Information Gain
  • 6. CODE Split Data
  • 7. CODE Information Gain
  • 8. Construction of Decision Trees
  • 9. Stopping Conditions

16. Decision Trees Implementation (sketch below)
  • 1. CODE - Decision Tree Node
  • 2. CODE - Train Decision Tree
  • 3. CODE - Assign Target Variable to Each Node
  • 4. CODE - Stopping Conditions
  • 5. CODE - Train Child Nodes
  • 6. CODE - Explore Decision Tree Model
  • 7. CODE - Prediction
  • 8. Handling Numeric Features
  • 9. Bias Variance Tradeoff
  • 10. Decision Trees for Regression
  • 11. Decision Tree Code - Sklearn
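
Alongside the from-scratch implementation above, the Sk-Learn version (lecture 11) fits in a few lines; max_depth and min_samples_split play the role of the stopping conditions, and the dataset here is just an illustrative stand-in.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, min_samples_split=5)
    tree.fit(X_tr, y_tr)
    print("test accuracy:", tree.score(X_te, y_te))
    print(export_text(tree, feature_names=list(data.feature_names)))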

17. PROJECT Titanic Survival Prediction (sketch below)
  • 1.1 titanic train.csv
  • 1. Project Overview
  • 2. Exploratory Data Analysis
  • 3. Exploratory Data Analysis - II
  • 4. Data Preparation for ML Model
  • 5. Handling Missing Values
  • 6. Decision Tree Model Building
  • 7. Visualize Decision Tree
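
A hedged sketch of the data preparation and modelling steps in this project; the column names follow the standard Kaggle Titanic layout (Pclass, Sex, Age, Fare, Survived), which is an assumption about the attached titanic train.csv.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier

    df = pd.read_csv("titanic train.csv")                    # the attached dataset
    df["Age"] = df["Age"].fillna(df["Age"].median())         # handle missing values
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})      # encode the categorical column

    X = df[["Pclass", "Sex", "Age", "Fare"]]
    y = df["Survived"]
    model = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print("train accuracy:", model.score(X, y))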

18. Ensemble Learning Bagging (sketch below)
  • 1. Ensemble Learning
  • 2. Bagging Model
  • 3. Why Bagging Helps
  • 4. Random Forest Algorithm
  • 5. Bias Variance Tradeoff
  • 6. CODE Random Forest
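
Random forests are the bagging recipe from this section with one extra trick, a random feature subset per split; a minimal Sk-Learn sketch on a stand-in dataset:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each of the 200 trees is trained on a bootstrap sample (bagging)
    # and considers only sqrt(n_features) candidate features at every split
    rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
    rf.fit(X_tr, y_tr)
    print("test accuracy:", rf.score(X_te, y_te))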

19. Ensemble Learning Boosting (sketch below)
  • 1. Boosting Introduction
  • 2. Boosting Intuition
  • 3. Boosting Mathematical Formulation
  • 4. Concept of Pseudo Residuals
  • 5. GBDT Algorithm
  • 6. Bias Variance Tradeoff
  • 7. CODE - Gradient Boosting Decision Trees
  • 8. XGBoost
  • 9. Adaptive Boosting (AdaBoost)

20. PROJECT Customer Churn Prediction (sketch below)
  • 1. Project Overview
  • 2. Exploratory Data Analysis
  • 3. Data Visualisation
  • 4. Finding relations
  • 5. Data Preparation
  • 6. Model Building
  • 7. Hyperparameter tuning
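
The hyperparameter tuning step of this project typically comes down to a cross-validated grid search; the sketch below uses a built-in dataset and a random forest purely as stand-ins for the churn data and whichever model the project settles on.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = load_breast_cancer(return_X_y=True)

    param_grid = {"n_estimators": [100, 300], "max_depth": [4, 8, None]}
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))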

21. Deep Learning Introduction - Neural Network (sketch below)
  • 1. Biological Neural Network
  • 2. A Neuron
  • 3. How Does a Perceptron Learn
  • 4. Gradient Descent Updates
  • 5. Neural Networks
  • 6. 3 Layer NN
  • 7. Why Neural Nets
  • 8. Tensorflow Playground
  • 9. CODE - Data Preparation
  • 10. CODE - Model Building
  • 11. CODE - Model Training and Testing

22. PROJECT Pokemon Image Classification (sketch below)
  • 1.1 Dataset Link.html
  • 1. Introduction
  • 2. The Data
  • 3. Structured Data
  • 4. Data Loading
  • 5. Data Preprocessing
  • 6. Model Architecture
  • 7. Softmax Function
  • 8. Model Training
  • 9. Model Evaluation
  • 10. Predictions
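
The softmax function from lecture 7 turns the network's raw class scores into probabilities that sum to 1; a small NumPy sketch:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)     # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    logits = np.array([[2.0, 1.0, 0.1]])
    print(softmax(logits))                        # approx [[0.659, 0.242, 0.099]]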
  • 63,400 تومان
    بیش از یک محصول به صورت دانلودی میخواهید؟ محصول را به سبد خرید اضافه کنید.
    خرید دانلودی فوری

    در این روش نیاز به افزودن محصول به سبد خرید و تکمیل اطلاعات نیست و شما پس از وارد کردن ایمیل خود و طی کردن مراحل پرداخت لینک های دریافت محصولات را در ایمیل خود دریافت خواهید کرد.

    ایمیل شما:

ID: 9601
Size: 16,233 MB
Duration: 1,678 minutes
Release date: 22 Farvardin 1402 (11 April 2023)