Course 1: Classical Machine Learning Algorithms
Course Curriculum
ML models – Evolution

Introduction to Machine Learning
Supervised – Logistic Regression
Logistic regression is a popular supervised learning algorithm for binary classification tasks. It models the relationship between a dependent variable and one or more independent variables by passing a linear combination of the inputs through the sigmoid function, which yields the probability of the target class.

Linear regression for classification (Excel demo)
Sigmoid function
Example demo of Logistic regression (sklearn)
Model evaluation – metrics (ROC, AUC)
Logit function
Probability threshold
Type – Binary Logistic Regression
Type – Multinomial Logistic Regression
Regularization: demo using sklearn
Data imbalance: demo using sklearn
Hyperparameter tuning – Grid search
Assumptions
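As a quick preview of the sklearn demo topics above, here is a minimal sketch; the synthetic dataset and parameter values are illustrative, not taken from the course materials:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative synthetic binary-classification dataset
X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# C is the inverse of regularization strength: smaller C = stronger penalty
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(X_train, y_train)

# predict_proba returns sigmoid outputs; the 0.5 threshold below is the
# default and can be moved to trade precision against recall
proba = clf.predict_proba(X_test)[:, 1]
preds = (proba >= 0.5).astype(int)
auc = roc_auc_score(y_test, proba)
```

Lowering the threshold below 0.5 flags more positives, which is one common response to class imbalance.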
Supervised – Decision Trees
Decision trees are a popular supervised learning algorithm used for both classification and regression tasks. They provide a clear and interpretable representation of the decision-making process by constructing a tree-like model of decisions and their possible consequences.

What's a tree? Key terms used
Entropy: definition and examples
Gini index: definition and examples
Information gain: definition and example
Attribute selection measures
Splitting criteria
Demo: attribute selection/splitting (Excel)
Demo – using sklearn (classification)
How decision trees handle numeric features
Decision trees: regression
Hyperparameters of decision trees
Tuning a decision tree (pruning)
Data imbalance: decision trees
Strengths and limitations of a decision tree
Summary: key points on decision trees
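The impurity measures listed above (entropy, Gini index, information gain) can be computed directly from class labels; this is a small illustrative sketch, not the course's Excel demo:

```python
import numpy as np

def entropy(labels):
    # H = -sum(p * log2(p)) over class proportions p
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gini(labels):
    # G = 1 - sum(p^2)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

def information_gain(parent, left, right):
    # Gain = H(parent) - weighted average entropy of the child nodes
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left, right = parent[:4], parent[4:]     # a perfectly pure split
entropy(parent)                          # -> 1.0 (maximally impure)
gini(parent)                             # -> 0.5
information_gain(parent, left, right)    # -> 1.0 (all impurity removed)
```

At each node, a tree greedily picks the split with the highest gain (or lowest weighted Gini, the sklearn default).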
Supervised – Random Forest
Random Forest is an ensemble learning method that combines multiple decision trees to make predictions. It leverages the wisdom of the crowd by aggregating the predictions of individual trees.

What is an ENSEMBLE in machine learning?
Decision trees – weaknesses and how ensembles can help
Describe the BAGGING type of ensemble
DEMO: sklearn implementation of Random Forest
Out-of-Bag (OOB) error estimation
Hyperparameter tuning with Grid Search
sklearn versions of bagging algorithms
Interpretability and feature importance
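The OOB estimation and feature-importance topics above can be previewed in a few lines; the synthetic dataset and hyperparameter values are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic dataset
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Each tree trains on a bootstrap sample; oob_score=True evaluates every
# tree on the rows it never saw, giving a built-in validation estimate
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

oob = rf.oob_score_                      # out-of-bag accuracy
importances = rf.feature_importances_    # impurity-based, sums to 1
```

The OOB score comes for free with bagging, so no separate validation split is strictly required for a rough estimate.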
Supervised – Support Vector Machines
Support Vector Machines (SVMs) are powerful supervised learning algorithms used for classification and regression tasks. They find the decision boundary that maximizes the margin between classes.

Background on Support Vector Machines
DEMO: basic usage of the sklearn implementation of SVM
Margin maximization
Linear and non-linear classification
Support vectors
Kernel functions
C parameter and soft margin
Hyperparameter tuning
Multiclass classification
Support Vector Regression
Supervised – Naive Bayes Model
The Naive Bayes model is a popular supervised learning algorithm commonly used for classification tasks. It is based on Bayes' theorem and assumes that features are conditionally independent of each other given the class label. Despite its simplicity, Naive Bayes often performs well and is efficient in terms of training and prediction time.

Bayes' theorem: overview
Example using sklearn
Probability estimation
Training phase
Classification phase
Laplace smoothing
Assumptions and limitations
Variants
MCQs on Naive Bayes
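A minimal sketch of the sklearn example topic above, using the Gaussian variant; the synthetic dataset is illustrative, not from the course:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Illustrative synthetic dataset with continuous features
X, y = make_classification(n_samples=400, n_features=6, random_state=7)

# GaussianNB fits one normal distribution per feature per class; the
# "naive" assumption is that features are independent given the class
nb = GaussianNB()
nb.fit(X, y)

# Posterior P(class | features) for the first five rows; each row sums to 1
proba = nb.predict_proba(X[:5])
```

For discrete count features (e.g. word counts), the MultinomialNB variant with alpha=1.0 applies the Laplace smoothing covered above.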
Unsupervised – Variations of the K-means Model
Unsupervised – Hierarchical Models
Unsupervised – Density-Based Models
Unsupervised – Gaussian Mixture Models (GMM)
A Gaussian Mixture Model represents the data as a weighted sum of Gaussian distributions, with component parameters typically estimated using the Expectation-Maximization (EM) algorithm. Each data point receives a probability of belonging to each component, giving soft cluster assignments.

Probability distributions
Mixture models
Model representation
Model training
Use case: clustering
Use case: generative model
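Both GMM use cases listed above, clustering and generation, can be sketched in a few lines; the two-blob dataset is illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two illustrative Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

labels = gmm.predict(X)        # clustering use case: hard assignments
resp = gmm.predict_proba(X)    # soft assignments (responsibilities)
samples, _ = gmm.sample(10)    # generative use case: draw new points
```

Unlike K-means, the soft assignments quantify how confidently each point belongs to each cluster.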
Unsupervised – Spectral Clustering
Spectral clustering treats the data points as nodes in a graph and uses the eigenvectors of the graph Laplacian matrix to find clusters. It first constructs an affinity matrix to measure the similarity between data points and then performs dimensionality reduction based on the affinity matrix. Finally, it applies K-means or another clustering algorithm in the reduced-dimensional space to assign the data points to clusters.
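The pipeline just described maps to a short sklearn sketch; the half-moon dataset and neighbor count are illustrative choices:

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Two interleaving half-moons: a shape plain K-means separates poorly
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# affinity="nearest_neighbors" builds the similarity graph; clustering then
# happens on the eigenvectors of the graph Laplacian, as described above
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0)
labels = sc.fit_predict(X)
```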
Unsupervised – Mean Shift Clustering
Mean Shift clustering is a density-based algorithm that iteratively moves a window (kernel) over the data points, shifting it towards the region of highest density. It aims to find the modes or peaks of the underlying density function, which correspond to the cluster centers. Mean Shift clustering does not require specifying the number of clusters in advance and can handle irregularly shaped clusters.
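A minimal sklearn sketch of the above; note that the number of clusters is never passed in (the blob dataset and quantile value are illustrative):

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

# Three illustrative blobs; MeanShift is never told the cluster count
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

# The bandwidth sets the size of the kernel window that slides toward
# density peaks; the discovered peaks become the cluster centers
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth).fit(X)

n_clusters_found = len(ms.cluster_centers_)
```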
Unsupervised – Self-Organizing Maps (SOM)
SOM is an artificial neural network-based clustering technique that maps high-dimensional data onto a lower-dimensional grid. It organizes the grid nodes (neurons) based on the similarity of their weight vectors to the input data. SOM preserves the topological relationships between the data points and can reveal the underlying structure of the data.
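sklearn has no SOM implementation, so the idea above is easiest to see in a minimal from-scratch sketch; the grid size, decay schedule, and learning rate here are illustrative choices, not a reference implementation:

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr=0.5, seed=0):
    """Minimal Self-Organizing Map sketch: every sample pulls its
    best-matching grid node (and that node's neighbours) toward it."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    sigma0 = max(grid_h, grid_w) / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        sigma = sigma0 * (1 - frac) + 0.5 * frac   # shrink the neighbourhood
        alpha = lr * (1 - frac)                    # decay the learning rate
        for x in data:
            # Best-matching unit: grid node whose weight vector is closest
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood on the GRID (this preserves topology)
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))
            weights += alpha * h[..., None] * (x - weights)
    return weights

# Illustrative usage: map 3-D data in [0, 1] onto a 4x4 grid
rng = np.random.default_rng(1)
weights = train_som(rng.random((50, 3)), grid_h=4, grid_w=4, epochs=5)
```

Updating the neighbours of the winner, not just the winner, is what makes nearby grid nodes end up with similar weights, i.e. the topology preservation described above.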
These are just a few examples of clustering models commonly used in machine learning. Each model has its strengths, limitations, and assumptions, and the choice of clustering algorithm depends on the nature of the data, the desired clustering outcome, and the specific requirements of the problem at hand.
₹15,000

Level: Intermediate

Duration: 6 hours

Last Updated: April 15, 2024