Course 1: GenAI – Classical Models
About Course
 Naive Bayes:
 Naive Bayes is a probabilistic classifier based on Bayes’ theorem with the “naive” assumption of independence between features. Despite its simplicity, it’s widely used in text classification and spam filtering. In a generative context, it models the joint probability distribution of features and class labels.
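As a minimal sketch of this idea, the following Gaussian Naive Bayes classifier models each class's feature distribution and picks the class with the highest posterior. The toy data and class structure here are assumptions for illustration, not part of the course material.

```python
import numpy as np

# Minimal Gaussian Naive Bayes sketch: model p(x | class) as independent
# Gaussians per feature, then classify via the highest log-posterior.
class GaussianNB:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = {c: np.mean(y == c) for c in self.classes}
        # per-class mean and variance for each feature (small epsilon for stability)
        self.stats = {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9)
                      for c in self.classes}
        return self

    def predict(self, X):
        def log_post(c):
            mu, var = self.stats[c]
            ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            return ll.sum(1) + np.log(self.priors[c])
        scores = np.stack([log_post(c) for c in self.classes], axis=1)
        return self.classes[scores.argmax(1)]

# Toy two-class data (assumed for illustration)
X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.1, 4.8]])
y = np.array([0, 0, 1, 1])
model = GaussianNB().fit(X, y)
print(model.predict(np.array([[1.1, 1.0], [5.0, 5.0]])))  # → [0 1]
```

Because the class-conditional densities and priors are modeled explicitly, this is a generative classifier: one could also sample new feature vectors from each class's fitted Gaussians.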
 Gaussian Mixture Models (GMM):
 GMM is a probabilistic model representing a mixture of Gaussian distributions. It’s often used for clustering and density estimation tasks. From a generative perspective, GMM assumes that the data points are generated from a mixture of several Gaussian distributions, with each Gaussian representing a cluster in the data.
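The generative story above can be sketched with a small EM fit: sample data from two Gaussians, then recover the mixture parameters. The component means, counts, and iteration budget are toy assumptions, not the course's code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Generative view: each point is drawn by first picking a component,
# then sampling from that component's Gaussian. Toy 1-D data, two clusters.
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 200)])

# Two-component 1-D EM sketch (fixed iteration count, no convergence check).
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(1, keepdims=True)
    # M-step: re-estimate means, variances, and mixing weights
    n = r.sum(0)
    mu = (r * data[:, None]).sum(0) / n
    sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(0) / n)
    pi = n / len(data)

print(sorted(mu.round(1)))  # recovered means should be near 0 and 6
```

Once fitted, the same parameters can generate new data by reversing the process: pick a component with probability `pi`, then sample from its Gaussian.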
 Hopfield Networks:
 Hopfield Networks are a type of recurrent neural network (RNN) with symmetric connections. They are used for associative memory tasks, where the network can recall patterns based on partial inputs. From a generative standpoint, Hopfield Networks can be used to generate patterns by starting from an initial state and allowing the network to evolve dynamically.
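The associative-recall behavior can be shown in a few lines: store patterns with a Hebbian outer-product rule, corrupt one, and let the network settle back. The two 8-bit patterns are assumed toy values chosen to be orthogonal.

```python
import numpy as np

# Hopfield recall sketch: Hebbian storage, then recovery from a corrupted cue.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])  # assumed toy patterns
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

state = patterns[0].copy()
state[0] *= -1  # flip one bit to corrupt the cue
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)  # synchronous threshold update

print((state == patterns[0]).all())  # → True
```

This is the generative use described above: starting from a partial or noisy state, the dynamics evolve toward a stored pattern (a minimum of the network's energy).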
 Boltzmann Machines:
 Boltzmann Machines are stochastic generative models that use energy-based learning. They consist of visible and hidden units with symmetric connections. Boltzmann Machines can learn the underlying structure of the data and generate new samples by sampling from the learned distribution.
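The energy-based view can be made concrete for a network small enough to enumerate: each binary state `s` has energy E(s) = -½ sᵀWs - bᵀs, and probability proportional to exp(-E(s)). The weights and biases below are assumed toy values.

```python
import numpy as np
from itertools import product

# Energy-based sketch: for a tiny 3-unit Boltzmann machine, compute the exact
# Boltzmann distribution p(s) ∝ exp(-E(s)) by enumerating all binary states.
W = np.array([[0.0, 2.0, -1.0],
              [2.0, 0.0, 0.5],
              [-1.0, 0.5, 0.0]])  # symmetric weights (assumed toy values)
b = np.array([0.1, -0.2, 0.0])

def energy(s):
    return -0.5 * s @ W @ s - b @ s

states = [np.array(s) for s in product([0, 1], repeat=3)]
unnorm = np.array([np.exp(-energy(s)) for s in states])
probs = unnorm / unnorm.sum()  # exact partition function, feasible for 3 units

best = states[np.argmax(probs)]
print(best)  # the lowest-energy state is the most probable one
```

Real Boltzmann machines cannot enumerate states like this; they rely on Gibbs sampling to draw from (and learn) the same distribution.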
 Restricted Boltzmann Machines (RBMs):
 RBMs are a variant of Boltzmann Machines with restrictions on the connections between visible and hidden units, usually arranged in a bipartite graph. They are trained using contrastive divergence or other learning algorithms. RBMs are often used for feature learning, collaborative filtering, and generative modeling tasks.
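A contrastive divergence (CD-1) update can be sketched end to end on a tiny binary RBM. The dataset of two complementary patterns, the layer sizes, and the hyperparameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# Toy data: two repeated binary patterns the RBM should learn to reconstruct.
data = np.array([[1, 1, 0, 0], [1, 1, 0, 0],
                 [0, 0, 1, 1], [0, 0, 1, 1]], float)
n_v, n_h = 4, 2
W = 0.01 * rng.standard_normal((n_v, n_h))
a, b = np.zeros(n_v), np.zeros(n_h)  # visible and hidden biases

lr = 0.1
for _ in range(2000):
    # Positive phase: hidden probabilities and a sample given the data
    v0 = data
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase (CD-1): one step of Gibbs sampling back down and up
    pv1 = sigmoid(h0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # Parameter updates from the difference of the two phases
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    a += lr * (v0 - pv1).mean(0)
    b += lr * (ph0 - ph1).mean(0)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.abs(recon - data).mean())  # reconstruction error should be small
```

The bipartite restriction is what makes this tractable: with no within-layer connections, all hidden units can be sampled in parallel given the visibles, and vice versa.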
 Deep Belief Nets (DBNs):
 DBNs are hierarchical generative models composed of multiple layers of stochastic, latent variables. They combine the layer-wise training of RBMs with a global fine-tuning step using backpropagation. DBNs can learn complex hierarchical representations of data and are often used in unsupervised and semi-supervised learning tasks.
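The greedy layer-wise procedure can be sketched by stacking two small RBMs: each is trained with CD-1 on the previous layer's hidden probabilities. The random data, layer sizes, and training budget are assumptions, and the global fine-tuning step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def train_rbm(v_data, n_h, epochs=500, lr=0.1):
    """One CD-1-trained RBM layer (simplified; returns weights and hidden bias)."""
    n_v = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_v, n_h))
    a, b = np.zeros(n_v), np.zeros(n_h)
    for _ in range(epochs):
        ph0 = sigmoid(v_data @ W + b)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + a)
        ph1 = sigmoid(pv1 @ W + b)
        W += lr * (v_data.T @ ph0 - pv1.T @ ph1) / len(v_data)
        a += lr * (v_data - pv1).mean(0)
        b += lr * (ph0 - ph1).mean(0)
    return W, b

data = rng.integers(0, 2, size=(16, 8)).astype(float)  # toy binary data
layers = []
x = data
for n_h in (6, 4):               # two stacked RBM layers
    W, b = train_rbm(x, n_h)
    layers.append((W, b))
    x = sigmoid(x @ W + b)       # propagate features up to train the next layer

print([W.shape for W, _ in layers])  # → [(8, 6), (6, 4)]
```

After this greedy pretraining, a real DBN would unroll the stack into a deep network and fine-tune all weights jointly with backpropagation.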
 Autoencoders and Variants:
 Autoencoders are neural network architectures trained to reconstruct input data, typically by learning a compressed representation (encoding) of the input. Variants include convolutional autoencoders, denoising autoencoders, and variational autoencoders (VAEs). VAEs, in particular, are probabilistic generative models that learn a latent space representation of the data and can generate new samples.
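A minimal concrete instance of the reconstruction objective is a linear autoencoder with tied weights, trained by plain gradient descent. The toy data (4-D points lying on a 2-D subspace), tied-weight choice, and learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data with intrinsic dimension 2 embedded in 4-D space.
z = rng.standard_normal((100, 2))
X = z @ rng.standard_normal((2, 4))

# Linear autoencoder sketch: encoder W (4 -> 2), decoder W.T (tied weights).
W = 0.1 * rng.standard_normal((4, 2))
lr = 0.01
mse0 = float(np.mean((X @ W @ W.T - X) ** 2))  # error before training
for _ in range(2000):
    H = X @ W            # encode to the 2-D bottleneck
    X_hat = H @ W.T      # decode back to 4-D
    err = X_hat - X
    # gradient of the mean squared reconstruction error w.r.t. the tied weights
    grad = (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad

mse = float(np.mean((X @ W @ W.T - X) ** 2))
print(mse < mse0)  # reconstruction error drops as the bottleneck is learned
```

A linear tied-weight autoencoder like this converges toward the principal subspace of the data; nonlinear encoders/decoders, denoising objectives, or the probabilistic latent space of a VAE build on the same reconstruction idea.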
 GANs and Variants:
 Generative Adversarial Networks (GANs) are a framework for training generative models by training two neural networks simultaneously, a generator and a discriminator, in a game-theoretic setup. Variants of GANs include conditional GANs, Wasserstein GANs (WGANs), and Progressive GANs. GANs are known for their ability to generate realistic samples, particularly in image synthesis tasks.
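The adversarial game has a clean analytical core that can be checked numerically: for the value function V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], the optimal discriminator for fixed densities is D*(x) = p_data(x) / (p_data(x) + p_g(x)). The two Gaussian densities below are assumed toy choices.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-6, 6, 2001)
dx = xs[1] - xs[0]
p_data = normal_pdf(xs, 0.0, 1.0)  # "real" data distribution (toy choice)
p_g = normal_pdf(xs, 2.0, 1.0)     # generator's current distribution (toy choice)

# Optimal discriminator for these fixed densities
d_star = p_data / (p_data + p_g)

# Value of the game at the optimal D, integrated numerically over the grid
v = np.sum((p_data * np.log(d_star) + p_g * np.log(1 - d_star)) * dx)

# When the generator matches the data exactly, D* = 1/2 and V = -2 log 2,
# the global optimum of the minimax game.
v_match = np.sum((p_data * np.log(0.5) + p_data * np.log(0.5)) * dx)
print(v > v_match)  # a mismatched generator leaves the discriminator an edge
```

Training a GAN amounts to the generator pushing `v` down toward the −2·log 2 floor while the discriminator pushes it up, which is why the mismatched case above sits strictly above `v_match`.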
Course Content
Overview of Generative models

Deep learning refresher
Generative Modeling – overview
Naive Bayes as a Generative model
Hidden Markov model as a generative model
GMM as a generative model
Hebbian learning
Auto Associative Memory Nets
Hopfield Nets
Boltzmann machines
Restricted Boltzmann Machines (RBMs)
Deep Belief Nets
Deep Boltzmann Machines
Boltzmann Machines for Real-Valued Data
Convolutional Boltzmann Machines
Boltzmann Machines for Structured or Sequential Outputs
Autoencoders
Variational AEs
Generative Adversarial Nets
Variants of GAN