We are working on the course content at rocket speed! Soon we will have all the courses loaded.

Course 1: GenAI – Classical models

About Course

  1. Naive Bayes:
    • Naive Bayes is a probabilistic classifier based on Bayes’ theorem with the “naive” assumption that the features are conditionally independent given the class. Despite its simplicity, it’s widely used in text classification and spam filtering. In a generative context, it models the joint probability distribution of features and class labels.
  2. Gaussian Mixture Models (GMM):
    • GMM is a probabilistic model representing a mixture of Gaussian distributions. It’s often used for clustering and density estimation tasks. From a generative perspective, GMM assumes that the data points are generated from a mixture of several Gaussian distributions, with each Gaussian representing a cluster in the data.
  3. Hopfield Networks:
    • Hopfield Networks are a type of recurrent neural network (RNN) with symmetric connections. They are used for associative memory tasks, where the network can recall patterns based on partial inputs. From a generative standpoint, Hopfield Networks can be used to generate patterns by starting from an initial state and allowing the network to evolve dynamically.
  4. Boltzmann Machines:
    • Boltzmann Machines are stochastic generative models that use energy-based learning. They consist of visible and hidden units with symmetric connections. Boltzmann Machines can learn the underlying structure of the data and generate new samples by sampling from the learned distribution.
  5. Restricted Boltzmann Machines (RBMs):
    • RBMs are a variant of Boltzmann Machines with restrictions on the connections between visible and hidden units, arranged in a bipartite graph (no visible-to-visible or hidden-to-hidden connections). They are trained using contrastive divergence or other learning algorithms. RBMs are often used for feature learning, collaborative filtering, and generative modeling tasks; a minimal contrastive-divergence sketch appears after this list.
  6. Deep Belief Nets (DBNs):
    • DBNs are hierarchical generative models composed of multiple layers of stochastic, latent variables. They combine the layer-wise training of RBMs with a global fine-tuning step using backpropagation. DBNs can learn complex hierarchical representations of data and are often used in unsupervised and semi-supervised learning tasks.
  7. Autoencoders and Variants:
    • Autoencoders are neural network architectures trained to reconstruct input data, typically by learning a compressed representation (encoding) of the input. Variants include convolutional autoencoders, denoising autoencoders, and variational autoencoders (VAEs). VAEs, in particular, are probabilistic generative models that learn a latent space representation of the data and can generate new samples; a minimal autoencoder sketch appears after this list.
  8. GANs and Variants:
    • Generative Adversarial Networks (GANs) are a framework for training generative models by simultaneously training two neural networks, a generator and a discriminator, in a game-theoretic setup. Variants of GANs include conditional GANs, Wasserstein GANs (WGANs), and Progressive GANs. GANs are known for their ability to generate realistic samples, particularly in image synthesis tasks; a minimal GAN training sketch appears after this list.
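
To make the Boltzmann-machine material in items 4 and 5 concrete, here is a minimal NumPy sketch of an RBM trained with one-step contrastive divergence (CD-1). The layer sizes, learning rate, toy binary patterns, and training length are illustrative assumptions, not values from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3          # illustrative sizes
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)           # visible biases
b_h = np.zeros(n_hidden)            # hidden biases
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary training data: two repeating patterns (illustrative)
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(100):
    for v0 in data:
        # Positive phase: hidden activations driven by the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase (CD-1): one Gibbs step back to visibles, then hiddens
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Contrastive-divergence parameter updates
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# Generative use: start from random visibles and run a short Gibbs chain
v = (rng.random(n_visible) < 0.5).astype(float)
for _ in range(20):
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
print(v)  # should resemble one of the training patterns
```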
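
For item 7, a minimal plain autoencoder sketch in PyTorch showing the reconstruction objective (this is not a VAE); the layer sizes, random stand-in data, and training length are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Tiny autoencoder: compress 8-dim inputs to a 2-dim code and reconstruct them
encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 8))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

x = torch.randn(256, 8)  # stand-in dataset

for _ in range(500):
    code = encoder(x)                  # compressed (encoded) representation
    recon = decoder(code)              # reconstruction of the input
    loss = ((recon - x) ** 2).mean()   # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # reconstruction error after training
```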
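
And for item 8, a minimal GAN training loop sketch in PyTorch showing the generator/discriminator game; the network sizes, learning rates, and the synthetic "real" data are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # illustrative sizes

# Generator: maps latent noise z to a fake sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit for how "real" a sample looks
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in "real" data

for step in range(200):
    # Discriminator step: push real samples toward 1, generated samples toward 0
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real
    z = torch.randn(64, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(5, latent_dim))  # draw new samples from the trained generator
print(samples)
```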

What Will You Learn?

  • Generative Versus Discriminative Modeling
  • Refresher on Deep Learning
  • Hebb Learning
  • Auto Associative Memory Nets
  • Hopfield Nets
  • Boltzmann Machines / Restricted Boltzmann Machines (RBMs)
  • Deep Belief Nets

Course Content

Overview of Generative models

  • Deep learning refresher
  • Generative Modeling – overview

Naive Bayes as a Generative model
Although Naive Bayes is typically used as a classifier, it can also be used as a generative model. The generative aspect arises from the ability of Naive Bayes to estimate the joint probability distribution of the input features and the class labels. By modeling this joint distribution, new samples can be generated from the learned model.
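
As a minimal sketch of this generative view, the following fits class priors and Gaussian class-conditional feature distributions (the Gaussian Naive Bayes assumption) on toy data, then samples new (x, y) pairs from the learned joint distribution. The toy data and class setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two classes with different feature means (illustrative)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# "Fit" Gaussian Naive Bayes: class priors plus per-class, per-feature mean/std
classes = np.unique(y)
priors = np.array([np.mean(y == c) for c in classes])
means = np.array([X[y == c].mean(axis=0) for c in classes])
stds = np.array([X[y == c].std(axis=0) for c in classes])

def sample(n):
    """Generate new (x, y) pairs from the learned joint distribution p(x, y)."""
    labels = rng.choice(classes, size=n, p=priors)    # sample y ~ p(y)
    feats = rng.normal(means[labels], stds[labels])   # sample x ~ p(x | y), features independent
    return feats, labels

new_X, new_y = sample(5)
print(new_X, new_y)
```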

Hidden Markov model as a generative model
Hidden Markov Models (HMMs) are commonly used as generative models. HMMs are probabilistic models capable of generating sequences of observations based on an underlying hidden state sequence. In an HMM, the hidden states represent an unobservable or latent process, while the observations are the visible outputs of that process. The model assumes that the hidden states follow a Markov process, where the current state depends only on the previous state, and each hidden state generates an observation according to an emission probability distribution. The generative aspect of HMMs comes from their ability to generate new sequences of observations: given the learned parameters of the model (initial state distribution, transition probabilities, and emission probabilities), new sequences can be produced by sampling from it.
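
A minimal sketch of that sampling process, using a hand-specified two-state HMM with three observation symbols; the probabilities here are illustrative assumptions, not learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

start = np.array([0.6, 0.4])          # initial state distribution
trans = np.array([[0.7, 0.3],         # transition probabilities between hidden states
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.4, 0.1],     # emission probabilities of each symbol per state
                 [0.1, 0.3, 0.6]])

def generate(length):
    """Sample a hidden state path and its visible observation sequence."""
    states, obs = [], []
    s = rng.choice(2, p=start)
    for _ in range(length):
        states.append(int(s))
        obs.append(int(rng.choice(3, p=emit[s])))   # observation depends only on the current state
        s = rng.choice(2, p=trans[s])               # next state depends only on the current state
    return states, obs

print(generate(10))
```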

GMM as a generative model
GMM (Gaussian Mixture Model) is a generative model that represents the data as a weighted mixture of Gaussian distributions. Because the model defines a full probability distribution over the data, new data points can be generated by first choosing a mixture component according to its weight and then drawing a sample from that component's Gaussian.
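
A minimal sketch using scikit-learn's GaussianMixture on toy two-cluster data (an illustrative assumption); after fitting, sample() draws new points by first picking a component and then sampling from that component's Gaussian.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data drawn from two clusters (illustrative)
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)), rng.normal(4.0, 0.8, (200, 2))])

# Fit a 2-component GMM, i.e. learn the mixture weights, means, and covariances
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Generate new data points and the component each one came from
new_X, component = gmm.sample(5)
print(new_X)
print(component)
```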

Hebb learning
Hebbian learning, named after Donald Hebb, is a learning rule in neural networks that is often associated with unsupervised learning and can be utilized in generative models. However, Hebbian learning itself is not a generative model but rather a learning principle that can be employed within generative models to facilitate learning. Hebbian learning is based on the idea that "cells that fire together, wire together." It is a local learning rule that adjusts the connection weights between neurons based on the correlation of their activities. In other words, if two neurons consistently activate at the same time, the connection between them is strengthened. This learning rule allows neurons to learn associations and patterns in the input data.
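
A minimal sketch of the Hebbian update Δw = η · x · y for one output neuron; the inputs, correlation pattern, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

w = np.zeros(2)   # weights from two input neurons to one output neuron
eta = 0.1         # learning rate (illustrative)

for _ in range(100):
    x0 = rng.choice([0.0, 1.0])   # presynaptic input correlated with the output
    x1 = rng.choice([0.0, 1.0])   # presynaptic input unrelated to the output
    x = np.array([x0, x1])
    y = x0                        # postsynaptic neuron fires together with input 0
    w += eta * x * y              # Hebb rule: Δw = η · x · y

print(w)  # the weight from the co-active input grows faster than the other
```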

Auto Associative Memory Nets
Auto-associative memory networks, also known as autoassociators or autoencoder networks, are a type of neural network architecture primarily used for unsupervised learning tasks. They are designed to learn an efficient representation of the input data by attempting to reconstruct the original input from a compressed or encoded representation.
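
A minimal sketch of a classical auto-associative memory, assuming bipolar (+1/−1) patterns stored with the Hebbian outer-product rule and recalled in a single linear pass; the patterns are illustrative.

```python
import numpy as np

# Two bipolar patterns to store (illustrative, chosen to be orthogonal)
patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])

# Hebbian outer-product storage, no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Corrupt the first pattern and reconstruct it in one pass
noisy = patterns[0].copy()
noisy[0] *= -1                              # flip one bit
recalled = np.where(W @ noisy >= 0, 1, -1)  # threshold the weighted sum
print(recalled, bool((recalled == patterns[0]).all()))
```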

Hopfield Nets
Hopfield networks, or Hopfield nets, are a type of recurrent neural network (RNN) introduced by John Hopfield in 1982. They are designed to function as content-addressable memory systems or associative memories. Hopfield networks are often used for pattern recognition, pattern completion, and optimization problems.
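
A minimal NumPy sketch of a Hopfield net, assuming two illustrative bipolar patterns: it stores them with the Hebbian rule, then recovers a corrupted pattern with asynchronous updates while the energy decreases.

```python
import numpy as np

# Patterns to memorize (illustrative, orthogonal bipolar vectors)
patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1,  1, -1]])
n = patterns.shape[1]

W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)        # Hebbian storage
np.fill_diagonal(W, 0)         # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

def recall(s, sweeps=20):
    """Asynchronous updates: revisit units until the state stops changing."""
    s = s.copy()
    order = np.random.default_rng(0).permutation(n)
    for _ in range(sweeps):
        prev = s.copy()
        for i in order:
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):
            break
    return s

probe = patterns[0].copy()
probe[:2] *= -1                                # corrupt two bits
print(energy(probe), energy(recall(probe)))    # energy drops as the net settles
print(recall(probe))                           # recovered pattern
```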

Boltzmann machines

Restricted Boltzmann Machines (RBMs)

Deep Belief Nets

Deep Boltzmann Machines

Boltzmann Machines for Real-Valued Data

Convolutional Boltzmann Machines

Boltzmann Machines for Structured or Sequential Outputs

Autoencoders

Variational AEs

Generative Adversarial Nets

Variants of GAN
