
DL - Part - Basic Models


Perceptron:

  1. The perceptron is a fundamental building block of artificial neural networks and is the simplest form of a single-layer neural network.
  2. It is a binary linear classifier that takes multiple inputs, applies weights to them, and computes a weighted sum.
  3. The weighted sum is then passed through an activation function (usually a step function) to produce the output, which is either 0 or 1.
  4. The perceptron learning algorithm adjusts the weights based on the prediction error to learn the optimal decision boundary that separates the input data into different classes.
  5. Perceptrons are limited to linearly separable problems and cannot handle more complex patterns or non-linear relationships.
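The steps above (weighted sum, step activation, error-driven weight updates) can be sketched as follows. This is a minimal illustration in NumPy, not a production implementation; the function names `step` and `train_perceptron`, the learning rate, and the AND-gate training data are all choices made for the example.

```python
import numpy as np

def step(z):
    """Step activation: outputs 1 if the weighted sum is >= 0, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=25):
    """Perceptron learning rule: w += lr * (target - prediction) * x."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])  # small random weights
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(w @ xi + b)   # weighted sum through step function
            err = target - pred       # prediction error drives the update
            w += lr * err * xi
            b += lr * err
    return w, b

# AND gate: linearly separable, so the perceptron can learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [step(w @ xi + b) for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Swapping the targets for XOR (`[0, 1, 1, 0]`) makes the data non-linearly-separable, and the same loop never converges, illustrating the limitation in point 5.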


Multi-Layer Perceptron (MLP):

  1. The Multi-Layer Perceptron (MLP) is an extension of the perceptron that introduces one or more hidden layers between the input and output layers.
  2. MLPs can handle more complex patterns and non-linear relationships in data due to the introduction of non-linear activation functions (such as sigmoid, ReLU, or tanh) in the hidden layers.
  3. Each neuron in the hidden layers computes a weighted sum of inputs, passes it through an activation function, and propagates the output to the next layer.
  4. MLPs are trained with the backpropagation algorithm, which propagates error gradients backward through the network and iteratively adjusts the weights via gradient descent.
  5. MLPs are universal function approximators, meaning they can approximate any continuous function given enough hidden neurons and proper training.
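The MLP pipeline described above (hidden layer with a non-linear activation, backpropagation of error gradients, gradient-descent updates) can be sketched on the XOR problem, which a single perceptron cannot solve. This is an illustrative sketch assuming a sigmoid activation, a mean-squared-error loss, 4 hidden units, and a learning rate of 1.0; all of these are example choices, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, so it needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units (example size)
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass: weighted sums through sigmoid activations
    h = sigmoid(X @ W1 + b1)      # hidden layer
    out = sigmoid(h @ W2 + b2)    # output layer

    # Backward pass: MSE gradient, sigmoid derivative is s * (1 - s)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Evaluate the trained network
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
preds = (out > 0.5).astype(int)
print(preds.ravel())  # targets for XOR are [0, 1, 1, 0]
```

The key difference from the perceptron sketch is the hidden layer and the chain-rule step (`d_h`) that carries the output error back through the non-linear activation.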