Course 6: Deep Learning – Primer (TF/Keras)
About Course
The Deep Learning primer begins with the perceptron and the Multi-Layer Perceptron (MLP) in scikit-learn (sklearn). Sklearn's Perceptron serves as an entry point, providing a fundamental understanding of single-layer neural networks for binary classification tasks.
Building on this, sklearn's MLP expands the scope to multi-layer architectures capable of handling more intricate, non-linear relationships in data. The primer then transitions to TensorFlow (TF) and Keras to delve into deeper concepts of deep learning.
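A hidden layer lets the network model non-linear boundaries that defeat the single-layer perceptron. The classic example is XOR; a small sketch with sklearn's `MLPClassifier` (layer size, activation, and solver are illustrative choices):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR: not linearly separable, so a single-layer perceptron cannot fit it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# One hidden layer of tanh units is enough to carve out the XOR regions
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", random_state=1, max_iter=2000)
mlp.fit(X, y)
print(mlp.predict(X))
```

The `lbfgs` solver is a common choice for tiny datasets like this, where mini-batch gradient descent offers no benefit.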
TF/Keras provides a robust platform, empowering users with the flexibility and scalability required for constructing complex neural networks.
Within this framework, participants are introduced to crucial components such as layers, optimizers, and loss functions.
Through practical exercises, learners gain hands-on experience in constructing and training deep neural networks for a diverse array of tasks, spanning from image classification to natural language processing.
Mastering TF/Keras equips individuals with the foundational knowledge necessary for delving into advanced deep learning methodologies and real-world applications.
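The three components named above (layers, optimizers, loss functions) come together in a Keras model definition. A minimal sketch on synthetic data (the architecture, learning rate, and dataset are assumptions for illustration, not the course's own example):

```python
import numpy as np
from tensorflow import keras

# Synthetic binary-classification data (assumption: 20 input features)
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20)).astype("float32")
y = (X[:, 0] > 0).astype("int32")

# Layers: stacked Dense transformations ending in a sigmoid output
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),    # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # binary output
])

# Optimizer and loss function wired in at compile time
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
preds = model.predict(X, verbose=0)  # probabilities in (0, 1)
```

Swapping any one component (e.g. a different optimizer or a softmax output with categorical cross-entropy) changes the task the same scaffold solves, which is the flexibility the paragraph above refers to.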
Course Content
What are activation functions?
- Heaviside Step Function
- Sigmoid
- Hyperbolic Tangent (Tanh)
- Rectified Linear Unit (ReLU)
- Leaky ReLU
- Parametric ReLU (PReLU)
- Exponential Linear Unit (ELU)
- Softmax
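The activation functions listed above can be sketched in a few lines of NumPy. PReLU has the same form as Leaky ReLU but with a learnable slope, so here it is folded into the `alpha` parameter; this is a reference sketch, not the course's implementation:

```python
import numpy as np

def heaviside(x):
    # Step function: 0 for negative inputs, 1 otherwise
    return (np.asarray(x) >= 0).astype(float)

def sigmoid(x):
    # Squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-np.asarray(x)))

def tanh(x):
    # Squashes inputs into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Zero for negatives, identity for positives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small fixed slope for negatives; PReLU instead learns alpha per unit
    return np.where(np.asarray(x) > 0, x, alpha * np.asarray(x))

def elu(x, alpha=1.0):
    # Smooth exponential curve for negatives, identity for positives
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softmax(x):
    # Converts a vector of scores into a probability distribution
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()
```

Note the max-subtraction in `softmax`: without it, large scores overflow `np.exp` even though the mathematical result is unchanged.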