Flash Sale: Limited-time offer! Use coupon code GET50 to get 50% off this course bundle.
Artificial Intelligence Combo Pack
Upskill yourself in AI, the most in-demand career choice, with our Artificial Intelligence Combo Pack.
Whether you're just learning to code or already have experience, you'll find this course helps you develop your skills and advance your projects.
In this training you will learn various aspects of AI such as Deep Learning with PyTorch, Machine Learning, Artificial Neural Networks, Expert Systems, and Object Detection, with Vernacular Language Signboard Translation as the capstone project.
You will get 12 Deep Learning module certificates and 3 final certificates. This program will prepare you to solve real-world problems using cutting-edge technologies.
Key Features
- Courses taught by IIT Madras professors who are subject-matter experts in the field.
- 100+ hours of self-paced videos with Lifetime Access.
- 15 module certifications.
- Globally Recognized Certificate from GUVI after successful completion
- Dedicated Forum Support from the Instructors
- 100% Online Learning
- Access to community of DL Students and Enthusiasts
- Frequent Kaggle contests
- 7-day Refund Policy
Price: 199 (original price: 799)
Requirements
- Time commitment of 4 hours a week.
- Laptop / desktop with internet access.
Courses Included
- Python
- Machine Learning
- Deep Learning
Who Should Enroll
- IT Professionals
- IT Consultants
- IT Graduates & Freshers
- Any Graduate with Programming Knowledge
Arun Prakash is the Founder & Chief Technology Officer of GUVI. He is a technologist with more than 17 years of experience and has worked with bigwigs like PayPal, Symantec & Honeywell in the past.
Janani holds a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework. After spending years working in tech in the Bay Area, New York, and Singapore at companies such as Microsoft, Google, and Flipkart, Janani decided to combine her love for technology with her passion for teaching. She is now the co-founder of Loonycorn, a content studio focused on providing high-quality content for technical skill development. Loonycorn is working on an engine (patent filed) to automate animations for presentations and educational content.
Mitesh M. Khapra is an Assistant Professor in the Department of Computer Science and Engineering at IIT Madras. His research spans Deep Learning, Multimodal Multilingual Processing, Dialog Systems, and Question Answering. He holds Master's and Ph.D. degrees from IIT Bombay, worked for over 4.5 years at IBM Research, and has published over 25 papers. He is a recipient of the IBM Ph.D. Fellowship, the Microsoft Rising Star Award, and the 2018 Google Faculty Research Award.
Pratyush has been an Assistant Professor in the Department of Computer Science and Engineering at IIT Madras since April 2018. He received his Bachelor's and Master's of Technology in Electrical Engineering from IIT Bombay in 2009 and completed his Ph.D. in Computer Engineering at ETH Zurich in 2014. He then spent over 2.5 years at IBM Research, Bangalore, and a few months consulting on machine learning for startups. His current research focus is hardware-software co-design of deep learning systems. He has authored over 35 research papers and applied for over 20 patents.
Python
35 Lessons
6 hrs
- Why use Python?
- Python IDE
- Hello World Program in Python
- Numbers And Math functions
- Common Errors in Python
- Assignment 1
- Final Quiz
- Variables & Names
- String basics
- Conditional statements
- Assignment 2
- Functions
- For and While (loop)
- Final Quiz
- Assignment 3
- Functions as arguments
- Lists, Tuples and Dictionaries
- List Comprehension
- Assignment 4
- File handling
- Debugging: breakpoints, watch and step-in
- Debugging: step-in and step-out
- Assignment 5
- Debugging: watch variables
- Class and Objects
- Final Quiz
- Assignment 6
- Lambda, Filter and Map
- Python pip
- Read Excel Data in Python
- Python MySQL
- Assignment 7
- Iterators
- Pickling
- Python JSON
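To give a feel for the hands-on exercises, here is a small, runnable sketch (illustrative only, not the course's own assignment code) combining a few topics from the list above: list comprehensions, lambda with filter and map, and file handling.

    # Illustrative sketch of a few module topics (hypothetical example).
    # List comprehension: squares of the even numbers from 0-9
    squares_of_evens = [n * n for n in range(10) if n % 2 == 0]
    print(squares_of_evens)  # [0, 4, 16, 36, 64]

    # Lambda, filter and map: double every value greater than 2
    doubled = list(map(lambda n: n * 2, filter(lambda n: n > 2, [1, 2, 3, 4])))
    print(doubled)  # [6, 8]

    # File handling: write, then read back
    with open("demo.txt", "w") as f:
        f.write("Hello World\n")
    with open("demo.txt") as f:
        print(f.read().strip())  # Hello World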
Machine Learning
17 Lessons
3 hrs
- About Machine Learning Course
- Installation of Anaconda
- What is Machine Learning
- Types of Machine Learning, Supervised Learning and Regression
- Types of ML, Logistic Regression and Unsupervised Learning
- SVM - What is an SVM and how does it work
- SVM - Loading and Examining our dataset
- SVM - Building and Tweaking our SVM Classification model
- What is Decision Tree?
- Building the Decision Tree : Decision Tree Learning
- Building a Decision Tree - Information Gain and Gini Impurity
- Decision Tree Lab: Building our First Decision Tree
- Decision Tree Lab: Viewing and Tweaking our Decision Tree
- What is Overfitting
- Random Forest Lab
- Teamwork
- Avoiding Overfitted Models
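As a taste of the decision-tree labs, the following is a minimal sketch using scikit-learn, which ships with the Anaconda distribution installed at the start of the module. The iris dataset and max_depth value here are placeholder choices, not the course's actual lab setup.

    # Hedged sketch: train and score a small decision tree.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Limiting depth is one simple way to avoid an overfitted model
    clf = DecisionTreeClassifier(criterion="gini", max_depth=3)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))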
DL#101 - Getting Started
37 Lessons
10 hrs
- Python Basics: Google Colaboratory
- Python Basics: Basic Data Types
- Python Basics: Lists
- Python Basics: Tuples, Sets, Dictionaries
- Python Basics: Packages
- Python Basics: File Handling
- Python Basics: Classes
- Python Basics: Numpy
- Python Basics: Plotting
- Expert Systems & 6 Jars: Expert Systems
- Expert Systems & 6 Jars: Say Hi To ML
- Expert Systems & 6 Jars: Introduction
- Expert Systems & 6 Jars: Data
- Expert Systems & 6 Jars: Tasks
- Expert Systems & 6 Jars: Models
- Expert Systems & 6 Jars: Loss Function
- Expert Systems & 6 Jars: Learning Algorithm
- Expert Systems & 6 Jars: Evaluation
- Expert Systems & 6 Jars: Six Jars Summary (Part 1)
- Expert Systems & 6 Jars: Six Jars Summary (Part 2)
- Vectors & Matrices: Introduction to Vectors
- Vectors & Matrices: Dot product of vectors
- Vectors & Matrices: Unit Vectors
- Vectors & Matrices: Projection of one vector onto another
- Vectors & Matrices: Angle between two vectors
- Vectors & Matrices: Why do we care about vectors?
- Vectors & Matrices: Introduction to Matrices
- Vectors & Matrices: Multiplying a vector by a matrix
- Vectors & Matrices: Multiplying a matrix by another matrix
- Vectors & Matrices: An alternate way of multiplying two matrices
- Vectors & Matrices: Why do we care about matrices?
- Python Basics + Linear Algebra: Google Drive and Colab Integration
- Python Basics + Linear Algebra: Pandas
- Python Basics + Linear Algebra: Python Debugger
- Python Basics + Linear Algebra: Plotting Vectors
- Python Basics + Linear Algebra: Vector Addition and Subtraction
- Python Basics + Linear Algebra: Vector Dot Product
DL#102 - Primitive Neurons
38 Lessons
10 hrs
- MP Neuron: Introduction
- MP Neuron: Model
- MP Neuron: Data & Task
- MP Neuron: Loss
- MP Neuron: Learning
- MP Neuron: Evaluation
- MP Neuron: Geometry Basics
- MP Neuron: Geometric Interpretation
- MP Neuron: Summary
- Perceptron: Introduction
- Perceptron: Data & Task
- Perceptron: Model
- Perceptron: Geometric Interpretation
- Perceptron: Loss Function
- Perceptron: Learning - General Recipe
- Perceptron: Learning Algorithm
- Perceptron: Learning - Why it Works?
- Perceptron: Learning - Will it Always Work?
- Perceptron: Evaluation
- Perceptron: Summary
- MP Neurons & Perceptron using Python: Toy Example (Perceptron)
- MP Neurons & Perceptron using Python: Loading Data
- MP Neurons & Perceptron using Python: Train-Test Split
- MP Neurons & Perceptron using Python: Binarization
- MP Neurons & Perceptron using Python: Inference And Search
- MP Neurons & Perceptron using Python: Inference
- MP Neurons & Perceptron using Python: Class
- MP Neurons & Perceptron using Python: Perceptron Class
- MP Neurons & Perceptron using Python: Epochs
- MP Neurons & Perceptron using Python: Checkpointing
- MP Neurons & Perceptron using Python: Learning Rate
- MP Neurons & Perceptron using Python: Weight Animation
- MP Neurons & Perceptron using Python: Exercises
- Contest - Mobile Phone Like/Dislike predictor: Introduction to Contests.
- Contest - Mobile Phone Like/Dislike predictor: Creating a Kaggle account
- Contest - Mobile Phone Like/Dislike predictor: Data preprocessing
- Contest - Mobile Phone Like/Dislike predictor: Submitting Entries
- Contest - Mobile Phone Like/Dislike predictor: Clarifications
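The perceptron learning algorithm taught in this module fits in a few lines of NumPy. The sketch below uses made-up toy data and is illustrative only, not the module's notebook code:

    import numpy as np

    # Toy, linearly separable data (hypothetical values)
    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, 0, 0])
    X_aug = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    w = np.zeros(3)

    for epoch in range(10):  # a fixed number of epochs
        for x_i, y_i in zip(X_aug, y):
            pred = int(np.dot(w, x_i) >= 0)
            if y_i == 1 and pred == 0:    # false negative: add the input
                w = w + x_i
            elif y_i == 0 and pred == 1:  # false positive: subtract it
                w = w - x_i
    print("learned weights:", w)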
DL#103 - Sigmoid Neuron
57 Lessons
10 hrs
- 6 Jars of Sigmoid Neuron (I): Recap from last lecture
- 6 Jars of Sigmoid Neuron (I): Revisiting limitations of perceptron model
- 6 Jars of Sigmoid Neuron (I): Sigmoid Model Part 1
- 6 Jars of Sigmoid Neuron (I): Sigmoid Model Part 2
- 6 Jars of Sigmoid Neuron (I): Sigmoid Model Part 3
- 6 Jars of Sigmoid Neuron (I): Sigmoid Model Part 4
- 6 Jars of Sigmoid Neuron (I): Sigmoid: Data and Tasks
- 6 Jars of Sigmoid Neuron (I): Sigmoid: Loss Function
- 6 Jars of Sigmoid Neuron (I): Learning: Intro to Learning Algorithm
- 6 Jars of Sigmoid Neuron (I): Learning: Learning by guessing
- 6 Jars of Sigmoid Neuron (I): Learning - Error Surfaces for learning
- 6 Jars of Sigmoid Neuron (I): Learning - Mathematical setup for the learning algorithm
- 6 Jars of Sigmoid Neuron (I): Learning - The math-free version of learning algorithm
- 6 Jars of Sigmoid Neuron (II): Learning - Introducing Taylor Series
- 6 Jars of Sigmoid Neuron (II): Learning - More intuitions about Taylor series
- 6 Jars of Sigmoid Neuron (II): Learning - Deriving the Gradient Descent Update rule
- 6 Jars of Sigmoid Neuron (II): Learning - The complete learning algorithm
- 6 Jars of Sigmoid Neuron (II): Learning - Computing Partial Derivatives
- 6 Jars of Sigmoid Neuron (II): Learning - Writing the code
- 6 Jars of Sigmoid Neuron (II): Sigmoid - Dealing with more than 2 parameters
- 6 Jars of Sigmoid Neuron (II): Sigmoid - Evaluation
- 6 Jars of Sigmoid Neuron (II): Summary and take-aways
- Sigmoid Neurons Using Python (I): Plotting Sigmoid 2D
- Sigmoid Neurons Using Python (I): Plotting Sigmoid 3D
- Sigmoid Neurons Using Python (I): Plotting Loss
- Sigmoid Neurons Using Python (I): Contour Plot
- Sigmoid Neurons Using Python (I): Class
- Sigmoid Neurons Using Python (I): Toy Data Fit
- Sigmoid Neurons Using Python (I): Toy Data Plot - 1/2
- Sigmoid Neurons Using Python (I): Toy Data Plot - 2/2
- Sigmoid Neurons Using Python (II): Loading Data
- Sigmoid Neurons Using Python (II): Standardisation
- Sigmoid Neurons Using Python (II): Test/Train Split (1/2)
- Sigmoid Neurons Using Python (II): Test/Train Split (2/2)
- Sigmoid Neurons Using Python (II): Fitting Data
- Sigmoid Neurons Using Python (II): Loss Plot
- Sigmoid Neurons Using Python (II): Progress Bar
- Sigmoid Neurons Using Python (II): Exercises
- Probability Theory Introduction
- Probability Theory: Random Variable - Intuition
- Probability Theory: Random Variable - Formal Definition
- Probability Theory: Random Variable - Continuous and Discrete
- Probability Theory: Probability Distribution
- Probability Theory: True and Predicted Distribution
- Probability Theory: Certain Events
- Probability Theory: Why Do we Care About Distributions
- Information Theory and Cross Entropy: Expectation
- Information Theory and Cross Entropy: Information Content
- Information Theory and Cross Entropy: Entropy
- Information Theory and Cross Entropy: Relation To Number Of Bits
- Information Theory and Cross Entropy: KL-Divergence and Cross Entropy
- Information Theory and Cross Entropy: Sigmoid Neuron and Cross Entropy
- Information Theory and Cross Entropy: Using Cross Entropy With Sigmoid Neuron
- Information Theory and Cross Entropy: Learning Algorithm for Cross Entropy loss function
- Information Theory and Cross Entropy: Computing partial derivatives with cross entropy loss
- Information Theory and Cross Entropy: Code for Cross Entropy Loss function
- Contest - Text, Non-Text Classification: Overview
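To show how the sigmoid model, cross-entropy loss and gradient descent fit together, here is a hedged sketch on toy one-dimensional data (not the course notebooks). With a sigmoid output and cross-entropy loss, the gradient with respect to the pre-activation reduces to (y_hat - y):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical 1-D inputs and binary labels
    x = np.array([0.5, 1.5, 2.5, 3.5])
    y = np.array([0.0, 0.0, 1.0, 1.0])
    w, b, lr = 0.0, 0.0, 0.5

    for epoch in range(1000):
        y_hat = sigmoid(w * x + b)
        grad = y_hat - y               # dL/dz for sigmoid + cross-entropy
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

    loss = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    print(f"w={w:.2f} b={b:.2f} cross-entropy={loss:.3f}")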
DL#104 - Feedforward Neural Networks
25 Lessons
10 hrs
- Representation Power of function: Why do we need complex functions
- Representation Power of function: Complex functions in real world examples
- Representation Power of function: A simple recipe for building complex functions
- Representation Power of function: Illustrative Proof of Universal Approximation Theorem
- Representation Power of function: Summary
- Deep Neural Networks: Setting the context
- Deep Neural Networks: Data and Tasks
- Deep Neural Networks: Model: A simple deep neural network
- Deep Neural Networks: Model: A generic deep neural network
- Deep Neural Networks: Model: Understanding the computations in a deep neural network
- Deep Neural Networks: Model: The output layer of a deep neural network
- Deep Neural Networks: Model: Output layer of a multi-class classification problem
- Deep Neural Networks: Model: How do you choose the right network configuration
- Deep Neural Networks: Loss function for binary classification
- Deep Neural Networks: Loss function for multi-class classification
- Deep Neural Networks: Learning Algorithm (non-mathy version)
- Deep Neural Networks: Evaluation
- Deep Neural Networks: Summary
- DNNs using Python: Outline
- DNNs using Python: Generating Data
- DNNs using Python: Classification with Sigmoid Neuron
- DNNs using Python: Classification with FF Network
- DNNs using Python: Generic Class of FF Neuron
- DNNs using Python: Multi Class Classification with FF Network
- DNNs using Python: Exercise
DL#105 - Training Feedforward Neural Networks
41 Lessons
10 hrs
- Backpropagation - the light math version: Setting the context
- Backpropagation - the light math version: Revisiting Basic Calculus
- Backpropagation - the light math version: Why do we care about the chain rule of derivatives
- Backpropagation - the light math version: Applying chain rule across multiple paths
- Backpropagation - the light math version: Applying Chain rule in a neural network
- Backpropagation - the light math version: Computing Partial Derivatives w.r.t. a weight - Part 1
- Backpropagation - the light math version: Computing Partial Derivatives w.r.t. a weight - Part 2
- Backpropagation - the light math version: Computing Partial Derivatives w.r.t. a weight - Part 3
- Backpropagation - the light math version: Computing Partial Derivatives w.r.t. a weight when there are multiple paths
- Backpropagation - the light math version: Takeaways and what next?
- Backpropagating using Python: Outline
- Backpropagating using Python: Single Weight Update
- Backpropagating using Python: Single Weight Training
- Backpropagating using Python: Multiple Weight Update
- Backpropagating using Python: Visualising Outputs
- Backpropagating using Python: Visualising Weights
- Backpropagating using Python: Backpropagation for Multiple Class Classification
- Backpropagating using Python: Shortened Backpropagation Code
- Backpropagating using Python: Exercises
- Backpropagation - the full version: Errata from last theory slot
- Backpropagation - the full version: Setting the Context
- Backpropagation - the full version: Intuition behind backpropagation
- Backpropagation - the full version: Understanding the dimensions of gradients
- Backpropagation - the full version: Computing Derivatives w.r.t. Output Layer - Part 1
- Backpropagation - the full version: Computing Derivatives w.r.t. Output Layer - Part 2
- Backpropagation - the full version: Computing Derivatives w.r.t. Output Layer - Part 3
- Backpropagation - the full version: Quick recap of the story so far
- Backpropagation - the full version: Computing Derivatives w.r.t. Hidden Layers - Part 1
- Backpropagation - the full version: Computing Derivatives w.r.t. Hidden Layers - Part 2
- Backpropagation - the full version: Computing Derivatives w.r.t. Hidden Layers - Part 3
- Backpropagation - the full version: Computing derivatives w.r.t. one weight in any layer
- Backpropagation - the full version: Computing derivatives w.r.t. all weights in any layer
- Backpropagation - the full version: A running example of backpropagation
- Backpropagation - the full version: Summary
- Backpropagating - the full version using Python: Outline
- Backpropagating - the full version using Python: Benefits of Vectorisation
- Backpropagating - the full version using Python: Scalar Class - Recap
- Backpropagating - the full version using Python: Vectorising weights
- Backpropagating - the full version using Python: Vectorising inputs and weights
- Backpropagating - the full version using Python: Evaluation of Classes
- Backpropagating - the full version using Python: Exercises
DL#106 - Optimization Algorithms
70 Lessons
10 hrs
- Variants of Gradient Descent: A quick history of DL to set the context
- Variants of Gradient Descent: Highlighting a limitation of Gradient Descent
- Variants of Gradient Descent: A deeper look into the limitation of gradient descent
- Variants of Gradient Descent: Introducing contour maps
- Variants of Gradient Descent: Exercise: Guess the 3D surface
- Variants of Gradient Descent: Visualizing gradient descent on a 2D contour map
- Variants of Gradient Descent: Intuition for momentum based gradient descent
- Variants of Gradient Descent: Dissecting the update rule for momentum based gradient descent
- Variants of Gradient Descent: Running and visualizing momentum based gradient descent
- Variants of Gradient Descent: A disadvantage of momentum based gradient descent
- Variants of Gradient Descent: Intuition behind nesterov accelerated gradient descent
- Variants of Gradient Descent: Running and visualizing nesterov accelerated gradient descent
- Variants of Gradient Descent: Summary and what next
- Variants of Gradient Descent: The idea of stochastic and mini-batch gradient descent
- Variants of Gradient Descent: Running stochastic gradient descent
- Variants of Gradient Descent: Running mini-batch gradient descent
- Variants of Gradient Descent: Epochs and Steps
- Variants of Gradient Descent: Why do we need an adaptive learning rate?
- Variants of Gradient Descent: Introducing Adagrad
- Variants of Gradient Descent: Running and Visualizing Adagrad
- Variants of Gradient Descent: A limitation of Adagrad
- Variants of Gradient Descent: Running and visualizing RMSProp
- Variants of Gradient Descent: Running and visualizing Adam
- Variants of Gradient Descent: Summary
- Implementing Optimization Algorithms Using Python: Outline
- Implementing Optimization Algorithms Using Python: Modified Sigmoid Neuron Class
- Implementing Optimization Algorithms Using Python: Setup for Plotting
- Implementing Optimization Algorithms Using Python: Gradient Descent Algorithm
- Implementing Optimization Algorithms Using Python: GD Algorithm - Contour Plot
- Implementing Optimization Algorithms Using Python: Momentum
- Implementing Optimization Algorithms Using Python: Nesterov Accelerated GD
- Implementing Optimization Algorithms Using Python: Mini-Batch GD
- Implementing Optimization Algorithms Using Python: AdaGrad
- Implementing Optimization Algorithms Using Python: RMSProp
- Implementing Optimization Algorithms Using Python: Adam
- Implementing Optimization Algorithms Using Python: Vectorised Class Recap
- Implementing Optimization Algorithms Using Python: Vectorised GD Algorithms
- Implementing Optimization Algorithms Using Python: Performance of Different Algorithms
- Implementing Optimization Algorithms Using Python: Good solutions and Exercise
- Activation Functions & Initialization Methods: Setting the context
- Activation Functions & Initialization Methods: Saturation in logistic neuron
- Activation Functions & Initialization Methods: Zero centered functions
- Activation Functions & Initialization Methods: Introducing Tanh and ReLU activation functions
- Activation Functions & Initialization Methods: Tanh and ReLU Activation Functions
- Activation Functions & Initialization Methods: Symmetry Breaking Problem
- Activation Functions & Initialization Methods: Xavier and He initialization
- Activation Functions & Initialization Methods: Summary and what next
- Hands-on - activation functions & initialization methods: Introduction and Activation Functions
- Hands-on - activation functions & initialization methods: Activation Functions
- Hands-on - activation functions & initialization methods: Plotting Setup
- Hands-on - activation functions & initialization methods: Sigmoid
- Hands-on - activation functions & initialization methods: Tanh
- Hands-on - activation functions & initialization methods: ReLU
- Hands-on - activation functions & initialization methods: Leaky ReLU
- Hands-on - activation functions & initialization methods: Exercises
- Regularization: Simple v/s complex models
- Regularization: Analysing the behavior of simple and complex models
- Regularization: Bias and Variance
- Regularization: Test error due to high bias and high variance
- Regularization: Overfitting in deep neural networks
- Regularization: A detour into hyperparameter tuning
- Regularization: L2 regularization
- Regularization: Dataset Augmentation and Early Stopping
- Regularization: Summary
- Hands-on - Regularization: Outline and Libraries
- Hands-on - Regularization: L2 Regularisation in Code
- Hands-on - Regularization: Bias on Increasing Model Complexity
- Hands-on - Regularization: L2 Regularisation in Action
- Hands-on - Regularization: Adding Noise to Input Features
- Hands-on - Regularization: Early Stopping and Exercises
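The gradient-descent variants above differ mainly in their update rules. Below is an illustrative, not-from-the-course comparison of plain and momentum-based gradient descent minimising the toy function f(x) = x**2, with made-up hyperparameters:

    def grad(x):
        return 2 * x  # derivative of f(x) = x**2

    lr = 0.1

    # Plain gradient descent: x <- x - lr * grad(x)
    x = 5.0
    for _ in range(200):
        x -= lr * grad(x)

    # Momentum: v <- gamma * v + lr * grad(x); x <- x - v
    x_m, v, gamma = 5.0, 0.0, 0.9
    for _ in range(200):
        v = gamma * v + lr * grad(x_m)
        x_m -= v

    print(f"plain GD: {x:.6f}, momentum GD: {x_m:.6f}")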
DL#107 - Introduction to Pytorch
15 Lessons
10 hrs
- Basics of Pytorch: Outline
- Basics of Pytorch: PyTorch Tensors
- Basics of Pytorch: Simple Tensor Operations
- Basics of Pytorch: NumPy vs PyTorch
- Basics of Pytorch: GPU PyTorch
- Basics of Pytorch: Automatic Differentiation
- Basics of Pytorch: Loss Function with AutoDiff
- Basics of Pytorch: Learning Loop GPU
- FNNs using Pytorch: Outline
- FNNs using Pytorch: Forward Pass With Tensors
- FNNs using Pytorch: Functions for Loss, Accuracy, Backpropagation
- FNNs using Pytorch: PyTorch Modules - NN and Optim
- FNNs using Pytorch: NN Sequential and Code Structure
- FNNs using Pytorch: GPU Execution
- FNNs using Pytorch: Exercises and Recap
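A minimal autograd sketch in PyTorch, in the spirit of the "Automatic Differentiation" and "GPU PyTorch" lessons (the tensors and target values are made up; this is not the module's code):

    import torch

    x = torch.tensor([1.0, 2.0, 3.0])
    target = torch.tensor([3.0, 5.0, 7.0])
    w = torch.tensor(2.0, requires_grad=True)
    b = torch.tensor(0.5, requires_grad=True)

    loss = ((w * x + b - target) ** 2).mean()  # mean squared error
    loss.backward()                            # autograd fills in .grad
    print(w.grad, b.grad)

    # Run on a GPU when one is available, as in the "GPU PyTorch" lesson
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = x.to(device)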
DL#108 - Convolutional Neural Networks
24 Lessons
10 hrs
- The convolution operation: Setting the Context
- The convolution operation: The 1D convolution operation
- The convolution operation: The 2D Convolution Operation
- The convolution operation: Examples of 2D convolution
- The convolution operation: 2D convolution with a 3D filter
- The convolution operation: Terminology
- The convolution operation: Padding and Stride
- From convolution operation to neural networks: How is the convolution operation related to Neural Networks - Part 1
- From convolution operation to neural networks: How is the convolution operation related to Neural Networks - Part 2
- From convolution operation to neural networks: How is the convolution operation related to Neural Networks - Part 3
- From convolution operation to neural networks: Understanding the input/output dimensions
- From convolution operation to neural networks: Sparse Connectivity and Weight Sharing
- From convolution operation to neural networks: Max Pooling and Non-Linearities
- From convolution operation to neural networks: Our First Convolutional Neural Network (CNN)
- From convolution operation to neural networks: Training CNNs
- From convolution operation to neural networks: Summary and what next
- CNNs in Pytorch: Outline
- CNNs in Pytorch: Loading Data Sets
- CNNs in Pytorch: Visualising Weights
- CNNs in Pytorch: Single Convolutional Layer
- CNNs in Pytorch: Deep CNNs
- CNNs in Pytorch: LeNet
- CNNs in Pytorch: Training LeNet
- CNNs in Pytorch: Visualising Intermediate Layers, Exercises
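To illustrate the layer arithmetic this module covers, here is a one-layer sketch (toy shapes, not course code) showing how kernel size, padding, stride and max pooling determine the output dimensions:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3,
                     stride=1, padding=1)  # padding=1 keeps the 32x32 size
    pool = nn.MaxPool2d(kernel_size=2)     # halves height and width

    x = torch.randn(1, 3, 32, 32)          # a batch of one RGB image
    out = pool(torch.relu(conv(x)))
    print(out.shape)                       # torch.Size([1, 8, 16, 16])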
DL#109 - Deep Convolutional Neural Networks
61 Lessons
10 hrs
- CNN Architectures - Part 1: Setting the context
- CNN Architectures - Part 1: The Imagenet Challenge
- CNN Architectures - Part 1: Understanding the first layer of AlexNet
- CNN Architectures - Part 1: Understanding all layers of AlexNet
- CNN Architectures - Part 1: ZFNet
- CNN Architectures - Part 1: VGGNet
- CNN Architectures - Part 1: Summary
- CNN Architectures - Part 2: Setting the context
- CNN Architectures - Part 2: Number of computations in a convolution layer
- CNN Architectures - Part 2: 1x1 Convolutions
- CNN Architectures - Part 2: The Intuition behind GoogLeNet
- CNN Architectures - Part 2: The Inception Module
- CNN Architectures - Part 2: The GoogLeNet Architecture
- CNN Architectures - Part 2: Average Pooling
- CNN Architectures - Part 2: Auxiliary Loss for training a deep network
- CNN Architectures - Part 2: ResNet
- Building CNN Architectures Using Pytorch: Outline
- Building CNN Architectures Using Pytorch: Image Transforms
- Building CNN Architectures Using Pytorch: VGG
- Building CNN Architectures Using Pytorch: Training VGG
- Building CNN Architectures Using Pytorch: Pre-trained Models
- Building CNN Architectures Using Pytorch: Checkpointing Models
- Building CNN Architectures Using Pytorch: ResNet
- Building CNN Architectures Using Pytorch: Inception Part 1
- Building CNN Architectures Using Pytorch: Inception Part 2
- Building CNN Architectures Using Pytorch: Exercises
- Visualising CNNs: Receptive field of a neuron
- Visualising CNNs: Identifying images which cause certain neurons to fire
- Visualising CNNs: Visualising filters
- Visualising CNNs: Occlusion experiments
- Visualising CNNs Using Python: Outline
- Visualising CNNs Using Python: Custom Torchvision Dataset
- Visualising CNNs Using Python: Visualising inputs
- Visualising CNNs Using Python: Occlusion
- Visualising CNNs Using Python: Visualising filters
- Visualising CNNs Using Python: Visualising filters - code
- Batch Normalization and Dropout: Normalizing inputs
- Batch Normalization and Dropout: Why should we normalize the inputs
- Batch Normalization and Dropout: Batch Normalization
- Batch Normalization and Dropout: Learning Mu and Sigma
- Batch Normalization and Dropout: Ensemble Methods
- Batch Normalization and Dropout: The idea of dropout
- Batch Normalization and Dropout: Training without dropout
- Batch Normalization and Dropout: How does weight sharing help?
- Batch Normalization and Dropout: Using dropout at test time
- Batch Normalization and Dropout: How does dropout act as a regularizer?
- Batch Normalization and Dropout: Summary and what next?
- Batch Normalization and Dropout Using Python: Outline and Dataset
- Batch Normalization and Dropout Using Python: Batch Norm Layer
- Batch Normalization and Dropout Using Python: Batch Norm Visualisation
- Batch Normalization and Dropout Using Python: Batch Norm 2d
- Batch Normalization and Dropout Using Python: Dropout layer
- Batch Normalization and Dropout Using Python: Dropout Visualisation and Exercises
- Hyperparameter Tuning and MLFlow: Outline
- Hyperparameter Tuning and MLFlow: Colab on Local Runtime
- Hyperparameter Tuning and MLFlow: MLFlow installation and basic usage
- Hyperparameter Tuning and MLFlow: Hyperparameter Tuning
- Hyperparameter Tuning and MLFlow: Refined Search for Hyperparameters
- Hyperparameter Tuning and MLFlow: Logging Image Artifacts
- Hyperparameter Tuning and MLFlow: Logging and Loading Models
- Hyperparameter Tuning and MLFlow: One Last Visualisation
DL#110 - Sequence Models
61 Lessons
13 hrs
- Sequence Learning Problems: Setting the context
- Sequence Learning Problems: Introduction to sequence learning problems
- Sequence Learning Problems: Some more examples of sequence learning problems
- Sequence Learning Problems: Sequence learning problems using video and speech data
- Sequence Learning Problems: A wishlist for modelling sequence learning problems
- Sequence Learning Problems: Intuition behind RNNs - Part 1
- Sequence Learning Problems: Intuition behind RNNs - Part 2
- Sequence Learning Problems: Introducing RNNs
- Sequence Learning Problems: Summary and what next
- Recurrent Neural Networks: Setting the context
- Recurrent Neural Networks: Data and Tasks - Sequence Classification - Part 1
- Recurrent Neural Networks: Data and Tasks - Sequence Classification - Part 2
- Recurrent Neural Networks: A clarification about padding
- Recurrent Neural Networks: Data and Tasks - Sequence Labelling
- Recurrent Neural Networks: Model
- Recurrent Neural Networks: Loss Function
- Recurrent Neural Networks: Learning Algorithm
- Recurrent Neural Networks: Learning Algorithm - Derivatives w.r.t. V
- Recurrent Neural Networks: Learning Algorithm - Derivatives w.r.t. W
- Recurrent Neural Networks: Evaluation
- Recurrent Neural Networks: Summary and what next
- Vanishing and Exploding Gradients: Revisiting the gradient w.r.t. W
- Vanishing and Exploding Gradients: Zooming into one element of the chain rule - Part 1
- Vanishing and Exploding Gradients: Zooming into one element of the chain rule - Part 2
- Vanishing and Exploding Gradients: A small detour to calculus
- Vanishing and Exploding Gradients: Looking at the magnitude of the derivative
- Vanishing and Exploding Gradients: Exploding and vanishing gradients
- Vanishing and Exploding Gradients: Summary and what next
- LSTMs and GRUs: Dealing with longer sequences
- LSTMs and GRUs: The white board analogy
- LSTMs and GRUs: Real world example of longer sequences
- LSTMs and GRUs: Going back to RNNs
- LSTMs and GRUs: Selective Write - Part 1
- LSTMs and GRUs: Selective Write - Part 2
- LSTMs and GRUs: Selective Read
- LSTMs and GRUs: Selective forget
- LSTMs and GRUs: An example computation with LSTMs
- LSTMs and GRUs: Gated recurrent units
- LSTMs and GRUs: Summary and what next
- Sequence Models in PyTorch: Outline
- Sequence Models in PyTorch: Dataset and Task
- Sequence Models in PyTorch: RNN Model
- Sequence Models in PyTorch: Inference on RNN
- Sequence Models in PyTorch: Training RNN
- Sequence Models in PyTorch: Training Setup
- Sequence Models in PyTorch: LSTM
- Sequence Models in PyTorch: GRU and Exercises
- Addressing the problem of vanishing and exploding gradients: Quick Recap
- Addressing the problem of vanishing and exploding gradients: Intuition: How gates help to solve the problem of vanishing gradients
- Addressing the problem of vanishing and exploding gradients: Revisiting vanishing gradients in RNNs
- Addressing the problem of vanishing and exploding gradients: Dependency diagram for LSTMs
- Addressing the problem of vanishing and exploding gradients: Computing the gradient
- Addressing the problem of vanishing and exploding gradients: When do the gradients vanish?
- Addressing the problem of vanishing and exploding gradients: Dealing with exploding gradients
- Addressing the problem of vanishing and exploding gradients: Summary and what next
- Batching for Sequence Models in Pytorch: Overview
- Batching for Sequence Models in Pytorch: Recap on Sequence Models
- Batching for Sequence Models in Pytorch: Batching for Sequence Models
- Batching for Sequence Models in Pytorch: Padding Vector Representations
- Batching for Sequence Models in Pytorch: Packing in PyTorch
- Batching for Sequence Models in Pytorch: Training with Batched Input
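A shape-level sketch of an LSTM in PyTorch (illustrative tensors only; the module's dataset and sizes differ):

    import torch
    import torch.nn as nn

    # Batch of 4 sequences, 10 time steps, 8 features per step (made up)
    lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(4, 10, 8)

    out, (h_n, c_n) = lstm(x)
    print(out.shape)   # torch.Size([4, 10, 16]) - hidden state at every step
    print(h_n.shape)   # torch.Size([1, 4, 16])  - final hidden state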
DL#111 - Encoder Decoder Models
25 Lessons
5 hrs
- Neural Encoders and Decoders: Setting the context
- Neural Encoders and Decoders: Revisiting the task of language modelling
- Neural Encoders and Decoders: Using RNNs for language modelling
- Neural Encoders and Decoders: Introducing Encoder Decoder Model
- Neural Encoders and Decoders: Connecting encoder decoder models to the six jars
- Neural Encoders and Decoders: A compact notation for RNNs, LSTMs and GRUs
- Neural Encoders and Decoders: Encoder decoder model for image captioning
- Neural Encoders and Decoders: Six jars for image captioning
- Neural Encoders and Decoders: Encoder decoder for Machine translation
- Neural Encoders and Decoders: Encoder decoder model for transliteration
- Neural Encoders and Decoders: Summary
- Attention Mechanism: Motivation for attention mechanism
- Attention Mechanism: Attention mechanism with an oracle
- Attention Mechanism: A model for attention
- Attention Mechanism: The attention function
- Attention Mechanism: Machine translation with attention
- Attention Mechanism: Summary and what next
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Outline
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Data set and Task
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Data Ingestion - XML processing
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Encoder Decoder Model - 1
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Encoder Decoder Model - 2
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Adding Attention - 1
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Adding Attention - 2
- Encoder Decoder Models and Attention Mechanism Using Pytorch: Model Evaluation and Exercises
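As a rough sketch of the attention function discussed above: score the decoder state against each encoder state, softmax the scores, and take the weighted sum. Dot-product scoring is used here as one common choice; the tensors are toy values, not the course's model:

    import torch
    import torch.nn.functional as F

    encoder_states = torch.randn(6, 16)   # 6 input positions (hypothetical)
    decoder_state = torch.randn(16)

    scores = encoder_states @ decoder_state            # one score per position
    alphas = F.softmax(scores, dim=0)                  # attention weights
    context = (alphas.unsqueeze(1) * encoder_states).sum(dim=0)
    print(context.shape)                               # torch.Size([16])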
DL#112 - Introduction to Object Detection
12 Lessons
3 hrs
- RCNN: More clarity on regression
- RCNN: Setting the context
- RCNN: A typical pipeline for object detection
- RCNN: Region Proposal
- RCNN: Feature Extraction
- RCNN: Classification
- RCNN: Regression
- RCNN: Training
- YOLO: Introduction to YOLO
- YOLO: The Output of YOLO
- YOLO: Training
- Object Detection: Summary and what next
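One standard building block behind detectors such as RCNN and YOLO is intersection-over-union (IoU), the box-overlap measure used during training and evaluation. A hedged helper sketch (not from the course), with boxes as (x1, y1, x2, y2):

    def iou(box_a, box_b):
        # Coordinates of the intersection rectangle
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = ~0.143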
DL#113 - Capstone Project
1 Lesson
1 hr
"This course is the best as it focuses both on the theory and hands on.This course introduced me to Kaggle competitions and I got addicted to it.I feel more confident that I can contribute to real world projects involving deep learning after taking this course."
- Praveen R
"Learned a lot from this course. Before starting this course, I have no knowledge of deep learning but after learning this course I am pretty confident."
- Vudata Rohit
"This course can turn you into a deep learning enthusiast if you have the urge to learn something new (even if you don't know anything about deep learning at all)."
- Kanumuri Sri Naga Sai Ajith
- Automates processes: Artificial Intelligence allows machines to carry out repetitive, routine and process-optimization tasks automatically, without human intervention.
- Enhances creative tasks: AI frees people from routine and repetitive tasks and allows them to spend more time on creative work.
- Helps in daily applications: Everyday assistants such as Apple's Siri, Microsoft's Cortana and Google's OK Google are part of our daily routine, whether for searching a location, taking a selfie, making a phone call or replying to an email.
- Quick Learning: With around 5-8 hours of study per week over roughly 6 months, learners can progress rapidly from novice to intermediate/adept level with this AI Combo Pack.
How is this different from other courses?
Will I get a certificate on completing a course?
What are the pre-requisites?
How can I ask my doubts?
