Deep Learning - CNN - Convolutional Neural Network - CNN Architecture Tutorial

CNN Architecture - A CNN has three types of layers, namely convolutional, pooling, and fully connected layers.

This is the most common architecture. We can also modify it by varying the convolutional layers, the feature maps (number of filters), the stride, the padding, the FC nodes and layers, the activation function, dropout, and batch normalization.

Different CNN architectures (most of them, apart from LeNet, became well known through the ImageNet challenge) are:

1] LeNet - Yann LeCun

2] AlexNet

3] GoogLeNet

4] VGGNet

5] ResNet

6] Inception

1] LeNet-5

Input layer (32 x 32) →

Layer 1 → Convolutional layer (5 x 5, 6 filters) + average pooling layer (receptive field of size 2 x 2, stride of 2) - (28 x 28 x 6) to (14 x 14 x 6)

Layer 2 → Convolutional layer (5 x 5, 16 filters) + average pooling layer (receptive field of size 2 x 2, stride of 2) - (10 x 10 x 16) to (5 x 5 x 16) → Flatten (1-D tensor) - 400

Layer 3 → Fully connected layer (120 nodes) - 400 x 120

Layer 4 → Fully connected layer (84 nodes) - 120 x 84

Layer 5 → Softmax layer - 84 x 10

That is the reason it is called LeNet-5: it has 5 layers with trainable weights (2 convolutional and 3 fully connected), which a model.summary() call will confirm.
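As a minimal sketch of this network (assuming the Keras Sequential API, a 32 x 32 x 1 grayscale input, and tanh activations as in the original paper; none of these details are fixed above):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),            # 32 x 32 grayscale input (assumed)
    # Layer 1: 6 filters of 5 x 5 -> 28 x 28 x 6,
    # then 2 x 2 average pooling with stride 2 -> 14 x 14 x 6
    layers.Conv2D(6, kernel_size=5, activation="tanh"),
    layers.AveragePooling2D(pool_size=2, strides=2),
    # Layer 2: 16 filters of 5 x 5 -> 10 x 10 x 16,
    # then pooling -> 5 x 5 x 16
    layers.Conv2D(16, kernel_size=5, activation="tanh"),
    layers.AveragePooling2D(pool_size=2, strides=2),
    layers.Flatten(),                           # 5 * 5 * 16 = 400
    layers.Dense(120, activation="tanh"),       # Layer 3
    layers.Dense(84, activation="tanh"),        # Layer 4
    layers.Dense(10, activation="softmax"),     # Layer 5: 10 classes
])

model.summary()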

 

CNN (Convolutional Neural Network) vs ANN (Artificial Neural Network)

The problems faced by an ANN are:

1] Computational Cost

2] Overfitting

3] Loss of important features like spatial arrangement of pixels

For MNIST data:

In an ANN, the MNIST image is first flattened to 1D → then it is passed through fully connected layers → and finally we get the resulting answer.

In a CNN, the MNIST image keeps its 2D shape → it is passed through the filters (convolutional layer) → the result is a feature map, to which a bias is added → the feature map is sent through an activation function like ReLU → then a max pooling layer is applied → flatten → fully connected layer → softmax. A sketch of both pipelines follows below.
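A rough sketch of the two pipelines in Keras (the 128-node hidden layer and the 32 filters are illustrative choices, not taken from the text):

from tensorflow import keras
from tensorflow.keras import layers

# ANN: flatten the 28 x 28 image to a 784-element vector first
ann = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),                        # 2D image -> 1D vector
    layers.Dense(128, activation="relu"),    # fully connected layer
    layers.Dense(10, activation="softmax"),  # final answer (10 digits)
])

# CNN: keep the 2D shape; convolve (weights + bias), activate, pool, classify
cnn = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # feature maps + ReLU
    layers.MaxPooling2D(pool_size=2),                     # max pooling layer
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                 # fully connected layer
    layers.Dense(10, activation="softmax"),               # softmax output
])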

 

The similarity is that ANN nodes and CNN filters play analogous roles: both contain weights and a bias.

Calculate the trainable parameters of a convolutional layer:

For 50 filters, where each filter is of size 3 x 3 x 3 → 27 weights per filter.

Total biases = each filter has one bias, so for 50 filters there are 50 biases.

Total learnable parameters = total weights + total biases = 27 x 50 + 50 = 1350 + 50 = 1400.

The number of trainable parameters in a CNN does not depend on the input size; it depends on the filter size and the number of filters.
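A quick way to check this arithmetic in Keras (a sketch assuming a 3-channel input, e.g. RGB, so that each 3 x 3 filter has 3 x 3 x 3 = 27 weights; the 64 x 64 spatial size is an arbitrary choice):

from tensorflow import keras
from tensorflow.keras import layers

conv = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),    # any spatial size gives the same count
    layers.Conv2D(50, kernel_size=3),  # 50 filters, one bias each
])
print(conv.count_params())             # (3*3*3)*50 + 50 = 1400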

 

Difference -

If you change the input layer size while keeping the same CNN layer, the number of trainable parameters is still 1400,

because in a CNN the trainable parameters do not depend on the input layer.

 

But in the case of an ANN, as you increase the input layer size (image size), the number of trainable parameters also increases.

Obviously, if the number of trainable parameters increases, then the computational cost increases and overfitting increases, and there is also a loss of important features like the spatial arrangement of pixels (since an ANN flattens the image to 1D instead of keeping it 2D like a CNN). The sketch below illustrates the difference in parameter growth.
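A minimal sketch contrasting the two behaviors, assuming an illustrative 128-node Dense layer for the ANN and 50 filters of 3 x 3 for the CNN (choices not taken from the text):

from tensorflow import keras
from tensorflow.keras import layers

for side in (28, 56):  # double the image side
    ann = keras.Sequential([
        keras.Input(shape=(side, side, 1)),
        layers.Flatten(),
        layers.Dense(128),                 # side*side*128 weights + 128 biases
    ])
    cnn = keras.Sequential([
        keras.Input(shape=(side, side, 1)),
        layers.Conv2D(50, kernel_size=3),  # fixed by filter size and count
    ])
    print(side, ann.count_params(), cnn.count_params())

# 28 -> ANN: 784*128 + 128 = 100480,  CNN: (3*3*1)*50 + 50 = 500
# 56 -> ANN: 3136*128 + 128 = 401536, CNN: still 500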
