If the terminator had CNN, the other Sarahs wouldn’t have died: image courtesy Adweek.com
When it comes to classifying images, let's say of size 64x64x3, a fully
connected network needs 12288 weights per neuron in the first hidden
layer. The number is even larger for a 225x225x3 image: 151875 weights
per neuron. Networks with a large number of parameters face several
problems, e.g. slower training and a higher chance of overfitting.
Courtesy: analyticsvidhya.com
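The weight counts above come directly from flattening the image: every hidden neuron connects to every pixel value. A minimal sketch of that arithmetic (the helper name `input_size` is illustrative, not from the original):

```python
# Weight count per fully connected neuron equals the flattened input size:
# height * width * colour channels.

def input_size(h, w, c):
    """Flattened size of an h x w image with c colour channels."""
    return h * w * c

print(input_size(64, 64, 3))    # 12288 weights per neuron
print(input_size(225, 225, 3))  # 151875 weights per neuron
```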
• ReLU, or Rectified Linear Unit — ReLU is mathematically expressed
as max(0, x). It means that any number below 0 is converted to 0,
while any positive number passes through unchanged.
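The rule above can be sketched directly in Python:

```python
def relu(x):
    """Rectified Linear Unit: max(0, x)."""
    return max(0.0, x)

# Negatives map to 0; positives pass through unchanged.
print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5, 3.0]])
# → [0.0, 0.0, 0.0, 1.5, 3.0]
```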
Usually the convolution, ReLU and max-pool layers are repeated a
number of times to form a network with multiple hidden layers,
commonly known as a deep neural network.
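One way to see the effect of stacking such blocks is to track how the spatial size shrinks. A minimal sketch, assuming a 64x64 input, 3x3 convolutions with padding 1, and 2x2 max-pooling (these layer sizes are hypothetical, not from the original):

```python
# Spatial dimensions through repeated conv + ReLU + max-pool blocks,
# using the standard output-size formula (size - kernel + 2*pad)//stride + 1.

def conv_out(size, kernel, stride=1, pad=0):
    return (size - kernel + 2 * pad) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 64                      # assumed 64x64 input image
for block in range(3):         # three conv + ReLU + max-pool blocks
    size = conv_out(size, kernel=3, pad=1)  # 3x3 conv, padding keeps size
    # ReLU is element-wise, so it leaves the spatial size unchanged
    size = pool_out(size)                   # 2x2 max-pool halves the size
    print(f"after block {block + 1}: {size}x{size}")
# → after block 1: 32x32
# → after block 2: 16x16
# → after block 3: 8x8
```

Each block halves the spatial resolution while (in a real network) the channel count typically grows, which is what lets the repeated pattern build up a deep feature hierarchy.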
A Convolution Neural Network: courtesy MDPI.com
References
http://cs231n.github.io/convolutional-networks/
https://github.com/soumith/convnet-benchmarks
https://austingwalters.com/convolutional-neural-networks-cnn-to-classify-sentences/