
Deep Learning and its Applications
STOCK MARKET PREDICTION AND CYBERSECURITY
What is Deep Learning?
• Deep learning is a sub-field of
machine learning dealing with
algorithms inspired by the
structure and function of the
brain called artificial neural
networks.
• In other words, it mirrors the functioning of our brains. Deep learning algorithms are structured like the nervous system, where each neuron is connected to other neurons and passes information along.
Deep learning models work in layers, and a typical model has at least three layers. Each layer accepts information from the previous layer and passes it on to the next one.
Origins of Deep Learning
Why Deep Learning

Deep learning models tend to keep improving as the amount of data grows, whereas older machine learning models stop improving after a saturation point.
Difference between ML and DL

One of the differences between machine learning and deep learning models is feature extraction. In machine learning, feature extraction is done by humans, whereas deep learning models figure the features out by themselves.
Two main aspects of Deep Learning
• LINEAR REGRESSION
• LOGISTIC REGRESSION
Linear regression

• It is a statistical method that allows us to summarize and study relationships between two continuous variables.
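A minimal sketch of fitting a line in Python (assuming NumPy is available; the data values here are purely illustrative):

    import numpy as np

    # Hypothetical data: hours studied (x) vs. exam score (y)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([52.0, 57.0, 61.0, 68.0, 71.0])

    # Least-squares fit of y = slope * x + intercept
    slope, intercept = np.polyfit(x, y, deg=1)
    print(f"score = {slope:.2f} * hours + {intercept:.2f}")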
Logistic regression
• It is a statistical method for
analyzing a dataset in which
there are one or more
independent variables that
determine an outcome.
• The outcome is binary: there are only two possible outcomes, True or False.
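A minimal sketch of the prediction side of logistic regression, assuming the weights have already been learned (the parameter values below are made up):

    import numpy as np

    def sigmoid(z):
        # Squashes any real number into the range (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    w = np.array([1.5, -0.8])   # hypothetical learned weights
    b = 0.2                     # hypothetical learned bias

    x = np.array([2.0, 1.0])    # one input example
    probability = sigmoid(np.dot(w, x) + b)
    prediction = probability >= 0.5   # True or False outcome
    print(probability, prediction)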
Activation Function

 Activation functions are functions that decide, given the inputs into a node, what the node’s output should be. Because it’s the activation function that decides the actual output, we often refer to the outputs of a layer as its “activations”.
 One of the simplest activation functions is the Heaviside step function. It returns 0 if the linear combination is less than 0, and 1 if the linear combination is positive or equal to zero.
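A minimal sketch of the Heaviside step function just described:

    def heaviside_step(linear_combination):
        # 1 if the linear combination is positive or equal to zero, else 0
        return 1 if linear_combination >= 0 else 0

    print(heaviside_step(-0.3))   # 0
    print(heaviside_step(0.0))    # 1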
Weights

 When input data comes into a neuron, it gets multiplied by a weight value that is assigned to that particular input.
 These weights start out as random values, and as the neural network learns more about what kind of input data leads to a particular outcome, the network adjusts the weights based on any errors in categorization that the previous weights resulted in.
 This is called training the neural network.
Bias

 A bias unit is an "extra" neuron added to each pre-output layer that stores the value of 1.
 Bias units aren't connected to any previous layer and in this sense don't represent a true "activity".
 Weights and biases are the learnable parameters of deep learning models.
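Putting weights, bias, and activation together, the output of a single neuron can be sketched as follows (the input and parameter values are illustrative):

    import numpy as np

    def neuron_output(inputs, weights, bias):
        # Each input is multiplied by its weight; the bias shifts the sum
        linear_combination = np.dot(weights, inputs) + bias
        return 1 if linear_combination >= 0 else 0   # Heaviside activation

    x = np.array([0.5, -1.0])
    w = np.array([0.8, 0.2])    # learnable
    b = 0.1                     # learnable
    print(neuron_output(x, w, b))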
Neural Network

As explained before, deep learning is a sub-field of machine learning dealing with algorithms inspired by the structure and function of the brain, called artificial neural networks.
Just as we have neurons in the nervous system, we can define each line as one neuron, connected to the neurons of the next layer along with the other neurons in the same layer.
Neural Networks Example

• In this case we have two neurons that represent the two lines.
• The given picture is an example of a simple neural network where two neurons accept the input data, compute yes or no based on their conditions, and pass the results on to a second-layer neuron, which combines the results from the previous layer (see the sketch below).
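A sketch of such a network in Python, with made-up weights standing in for the two conditions in the picture:

    import numpy as np

    def step(z):
        return 1 if z >= 0 else 0

    def tiny_network(x):
        # First layer: two neurons, one per line, each answering yes (1) or no (0)
        h1 = step(np.dot([1.0, -1.0], x) + 0.5)   # hypothetical condition 1
        h2 = step(np.dot([-1.0, 1.0], x) + 0.5)   # hypothetical condition 2
        # Second layer: one neuron combining the answers (fires only if both say yes)
        return step(h1 + h2 - 1.5)

    print(tiny_network(np.array([0.2, 0.4])))   # 1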
Training

 Weights start out as random values, and as the neural network learns more about what kind of input data leads to an output, the network adjusts the weights based on any errors in categorization that the previous weights resulted in.
 This is called training the neural network.
 Once we have the trained network, we can use it to predict the output for similar inputs.
Error

 This is a very important concept that defines how well a network is performing during training. In the training phase, the network uses the error values to adjust the weights so that the error is reduced at each step. The goal of the training phase is to minimize the error.
 Mean Squared Error is one of the popular error functions; it is a modified version of the Sum of Squared Errors.
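A minimal sketch of Mean Squared Error in Python:

    import numpy as np

    def mean_squared_error(targets, predictions):
        # Average of the squared differences between targets and predictions
        return np.mean((targets - predictions) ** 2)

    y_true = np.array([1.0, 0.0, 1.0])
    y_pred = np.array([0.9, 0.2, 0.6])
    print(mean_squared_error(y_true, y_pred))   # 0.07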
Forward Propagation

 By propagating values from the first layer (the input layer) through all
the mathematical functions represented by each node, the
network outputs a value. This process is called a forward pass.
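A minimal sketch of a forward pass through a small network (all parameter values below are made up):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical parameters: 3 inputs -> 2 hidden units -> 1 output
    W1 = np.array([[0.2, -0.4, 0.1],
                   [0.5,  0.3, -0.2]])
    b1 = np.array([0.1, -0.1])
    W2 = np.array([0.7, -0.3])
    b2 = 0.05

    def forward_pass(x):
        hidden = sigmoid(W1 @ x + b1)       # input layer -> hidden layer
        return sigmoid(W2 @ hidden + b2)    # hidden layer -> output

    print(forward_pass(np.array([1.0, 0.5, -1.0])))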
Gradient Descent

 Gradient descent is an optimization algorithm used to find the values of the parameters (coefficients) of a function f that minimize a cost function.
 Gradient descent is best used when the parameters cannot be calculated analytically (e.g. using linear algebra) and must be searched for by an optimization algorithm.
 Gradient descent is used to find the minimum error by minimizing a “cost” function.
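A minimal sketch of gradient descent on a one-parameter cost function, cost(w) = (w - 3)^2:

    def cost_gradient(w):
        # Derivative of (w - 3)^2 with respect to w
        return 2.0 * (w - 3.0)

    w = 0.0              # starting point
    learning_rate = 0.1
    for _ in range(100):
        w -= learning_rate * cost_gradient(w)
    print(w)             # converges towards 3.0, the minimum of the cost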
Back Propagation

 In neural networks, you forward propagate to get the output and compare it with the real value to get the error.
 To minimize the error, you then propagate backwards, finding the derivative of the error with respect to each weight and subtracting this value (scaled by a learning rate) from the weight.
 This is called back propagation.
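A minimal sketch of the idea for a single sigmoid neuron trained on one example (values are illustrative; the gradient follows the chain rule through a squared error and the sigmoid):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -0.2])   # one training example
    target = 1.0
    w = np.array([0.1, 0.4])    # initial weights
    b = 0.0
    learning_rate = 0.5

    for _ in range(1000):
        output = sigmoid(np.dot(w, x) + b)          # forward propagation
        error = output - target                     # compare with the real value
        d_output = error * output * (1.0 - output)  # back-propagated gradient
        w -= learning_rate * d_output * x           # derivative of error w.r.t. each weight
        b -= learning_rate * d_output
    print(output)   # close to the target after training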
Regularisation and Optimisation

 Regularisation is a technique used to solve the over-fitting problem. Over-fitting happens when a model becomes biased towards one type of dataset. There are different types of regularisation techniques; one of the most widely used is dropout.
 Optimisation is a technique used to minimize the loss function of the network. There are different types of optimisation algorithms; however, gradient descent and its variants are the popular ones these days.
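A minimal sketch of dropout ("inverted dropout" style, which rescales the surviving activations so their expected value is unchanged):

    import numpy as np

    def dropout(activations, drop_probability=0.5):
        # Randomly zero out activations during training
        mask = np.random.rand(*activations.shape) >= drop_probability
        return activations * mask / (1.0 - drop_probability)

    h = np.array([0.2, 0.9, 0.4, 0.7])
    print(dropout(h))   # roughly half the activations are dropped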
What we intend to do

 We intend to use Deep Learning to train neural networks that can predict stock market prices, and also to apply it in cybersecurity.
 Stock market prediction and correlation of trends between stock markets will be done using data from the past, as well as sentiment analysis of news articles and social media posts.
THE END
