Rosenblatt (1958) proposed the perceptron as the first model for learning with a teacher (i.e., supervised learning).
[Figure: Perceptron architecture. The inputs are combined into a weighted sum (plus bias), which is passed through a hard limiter producing an output of +1 or -1.]
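The signal flow in the figure can be sketched in code. This is a minimal illustration; the function names and the example weights are my own, not from the slides:

```python
import numpy as np

def hard_limiter(v):
    """Signum-style activation: +1 if v >= 0, else -1."""
    return 1 if v >= 0 else -1

def perceptron_output(w, b, x):
    """Induced local field v = w.x + b, passed through the hard limiter."""
    v = np.dot(w, x) + b
    return hard_limiter(v)

# Illustrative weights and input (hypothetical values)
w = np.array([0.5, -0.3])
b = 0.1
print(perceptron_output(w, b, np.array([1.0, 1.0])))  # prints 1
```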
The perceptron operates in an m-dimensional input (signal/feature) space. Its decision boundary is a hyperplane:

sum_{i=1}^{m} w_i x_i + b = 0
Perceptron Convergence Theorem

At a particular iteration n, the perceptron classifies the input x(n) according to the sign of w^T(n) x(n). For the learning process to converge, the two classes must be linearly separable, i.e., the boundary between them must be expressible as a linear equation.
Let H1 be the subset of training vectors drawn from class C1, and H2 the subset drawn from class C2. Linear separability means there exists a weight vector w such that w^T x > 0 for every x in H1 and w^T x <= 0 for every x in H2; the boundary is the hyperplane w^T x = 0.
The algorithm:

Rule 1: If x(n) is correctly classified by the current weights, leave them unchanged: w(n+1) = w(n).

Rule 2: Otherwise, correct the weights:
w(n+1) = w(n) - eta(n) x(n)  if w^T(n) x(n) > 0 but x(n) belongs to class C2;
w(n+1) = w(n) + eta(n) x(n)  if w^T(n) x(n) <= 0 but x(n) belongs to class C1,
where the learning-rate parameter eta(n) is positive.
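The two rules can be folded into one loop by using the class label d in {+1, -1} as the sign of the correction, an equivalent standard form. A minimal sketch, with helper names and toy conventions of my own:

```python
import numpy as np

def train_perceptron(X, labels, eta=1.0, max_epochs=100):
    """Fixed-increment perceptron learning.

    X: (N, m) training samples; labels: +1 for class C1, -1 for class C2.
    Augmenting each x with a leading 1 folds the bias into the weight vector.
    """
    Xa = np.hstack([np.ones((len(X), 1)), X])  # augmented inputs
    w = np.zeros(Xa.shape[1])                  # w(0) = 0
    for _ in range(max_epochs):
        errors = 0
        for x, d in zip(Xa, labels):
            y = 1 if w @ x >= 0 else -1        # hard-limiter output
            if y != d:                         # Rule 2: misclassified sample
                w += eta * d * x               # push w toward the true class
                errors += 1
            # Rule 1: correctly classified -> no change
        if errors == 0:                        # a full pass with no errors
            break
    return w
```

For example, `train_perceptron(np.array([[2., 2.], [-2., -1.]]), np.array([1, -1]))` returns a weight vector separating the two points.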
Weight update when errors of the first kind occur (vectors from C1 misclassified into C2): suppose that for n = 1, 2, ..., every x(n) belongs to H1 yet w^T(n) x(n) <= 0, that is, every presentation is incorrectly classified. With eta(n) = 1 and w(0) = 0, Rule 2 gives w(n+1) = w(n) + x(n). Applying this iteratively,

w(n+1) = x(1) + x(2) + ... + x(n).
Recall our assumption: the classes are linearly separable. So a solution w_o exists with

w_o^T x(n) > 0  for every x(n) in H1.

Now let alpha = min over x(n) in H1 of w_o^T x(n). Now, from w(n+1) = x(1) + ... + x(n),

w_o^T w(n+1) = w_o^T x(1) + ... + w_o^T x(n) >= n * alpha.
By the Cauchy-Schwarz inequality (in the Euclidean norm!),

||w_o||^2 ||w(n+1)||^2 >= [w_o^T w(n+1)]^2 >= n^2 alpha^2,

so

||w(n+1)||^2 >= n^2 alpha^2 / ||w_o||^2.   (RESULT 1)
On the other hand, w(k+1) = w(k) + x(k) for x(k) in H1, so

||w(k+1)||^2 = ||w(k)||^2 + 2 w^T(k) x(k) + ||x(k)||^2 <= ||w(k)||^2 + ||x(k)||^2,

since w^T(k) x(k) <= 0 whenever an update occurs. Adding across k = 1, ..., n, with w(0) = 0:

||w(n+1)||^2 <= sum_{k=1}^{n} ||x(k)||^2 <= n * beta,  where beta = max over x(k) in H1 of ||x(k)||^2.   (RESULT 2)
RESULT 1 grows quadratically in n while RESULT 2 grows only linearly, so for the errors to keep happening both bounds must hold, and n cannot go beyond

n_max = beta ||w_o||^2 / alpha^2.

So, the adaptation (error-correction) process terminates (converges) at some finite value of n whenever a solution exists. (eta = 1 and w(0) = 0 were assumed WLOG.)

Caveats:
• Sufficient samples?!
• Finds a boundary only for the training samples!
The theorem: Let the subsets H1 and H2 of training vectors be linearly separable, and let the inputs be presented repeatedly to the perceptron with the fixed-increment rule (eta = 1, w(0) = 0). Then the perceptron converges after some n_max iterations, in the sense that w(n_max) = w(n_max + 1) = ... is a solution vector, and we have

n_max = beta ||w_o||^2 / alpha^2,

where w_o is any solution vector, alpha is the minimum of w_o^T x over H1, and beta is the maximum of ||x||^2 over the training vectors.
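A quick numerical sanity check of the bound on a toy set. The data, the chosen solution w_o, and the convention of negating class-2 vectors so that every sample should satisfy w^T x > 0 are assumptions made here for illustration:

```python
import numpy as np

# Toy linearly separable set (augmented vectors); class-2 vectors are
# negated so that a solution w satisfies w.x > 0 for every row.
H1 = np.array([[1., 2., 2.], [1., 3., 1.],     # class 1
               [-1., 2., 1.], [-1., 1., 3.]])  # class 2, negated

w_o = np.array([-1., 2., 1.])                  # a known separating solution
alpha = min(H1 @ w_o)                          # minimum margin over H1
beta = max(np.sum(H1**2, axis=1))              # maximum squared norm
n_max = beta * np.dot(w_o, w_o) / alpha**2

# Fixed-increment learning (eta = 1, w(0) = 0); count the corrections.
w = np.zeros(3)
updates = 0
changed = True
while changed:
    changed = False
    for x in H1:
        if w @ x <= 0:          # misclassified -> error-correction step
            w += x
            updates += 1
            changed = True
print(updates <= n_max)  # prints True
```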
So if the sign of w^T(n) x(n) is wrong, the error-correction step pushes it toward the correct sign. So repeated presentation of the training set will yield correct classification of every sample within a finite number of corrections.
Now if we write the output as y(n) = sgn(w^T(n) x(n)) and the desired output as d(n) = +1 for x(n) in C1, d(n) = -1 for x(n) in C2, both rules collapse into a single update, like LMS:

Error-correction learning:  w(n+1) = w(n) + eta [d(n) - y(n)] x(n)

The step size trades off two goals: a small step (the usual choice) gives a stable, smooth weight estimate, while a large step gives fast adaptation.
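A minimal sketch of one error-correction step in this form, assuming the quantized output y = sgn(w^T x); the sample and the two step sizes are arbitrary illustrations:

```python
import numpy as np

def error_correction_step(w, x, d, eta):
    """One update w <- w + eta * (d - y) * x, with y = sgn(w.x)."""
    y = 1.0 if w @ x >= 0 else -1.0
    return w + eta * (d - y) * x

w = np.zeros(2)
x, d = np.array([1.0, -2.0]), -1.0            # a class-2 sample, misclassified
w_small = error_correction_step(w, x, d, eta=0.1)  # smooth, stable estimate
w_large = error_correction_step(w, x, d, eta=1.0)  # fast, noisier adaptation
print(w_small, w_large)
```

Note that when y = d the factor (d - y) vanishes, so correctly classified samples leave the weights unchanged, exactly Rule 1 above.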
Relation with the Bayes Classifier

The Bayes classifier chooses the class that minimizes the average risk. Under Gaussianity, with both classes Gaussian and sharing a common covariance matrix, the optimal decision boundary turns out to be linear. So, in this special environment, the Bayes classifier is itself a linear classifier of the same form as the perceptron.
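A sketch of the equal-covariance, equal-prior case, where the Bayes rule reduces to a linear discriminant w^T x + b; the class statistics below are hypothetical:

```python
import numpy as np

# Hypothetical class statistics: shared covariance C, means mu1, mu2.
mu1 = np.array([2.0, 0.0])
mu2 = np.array([-2.0, 0.0])
C = np.array([[1.0, 0.0], [0.0, 1.0]])

# With equal covariances and equal priors, the Bayes classifier is linear:
#   decide class 1 if w.x + b > 0
Cinv = np.linalg.inv(C)
w = Cinv @ (mu1 - mu2)
b = -0.5 * (mu1 + mu2) @ Cinv @ (mu1 - mu2)

x = np.array([1.0, 3.0])
print("class 1" if w @ x + b > 0 else "class 2")  # prints class 1
```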
The perceptron and the Gaussian Bayes classifier: similar yet different. Both implement linear decision boundaries, but the perceptron is nonparametric: it makes no distributional assumptions and adapts by correcting errors on the training samples, converging only when the classes are linearly separable. The Bayes classifier is parametric: it assumes Gaussian class distributions and minimizes the risk even when the classes overlap.