General Rule
For each training example (i.e. input vector), if the perceptron outputs:
- the correct answer (either 0 or 1), then leave the weights alone
- a false negative (0 when the answer is 1), then add the input vector to the weights vector
- a false positive (1 when the answer is 0), then subtract the input vector from the weights vector
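Here is a minimal sketch of this rule in Python (the function name and the NumPy usage are my own choices for illustration, not from any particular source):

```python
import numpy as np

def perceptron_update(weights, x, target):
    """One training step of the perceptron rule.

    weights and x are same-length vectors; x includes a constant 1
    for the bias. target is the desired output, 0 or 1.
    """
    z = np.dot(weights, x)
    output = 1 if z > 0 else 0   # step function
    if output == target:
        return weights           # correct answer: leave the weights alone
    elif target == 1:
        return weights + x       # false negative: add the input vector
    else:
        return weights - x       # false positive: subtract the input vector
```

For example, `perceptron_update(np.array([0.0, 0.0]), np.array([3.0, 1.0]), 1)` outputs 0 on a target of 1 (a false negative), so it returns the weights plus the input: `array([3., 1.])`.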
Geometric Intuition
Initialize a vector space where the dimensions are weights (including the bias). A location in this weight space is dictated by the values of the weights.
For simplicity, let's initialize a 2D weight space (one weight, and one bias). Since an input vector has the same dimension as the weight vector (note that the input component for the bias is always equal to 1), we can also place our input vector within the weight space.
Now let's take a single training example where the right answer is 1. That means we want the z we plug into our step function to be greater than 0.
We must ask, then: when is z, the dot product of the weights and the inputs, greater than 0?
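For a concrete (made-up) example: take weights w = (2, −1), where −1 is the bias, and input x = (3, 1), where the trailing 1 is the constant bias input. Then z = w · x = 2·3 + (−1)·1 = 5, which is greater than 0, so the step function outputs 1.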
If you are familiar with the geometric view of dot products, you will know that dot products can be seen as a measure of how much two vectors point in the same direction.
- When two vectors lie in similar directions, the dot product is positive.
- When they lie in dissimilar directions, the dot product is negative.
- The dot product is exactly zero when the two vectors are perpendicular.
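A quick numerical check of these three cases, with vectors chosen arbitrarily for illustration:

```python
import numpy as np

x = np.array([2.0, 1.0])                 # a fixed input vector

similar       = np.array([1.0, 1.0])     # roughly the same direction as x
dissimilar    = np.array([-2.0, -1.0])   # roughly the opposite direction
perpendicular = np.array([-1.0, 2.0])    # at 90 degrees to x

print(np.dot(x, similar))        # 3.0  -> positive
print(np.dot(x, dissimilar))     # -5.0 -> negative
print(np.dot(x, perpendicular))  # 0.0  -> exactly zero
```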
So, we can section off our weight space with a dividing line perpendicular to our input vector. Any weight vector lying along this dividing line would return a z of exactly 0.
Then, depending on the desired answer, we want our weight vector to land on one side of the dividing line or the other. For example, in our case:
| When the Desired Answer is 1 | When the Desired Answer is 0 |
| --- | --- |
| "Good" weight vectors lie on the same side of the dividing line as the input vector (z > 0); "bad" weight vectors lie on the opposite side. | Swap "good weights" with "bad weights": the good weight vectors now lie on the side opposite the input vector (z < 0). |
Thus you can see how the "General Rule" above applies: adding the input vector to the weights increases z by x · x (which is positive for any nonzero input), moving the weights toward the good side of the line, while subtracting it does the opposite.
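To tie the two together, here is a small sketch (again with made-up numbers) of a false negative being corrected. In general several updates may be needed before the weights cross the dividing line, but each one moves them in the right direction:

```python
import numpy as np

x = np.array([2.0, 1.0])     # input vector; desired answer is 1
w = np.array([-1.0, -1.0])   # "bad" weights: on the wrong side of the line

z = np.dot(w, x)
print(z)                     # -3.0 -> outputs 0, a false negative, so add x

w = w + x                    # the update from the General Rule
print(np.dot(w, x))          # 2.0 -> z grew by x . x = 5; w crossed the line
```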