Parameter Learning
So now we have a way to measure how well the hypothesis h_θ(x) fits the training data (remember, we just have to minimize the cost function J(θ_0, θ_1)). Okay, but how do we estimate those parameters?
Visually, here the cost function J(θ_0, θ_1) is graphed against its parameters. If we started where the red circles are, imagine we are trying to look for the lowest nearby point to walk to. Once we choose that point, we walk to it, and our position is now at that lower point. This process continues until we reach a lowest point. However, notice that if you start at another location, you could end up at a different lowest point (a different local minimum).
Gradient descent: an algorithm that finds the parameter values that result in the lowest cost function.
Repeat until convergence {
    θ_j := θ_j − α · ∂/∂θ_j J(θ_0, θ_1)    (for j = 0 and j = 1)
}
Let's explain what this means.
:= is just an assignment operator. In math, it's just not right to say a = a + 1, but in CS it is. So using this operator (a := a + 1) makes it all good.
α is the learning rate.
Visual representation: The distance between the X's in the graph is the magnitude of the learning rate.
If α is large → the steps are greater.
If α is small → the steps are smaller.
∂/∂θ_j J(θ_0, θ_1) is the partial derivative of the cost function. It represents the direction of the step taken.
This algorithm keeps reassigning the parameters until it has reached convergence.
At each iteration of this algorithm, the parameters of the cost function (so θ_0 and θ_1 in this case) are updated simultaneously: compute both new values first, then assign them. If you do not do this, the first parameter's updated value will affect the next parameter's update, and we don't want that.
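In code, the simultaneous update looks like the sketch below (the function name `step` and the sample data are hypothetical; the hypothesis h_θ(x) = θ_0 + θ_1·x is assumed):

```python
# One gradient descent iteration for h(x) = theta0 + theta1 * x,
# illustrating the simultaneous update of both parameters.

def step(theta0, theta1, alpha, data):
    m = len(data)
    # Compute BOTH gradients using the current (old) parameter values.
    grad0 = sum((theta0 + theta1 * x) - y for x, y in data) / m
    grad1 = sum(((theta0 + theta1 * x) - y) * x for x, y in data) / m
    # Only now assign: temporaries keep the update simultaneous,
    # so the new theta0 never leaks into theta1's gradient.
    temp0 = theta0 - alpha * grad0
    temp1 = theta1 - alpha * grad1
    return temp0, temp1
```

If you instead updated theta0 in place before computing grad1, the second gradient would be evaluated at a mixed point, which is a different (and incorrect) algorithm.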
Summary: The point is, wherever we start on the curve, gradient descent walks us downhill, so our hypothesis becomes more and more accurate as it minimizes the cost function.
If the slope (derivative) is positive → θ_1 decreases.
If the slope is negative → θ_1 increases.
The highest point on the graph has the steepest slope (and thus the largest derivative), and the slopes keep decreasing as we go down the graph. Looking at the equation again: θ_1 := θ_1 − α · ∂/∂θ_1 J(θ_1), we can see that as our derivative/slope decreases, the gradient descent update changes θ_1 less and less, explaining the smaller steps taken in the graph. Conclusion: we actually don't need to decrease α over time.
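A tiny numeric sketch shows this (assuming the simple cost J(θ) = θ², whose derivative is 2θ; all values are made up for illustration): with a fixed α, the step size shrinks on its own as θ approaches the minimum.

```python
# Minimize J(theta) = theta**2 (derivative: 2*theta) with a FIXED alpha.
# The step alpha * 2 * theta shrinks automatically as theta -> 0.

theta, alpha = 4.0, 0.1
for _ in range(5):
    step = alpha * 2 * theta      # step size is proportional to the slope
    theta = theta - step
    print(f"theta = {theta:.5f}, step taken = {step:.5f}")
```

Each printed step is smaller than the last even though α never changes, matching the picture described above.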
Aka batch gradient descent, because each update uses all m training samples (the whole "batch").
Repeat until convergence {
    θ_0 := θ_0 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))
    θ_1 := θ_1 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i)) · x^(i)
}, where m is the # of training samples
Steps that we took to get the above equation:
Start with the gradient descent algorithm
Substitute the mean squared error function for the cost function, since we are deriving the gradient descent algorithm for linear regression: J(θ_0, θ_1) = (1/2m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))².
Write the hypothesis function in terms of the parameters θ_0 and θ_1: h_θ(x) = θ_0 + θ_1·x.
Now find the partial derivative of J with respect to each single parameter, like so: ∂/∂θ_0 J = (1/m) · Σ (h_θ(x^(i)) − y^(i)) and ∂/∂θ_1 J = (1/m) · Σ (h_θ(x^(i)) − y^(i)) · x^(i).
Substitute the hypothesis equation back in and you get the gradient descent algorithm for linear regression!
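Putting the steps above together, batch gradient descent for linear regression can be sketched like this (the function name, toy data, and hyperparameter values are invented for illustration):

```python
# Minimal batch gradient descent for linear regression,
# following the derived update rules for theta0 and theta1.

def batch_gradient_descent(xs, ys, alpha=0.05, iterations=2000):
    m = len(xs)
    theta0 = theta1 = 0.0
    for _ in range(iterations):
        # h(x) = theta0 + theta1 * x, summed over ALL m samples ("batch")
        errors = [(theta0 + theta1 * x) - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        # Simultaneous update of both parameters
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Toy data generated from y = 1 + 2x, so we expect theta0 ≈ 1, theta1 ≈ 2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
print(batch_gradient_descent(xs, ys))
```

Because the data is exactly linear and α is small enough, the parameters settle very close to the generating values.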
Note: In these examples, we are using linear regression. Remember that the cost function for a linear regression problem is bowl-shaped (convex, even when it is multivariable/has multiple parameters), so it has ONE minimum. So gradient descent always converges for linear regression, assuming the learning rate is not too large (otherwise, gradient descent would overshoot and diverge).
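A quick sketch of the overshoot behavior (again assuming the simple cost J(θ) = θ² with derivative 2θ; the helper `run` is hypothetical): the update becomes θ := (1 − 2α)·θ, so when |1 − 2α| > 1 every step lands farther from the minimum.

```python
# Compare a small vs. a too-large learning rate on J(theta) = theta**2.
# Update: theta := theta - alpha * 2 * theta = (1 - 2*alpha) * theta.

def run(alpha, steps=20):
    theta = 1.0
    for _ in range(steps):
        theta = theta - alpha * 2 * theta
    return abs(theta)

print(run(0.1))   # small alpha: |theta| shrinks toward the minimum at 0
print(run(1.1))   # too-large alpha: |theta| grows every step (diverges)
```

With α = 0.1 the distance to the minimum shrinks by a factor of 0.8 per step; with α = 1.1 it grows by a factor of 1.2 per step, which is exactly the overshoot-and-diverge case the note warns about.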