Friday, March 25, 2022

Ruminating on Gradient Descent

Gradient Descent is the most popular optimization algorithm used to train machine learning models by minimizing a cost function (aka loss function). In deep learning, neural nets are trained with back-propagation, which computes the gradient of the loss function with respect to every weight; an optimizer such as Gradient Descent then uses those gradients to update the weights.

Gradient descent essentially uses calculus: the gradient tells you the direction of steepest ascent, so stepping in the opposite direction moves you towards a local minimum of the function. The following two videos are excellent tutorials for understanding Gradient Descent and its use in neural nets; a small code sketch follows them.

https://youtu.be/IHZwWFHWa-w

https://youtu.be/sDv4f4s2SB8
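
To make the "direction of travel" idea concrete, here is a minimal sketch (my own illustrative example, not taken from the videos) of plain gradient descent on the one-variable function f(x) = (x - 3)^2, whose minimum is at x = 3:

```python
# Minimal gradient descent on f(x) = (x - 3)^2, which has its minimum at x = 3.
# The derivative f'(x) = 2 * (x - 3) points in the direction of steepest ascent,
# so each step moves x in the opposite direction.

def f(x):
    return (x - 3) ** 2

def df(x):
    return 2 * (x - 3)

x = 10.0             # arbitrary starting point
learning_rate = 0.1  # step size

for step in range(50):
    x = x - learning_rate * df(x)   # move against the gradient

print(x)  # converges towards 3.0, the minimum of f
```

The learning rate controls how big each step is: too small and convergence is slow, too large and the updates can overshoot the minimum.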

Once you understand these concepts, you will also realize that there is no magic involved when a neural net "learns by itself" -- ultimately, a neural net learning by itself just means minimizing a cost function (aka loss function).

Neural nets start with random values for their weights (on the connections between neurons) and biases. Then, guided by the cost function, these thousands (or millions) of weights and biases are gradually shifted towards values that minimize the loss, using optimization techniques such as gradient descent.
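
Here is a minimal sketch of that training loop, assuming a single linear neuron y = w*x + b and a mean-squared-error loss. This is a toy stand-in for a full neural net, but the pattern (random initialization, compute gradients of the loss, step against them) is the same one that scales up to millions of parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: the true relationship is y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Start from random weight and bias
w = rng.normal()
b = rng.normal()
learning_rate = 0.05

for epoch in range(500):
    y_pred = w * x + b
    error = y_pred - y
    loss = np.mean(error ** 2)       # the cost (loss) function

    # Gradients of the loss with respect to w and b
    grad_w = np.mean(2 * error * x)
    grad_b = np.mean(2 * error)

    # Shift the parameters against the gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # approaches w = 2.0, b = 1.0
```

In a real deep network, back-propagation is what computes the equivalents of grad_w and grad_b for every layer, and the update step looks just like the two lines above.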
