DL Recap: Gradient Descent in Neural Networks
Let’s dive deep to understand Gradient Descent
We have all learned how the model passes images through the hidden layers, computes the activation of each node, and produces an output. Revise that here.
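As a quick refresher, here is a minimal sketch of that forward pass for a single hidden layer. The layer sizes, the ReLU activation, and the random weights are illustrative assumptions, not something fixed by this article:

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied element-wise
    return np.maximum(0, x)

def forward(x, W1, b1, W2, b2):
    # Hidden layer: weighted sum of the inputs, then the activation
    h = relu(W1 @ x + b1)
    # Output layer: another weighted sum (no activation, for simplicity)
    return W2 @ h + b2

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # a toy 4-feature input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer: 4 -> 8
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # output layer: 8 -> 1
print(forward(x, W1, b1, W2, b2))              # the untrained model's output
```

With random weights, the output is essentially noise, which is exactly the point of the next paragraph.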
But the model will obviously not be perfect and output the expected values on the first attempt. If it did, there would be no learning in ML.
To correctly understand what is happening here, suppose you have a child, and you task him with eating noodles. Since he’s a child, he starts eating the noodles with his bare hands. You notice that he is making a mistake, so you show him the correct way and hand him a fork to see if he adapts. If he doesn’t, you take his hand and guide him to eat the noodles with the fork. And so on… You see? There were, and will be, many more iterations before the child can start eating correctly.
Now, consider your ML model as your child. You task it to perform something, and it does. You check whether the result is right or wrong, and based on that, you try to correct it. And this process goes on and on for…
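In code, that try-check-correct loop is gradient descent. Below is a minimal sketch for a single linear model trained with a mean-squared-error loss. The toy data, the learning rate, and the number of steps are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # toy inputs
true_w = np.array([2.0, -1.0, 0.5])  # the "correct way" we want it to learn
y = X @ true_w                       # toy targets

w = np.zeros(3)                      # start with a clueless model
lr = 0.1                             # learning rate: how big each correction is

for step in range(200):
    y_hat = X @ w                    # the model tries (forward pass)
    error = y_hat - y                # we check how wrong it is
    grad = X.T @ error / len(X)      # gradient of the (halved) MSE w.r.t. w
    w -= lr * grad                   # we nudge the weights to correct it

print(w)                             # close to [2.0, -1.0, 0.5] after many iterations
```

Each pass through the loop is one of those parenting iterations: the model answers, the loss measures the mistake, and the gradient step is the gentle correction.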