Stochastic Gradient Descent Using the PyTorch Linear Module
In the previous tutorial on SGD here, I explored how we can implement stochastic gradient descent using PyTorch's built-in gradient calculation, loss functions, and optimizers. In this tutorial, we'll continue exploring PyTorch by using the same constructs along with the built-in Linear module.
Steps:
Preparation Steps
- Prepare the training data
- Define the PyTorch linear model
- Define a loss function to evaluate the difference between the model's output and the actual output values.
- Define the learning rate and the optimizer.
Training Steps
- Call the PyTorch model on the training data to calculate the output.
- Adjust the weight parameters according to the computed gradients and the specified learning rate.
- Repeat the previous steps until the loss falls below an acceptable threshold or a maximum number of iterations is reached.
As we’ll see, these steps are easy to perform using PyTorch; a minimal sketch of the full workflow is shown below.
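Before walking through each step in detail, here is a minimal end-to-end sketch of the workflow outlined above. The toy dataset (y = 2x), the learning rate, and the number of epochs are assumptions made only for illustration, not values from this tutorial:

```python
import torch
import torch.nn as nn

# Illustrative training data (an assumption for this sketch): y = 2 * x
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
Y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

# Built-in Linear module: 1 input feature -> 1 output feature
model = nn.Linear(in_features=1, out_features=1)

# Loss function and optimizer; learning rate chosen only for illustration
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(100):
    y_pred = model(X)          # forward pass: call the model on the training data
    loss = loss_fn(y_pred, Y)  # evaluate the difference from the actual outputs
    loss.backward()            # compute gradients of the loss w.r.t. the parameters
    optimizer.step()           # adjust the weights using the learning rate
    optimizer.zero_grad()      # reset gradients before the next iteration
```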
Step 1
Prepare the Input
In this step, we’ll use PyTorch tensors to create the input values X, which is a sequence of numbers…
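As a sketch of what this could look like (the exact sequence is an assumption, since the step is only outlined here), X can be built with torch.arange and reshaped into a column of single-feature samples:

```python
import torch

# A simple sequence of numbers as input, reshaped to (n_samples, 1)
# so each row is one single-feature sample for nn.Linear.
X = torch.arange(1.0, 11.0).reshape(-1, 1)
print(X.shape)  # torch.Size([10, 1])
```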