Illustration 1

Let’s create a simple training loop that shows how to use the custom optimizer to train a model. The loop performs the following steps, each of which maps onto a single call (sketched just after this list):

  1. Reset the gradients of the model’s parameters to zero using the optimizer’s zero_grad method.
  2. Compute the forward pass of the model on some input data and calculate the loss.
  3. Compute the gradients of the loss with respect to the model’s parameters by calling the loss’s backward method.
  4. Call the step method of the optimizer to update the model’s parameters based on the current gradients and the optimizer’s internal state.
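
Here is that mapping as a minimal sketch of one iteration (model, criterion, optimizer, X, and y are defined in the steps below):

Python3

# One training iteration (a sketch; the names are defined in later steps)
optimizer.zero_grad()        # 1. reset the gradients
y_pred = model(X)            # 2. forward pass...
loss = criterion(y_pred, y)  #    ...and loss
loss.backward()              # 3. compute gradients of the loss
optimizer.step()             # 4. update the parameters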

Step 1: Import the necessary libraries:

Python3
# Import the necessary libraries
import torch
import torch.nn as nn
# To plot the figure
import matplotlib.pyplot as plt


Step 2: Define a custom optimizer class that inherits from torch.optim.Optimizer. In this example, we will create a custom optimizer that implements the Momentum optimization algorithm.

Python3
# MomentumOptimizer
class MomentumOptimizer(torch.optim.Optimizer):

    # Init Method:
    def __init__(self, params, lr=1e-3, momentum=0.9):
        super(MomentumOptimizer, self).__init__(
            params, defaults={'lr': lr, 'momentum': momentum})
        # Create a momentum buffer for each parameter
        for group in self.param_groups:
            for p in group['params']:
                self.state[p] = dict(mom=torch.zeros_like(p.data))

    # Step Method
    def step(self):
        for group in self.param_groups:
            for p in group['params']:
                # Skip parameters that did not receive a gradient
                if p.grad is None:
                    continue
                if p not in self.state:
                    self.state[p] = dict(mom=torch.zeros_like(p.data))
                mom = self.state[p]['mom']
                # Update the buffer in place so momentum accumulates
                # across steps, then apply it to the parameter
                mom.mul_(group['momentum']).sub_(group['lr'] * p.grad.data)
                p.data += mom
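
Each call to step applies the classical momentum update: mom ← momentum · mom − lr · grad, followed by param ← param + mom. The buffer is updated in place (mul_/sub_) so the velocity genuinely accumulates across steps; rebinding mom to a fresh tensor instead would leave the stored state untouched and silently reduce the method to plain gradient descent.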


Step 3: Define a simple model and a loss function, and initialize an instance of the custom optimizer:

Python3
# Define a simple model (2 input features, 1 output to match the targets)
model = nn.Linear(2, 1)

# Define a loss function
criterion = nn.MSELoss()

# Define the optimizer
optimizer = MomentumOptimizer(model.parameters(), lr=1e-3, momentum=0.9)


Step 4: Generate some random data to train the model:

Python3
# Generate some random data
X = torch.randn(100, 2)
y = torch.randn(100, 1)
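
With purely random targets there is no real pattern to learn, so the loss will fall toward the variance of y and then plateau. If you would rather see the loss driven close to zero, you can optionally generate the targets from a fixed linear rule instead (the weights below are an arbitrary choice for illustration):

Python3

# Optional: targets with a learnable linear structure
true_w = torch.tensor([[2.0], [-3.0]])      # arbitrary ground-truth weights
y = X @ true_w + 0.1 * torch.randn(100, 1)  # small Gaussian noise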


Step 5: Train the model with the custom optimizer and plot the training loss:

Python3
# Training loop
for i in range(2500):
    optimizer.zero_grad()
    y_pred = model(X)
    loss = criterion(y_pred, y)

    # Plot the loss every 100 iterations
    if i % 100 == 0:
        plt.plot(i, loss.item(), 'ro-')

    loss.backward()
    optimizer.step()

plt.title('Losses over iterations')
plt.xlabel('iterations')
plt.ylabel('Losses')
plt.show()


Output:

[Output figure: “Losses over iterations” — the training loss plotted every 100 iterations, decreasing over time]

The plot shows the loss decreasing over iterations, confirming that the custom optimizer is updating the model’s parameters and minimizing the loss function.

Note: The above loop is a minimal example of how to use a custom optimizer; it is meant to illustrate how the optimizer’s step method works.
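
As an optional sanity check, you can train a fresh model on the same data with PyTorch’s built-in torch.optim.SGD. For a constant learning rate, its momentum update is mathematically equivalent to the one implemented above (the built-in version keeps the buffer unscaled by the learning rate, which changes only the buffer’s scale, not the parameter trajectory), so the two loss curves should look essentially the same up to random initialization:

Python3

# Optional sanity check: compare against the built-in SGD with momentum,
# reusing the data X, y and the loss `criterion` defined above
model_ref = nn.Linear(2, 1)
ref_opt = torch.optim.SGD(model_ref.parameters(), lr=1e-3, momentum=0.9)

for i in range(2500):
    ref_opt.zero_grad()
    loss = criterion(model_ref(X), y)
    loss.backward()
    ref_opt.step()

print('Built-in SGD final loss:', loss.item())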

Custom Optimizers in PyTorch

In PyTorch, an optimizer is a concrete implementation of an optimization algorithm that updates the parameters of a neural network so as to minimize its loss. PyTorch provides various built-in optimizers such as SGD, Adam, and Adagrad that can be used out of the box. However, in some cases the built-in optimizers may not suit a particular problem or may not perform well. In such cases, you can create your own custom optimizer.

A custom optimizer in PyTorch is a class that inherits from the torch.optim.Optimizer base class and implements the __init__ and step methods. The __init__ method initializes the optimizer’s internal state, and the step method updates the parameters of the model.
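
As a minimal sketch of that structure (the class name MyOptimizer and its plain gradient-descent update are placeholders, not part of the PyTorch API):

Python3

import torch

# Minimal skeleton of a custom optimizer (a sketch; the update rule
# here is plain gradient descent, used only as a placeholder)
class MyOptimizer(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3):
        # `defaults` holds the per-parameter-group hyperparameters
        super().__init__(params, defaults={'lr': lr})

    def step(self):
        for group in self.param_groups:
            for p in group['params']:
                # Skip parameters that did not receive a gradient
                if p.grad is None:
                    continue
                p.data -= group['lr'] * p.grad.data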
