Visualizing Training Progress in PyTorch Using Matplotlib

Matplotlib is a widely used Python plotting library that provides flexible and powerful tools for creating static, animated, and interactive visualizations. It is particularly well-suited for producing publication-quality plots and charts.

Step 1: Import Necessary Libraries and Generate a Sample Dataset

In this step, we import the necessary libraries and generate a sample dataset: 100 random feature values and labels that follow y = 3x + 2 with added Gaussian noise.

import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
# Sample data
X = torch.randn(100, 1)  # Sample features
y = 3 * X + 2 + torch.randn(100, 1)  # Sample labels with noise


Step 2: Define the Model

  1. The `LinearRegression` class in PyTorch defines a simple linear regression model. It inherits from the `nn.Module` class, making it a neural network model.
  2. The constructor (`__init__` method) initializes the model’s structure, creating a single linear layer (`nn.Linear`) with one input feature and one output feature.
  3. This linear layer is stored as an attribute named `self.linear`. The `forward` method defines how input data `x` is processed through this linear layer to produce the model’s output.
  4. Specifically, the input `x` is passed through `self.linear`, and the resulting output is returned. This method encapsulates the forward pass computation of the neural network, determining how inputs are transformed into outputs by the model.
# Define a simple linear regression model
class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # One input feature, one output

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()
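
Before training, it can be helpful to confirm the model's structure and run a quick forward pass on a dummy batch. The snippet below is a small sanity check; the batch size of 5 and the print statements are illustrative additions, not part of the original tutorial.

# Quick sanity check of the untrained model (illustrative only)
sample = torch.randn(5, 1)       # five dummy inputs, one feature each
with torch.no_grad():            # no gradients needed for a quick check
    preds = model(sample)
print(model)                     # shows the single Linear(1, 1) layer
print(preds.shape)               # torch.Size([5, 1]): one output per input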

Step 3: Define the Loss Function, Optimizer and Training Loop

In the following code, we define Mean Squared Error (MSE) as the loss function and use the Stochastic Gradient Descent (SGD) optimizer, which updates the model's parameters from the computed gradients with a learning rate of 0.01.

# Define loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
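
For intuition, a plain SGD step without momentum simply subtracts each parameter's gradient, scaled by the learning rate, from the parameter itself. The loop below is a rough hand-written sketch of what optimizer.step() does once loss.backward() has populated the gradients; in practice you would always call optimizer.step() instead.

# Hand-written equivalent of one plain SGD update (illustrative sketch)
lr = 0.01
with torch.no_grad():
    for param in model.parameters():
        if param.grad is not None:
            param -= lr * param.grad   # p = p - lr * dL/dp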

This code runs a training loop for a neural network model over multiple epochs, computing and optimizing the loss using gradient descent. Loss values are stored for plotting, and progress is printed every 10 epochs.

# Training loop
num_epochs = 100
losses = []
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(X)
    loss = criterion(outputs, y)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print progress
    if (epoch+1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

    # Store loss for plotting
    losses.append(loss.item())
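
Because the data was generated as y = 3x + 2 plus noise, the learned weight and bias should move toward those values as training progresses. A quick check after the loop (the exact numbers will vary from run to run):

# Compare learned parameters with the values used to generate the data
w = model.linear.weight.item()
b = model.linear.bias.item()
print(f'Learned weight: {w:.2f} (true value 3), learned bias: {b:.2f} (true value 2)')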

Step 4: Visualizing Training Progress in PyTorch using Matplotlib

Using the following code, we can visualize the training loss curve using matplotlib.

  • The plt.plot(losses) line plots the loss values stored in the losses list against the epoch number.
  • The x-axis represents the epoch number, and the y-axis represents the corresponding loss value.
  • The plt.xlabel('Epoch'), plt.ylabel('Loss'), and plt.title('Training Loss') lines set the labels and title for the plot.
  • Finally, plt.show() displays the plot, allowing you to visually analyze how the loss decreases (or converges) over the training epochs.
# Plot the loss curve
plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.show()

Typically, you would expect to see a decreasing trend in the loss curve, indicating that the model is learning and improving over time.
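
If the loss drops sharply in the first few epochs and then flattens, a logarithmic y-axis can make the later, smaller improvements easier to see. The variation below is optional and assumes the same losses list as above.

# Optional: plot the same loss curve on a logarithmic y-axis
plt.plot(losses)
plt.yscale('log')
plt.xlabel('Epoch')
plt.ylabel('Loss (log scale)')
plt.title('Training Loss (log scale)')
plt.show()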

Complete Code:

Python3
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt

# Sample data
X = torch.randn(100, 1)  # Sample features
y = 3 * X + 2 + torch.randn(100, 1)  # Sample labels with noise

# Define a simple linear regression model
class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # One input feature, one output

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()

# Define loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Training loop
num_epochs = 100
losses = []
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(X)
    loss = criterion(outputs, y)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print progress
    if (epoch+1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

    # Store loss for plotting
    losses.append(loss.item())
    
# Plot the loss curve
plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.show()
   

Output:

Visualizing loss at each epoch using matplotlib.

The output graph shows how the training loss changes over the epochs. This visualization makes it easy to see how the model reduces its loss as it is trained. The Matplotlib plot can also include axis labels, a title, and optionally markers or lines highlighting specific events, such as the minimum achieved loss or sharp drops in the loss, as shown in the sketch below.
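
As an example of such an annotation, the sketch below marks the epoch with the lowest recorded loss; the marker color and label text are arbitrary choices, not part of the original code.

# Optional: highlight the epoch with the lowest recorded loss
best_epoch = losses.index(min(losses))
plt.plot(losses, label='Training loss')
plt.scatter(best_epoch, losses[best_epoch], color='red',
            label=f'Minimum loss at epoch {best_epoch + 1}')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.legend()
plt.show()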

How to visualize training progress in PyTorch?

In deep learning, understanding how a model learns and progresses during training is vital for optimizing performance and diagnosing problems such as underfitting or overfitting. Visualizing training progress offers valuable insights into the dynamics of learning and helps us make sound decisions. In this article, we learn how to visualize training progress in PyTorch.

Two common methods for visualizing training progress are:

  • Using Matplotlib
  • Using TensorBoard
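
As a minimal sketch of the second approach, the same per-epoch loss values can be logged with the SummaryWriter class from torch.utils.tensorboard and then viewed by running tensorboard --logdir=runs; this assumes the tensorboard package is installed, and the tag name 'Loss/train' is an arbitrary choice.

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()                       # logs to ./runs by default
for epoch, loss_value in enumerate(losses):
    writer.add_scalar('Loss/train', loss_value, epoch)
writer.close()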
