Data Fitting with Newton’s Method
Suppose we have some data points in the form of (x, y), and we want to fit a line of the form y = mx + b to these points, where m is the slope and b is the y-intercept. We can use Newton’s method to minimize the sum of squared errors between the observed y-values and the predicted y-values from our model.
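Concretely, for n data points (x_i, y_i), the loss, gradient, and Hessian used in the implementation below are:

```latex
L(m, b) = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - (m x_i + b) \right)^2

\frac{\partial L}{\partial m} = -\frac{2}{n} \sum_{i=1}^{n} x_i \left( y_i - (m x_i + b) \right),
\qquad
\frac{\partial L}{\partial b} = -\frac{2}{n} \sum_{i=1}^{n} \left( y_i - (m x_i + b) \right)

H = \begin{pmatrix} \frac{2}{n} \sum_i x_i^2 & \frac{2}{n} \sum_i x_i \\[4pt] \frac{2}{n} \sum_i x_i & 2 \end{pmatrix},
\qquad
\begin{pmatrix} m \\ b \end{pmatrix} \leftarrow \begin{pmatrix} m \\ b \end{pmatrix} - H^{-1} \nabla L(m, b)
```

Because L is quadratic in m and b, the Hessian is constant, and a single Newton step from any starting point already lands on the minimizer.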
For the implementation, we generate some random sample data, find the optimal parameters with Newton's method, and use the Matplotlib library to plot the fitted line. Here is the Python implementation:
import numpy as np
import matplotlib.pyplot as plt

# Generate some sample data
np.random.seed(0)
x = np.linspace(0, 10, 20)
y = 2 * x + 1 + np.random.normal(0, 1, 20)

# Define the model: y = mx + b
def model(x, m, b):
    return m * x + b

# Define the loss function: Mean Squared Error
def loss_function(params):
    m, b = params
    y_pred = model(x, m, b)
    return np.mean((y - y_pred) ** 2)

# Define the gradient of the loss function
def gradient(params):
    m, b = params
    grad_m = -2 * np.mean(x * (y - model(x, m, b)))
    grad_b = -2 * np.mean(y - model(x, m, b))
    return np.array([grad_m, grad_b])

# Define the Hessian matrix of the loss function
# (constant here, since the MSE loss is quadratic in m and b)
def hessian(params):
    hessian_mm = 2 * np.mean(x ** 2)
    hessian_mb = 2 * np.mean(x)
    hessian_bb = 2.0
    return np.array([[hessian_mm, hessian_mb], [hessian_mb, hessian_bb]])

# Newton's method for optimization
def newtons_method(init_params, max_iterations=100, tolerance=1e-6):
    # Copy so the caller's initial guess is not modified in place
    params = np.array(init_params, dtype=float)
    for i in range(max_iterations):
        grad = gradient(params)
        if np.linalg.norm(grad) < tolerance:
            break
        # Solve H @ step = grad rather than explicitly inverting H
        params -= np.linalg.solve(hessian(params), grad)
    return params

# Initial parameters
initial_params = np.array([0.0, 0.0])

# Run Newton's method to find optimal parameters
optimal_params = newtons_method(initial_params)
print('The optimal parameters are:', optimal_params)

# Plot the data points
plt.scatter(x, y, label='Data')

# Plot the fitted line
plt.plot(x, model(x, *optimal_params), color='red', label='Fitted Line')
plt.xlabel('X')
plt.ylabel('Y')
plt.title("Line Fitting with Newton's Method")
plt.legend()
plt.grid(True)
plt.show()
Output:
The optimal parameters are: [1.88627741 2.13794752]
The plot displays the data points along with the fitted line obtained using Newton's method.
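Since the MSE loss is quadratic, one Newton step from any starting point should coincide exactly with the ordinary least-squares solution. A minimal sketch of that check, reusing the same synthetic data as above and comparing against NumPy's closed-form fit:

```python
import numpy as np

# Same synthetic data as in the example above
np.random.seed(0)
x = np.linspace(0, 10, 20)
y = 2 * x + 1 + np.random.normal(0, 1, 20)

# Gradient of the MSE loss at (m, b)
def gradient(params):
    m, b = params
    r = y - (m * x + b)
    return np.array([-2 * np.mean(x * r), -2 * np.mean(r)])

# Constant Hessian of the quadratic loss
H = np.array([[2 * np.mean(x ** 2), 2 * np.mean(x)],
              [2 * np.mean(x), 2.0]])

# A single Newton step starting from the origin
params = np.zeros(2) - np.linalg.solve(H, gradient(np.zeros(2)))

# Closed-form least-squares fit for comparison (slope first, then intercept)
m_ls, b_ls = np.polyfit(x, y, 1)

print(params)        # Newton result after one step
print([m_ls, b_ls])  # agrees to machine precision
```

The agreement illustrates why Newton's method needs only one iteration here: the quadratic model it builds at each step is the loss itself.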
Newton’s method in Machine Learning
Optimization algorithms are essential tools across many fields, from engineering and computer science to economics and physics. Among these algorithms, Newton's method holds a significant place because of its efficiency and effectiveness in finding the roots of equations and optimizing functions. In this article we take a closer look at Newton's method and its use in machine learning.
Table of Contents
- Newton’s Method for Optimization
- Second-Order Approximation
- Newton’s Method for Finding Local Minima or Maxima in Python
- Convergence Properties of Newton’s Method
- Complexity of Newton’s Method
- Time Complexity of Newton’s Method
- Parameter Estimation in Logistic Regression using Newton’s Method
- Data Fitting with Newton’s Method
- Newton’s Method vs Other Optimization Algorithms
- Applications of Newton’s Method