Newton’s Method for Finding Local Minima or Maxima in Python
In this section, we demonstrate how to use Newton's method for optimization in Python.
The following code finds a local minimum of f(x) using these components:
- f(x): the function to be minimized; in this case, x^2 - 4.
- numerical_derivative(f, x, h=1e-6): a helper that computes the numerical derivative of f at x using a small step size h.
- newton_method_optimization(initial_guess, f, tolerance=1e-6, max_iterations=100): the main function implementing Newton's method. It takes an initial guess for the minimum (initial_guess), the function to minimize (f), a tolerance level for convergence (tolerance), and a maximum number of iterations (max_iterations).
- initial_guess: a random initial guess for the minimum, generated using random.uniform(-10, 10).
- optimal_solution: the result of running Newton's method to find the optimal solution.
- Printing the result: if an optimal solution is found, the script prints the rounded optimal solution and the value of the function at that point.
import random

def f(x):
    # Function to minimize: f(x) = x^2 - 4
    return x**2 - 4

def numerical_derivative(f, x, h=1e-6):
    # Forward-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

def newton_method_optimization(initial_guess, f, tolerance=1e-6, max_iterations=100):
    x = initial_guess
    for iteration in range(max_iterations):
        # First derivative at the current point
        f_prime_x = numerical_derivative(f, x)
        # Second derivative, obtained by differentiating the first derivative
        f_double_prime_x = numerical_derivative(lambda t: numerical_derivative(f, t), x)
        # Guard against division by zero or a vanishingly small second derivative
        if abs(f_double_prime_x) < 1e-12:
            print("Error: Division by zero or small second derivative.")
            return None
        # Newton update: delta_x = -f'(x) / f''(x)
        delta_x = -f_prime_x / f_double_prime_x
        x += delta_x
        # Converged once the update step is smaller than the tolerance
        if abs(delta_x) < tolerance:
            print("Converged after", iteration + 1, "iterations.")
            return x
    print("Did not converge within", max_iterations, "iterations.")
    return None

# Random initial guess in [-10, 10]
initial_guess = random.uniform(-10, 10)

# Run Newton's method
optimal_solution = newton_method_optimization(initial_guess, f)

# Print the optimal solution rounded to three decimal places
if optimal_solution is not None:
    optimal_solution_rounded = round(optimal_solution, 3)
    function_value_rounded = round(f(optimal_solution), 3)
    print("Optimal solution:", optimal_solution_rounded)
    print("Value of the function at optimal solution:", function_value_rounded)
Output:
Converged after 3 iterations.
Optimal solution: -0.0
Value of the function at optimal solution: -4.0
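As a quick sanity check (a worked step added here for illustration, not part of the original listing): the exact derivatives are f'(x) = 2x and f''(x) = 2, so the Newton update x_{k+1} = x_k - f'(x_k) / f''(x_k) = x_k - x_k = 0 reaches the minimizer in a single exact step from any starting point. The extra iterations reported above come from the finite-difference approximation of the derivatives and from the step-size convergence test, not from the method itself.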
In the above implementation, we apply Newton's Method to find the minimum of the function f(x) = x^2 - 4. The stopping criteria are a maximum of 100 iterations (max_iterations = 100) and a tolerance of 10^-6 on the size of the update step.
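One caveat worth noting: nesting the forward difference to estimate the second derivative uses an effective step of h^2 = 10^-12, which is close to machine precision and can be numerically fragile for less well-behaved functions. A central difference is usually more accurate; the following drop-in variant is a sketch of that alternative, not part of the original listing:

def numerical_derivative(f, x, h=1e-5):
    # Central difference: approximation error shrinks as O(h^2) rather than O(h)
    return (f(x + h) - f(x - h)) / (2 * h)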
Newton’s method in Machine Learning
Optimization algorithms are essential tools across fields ranging from engineering and computer science to economics and physics. Among these algorithms, Newton's method holds a significant place due to its efficiency and effectiveness in finding the roots of equations and optimizing functions. In this article, we study Newton's method and its use in machine learning.
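For concreteness, as a root-finder Newton's method iterates x_{k+1} = x_k - f(x_k) / f'(x_k) until the update step is negligible. The short sketch below is my own illustration with an assumed example function, not code from a later section:

def newton_root(f, f_prime, x0, tolerance=1e-6, max_iterations=100):
    # Newton-Raphson iteration for solving f(x) = 0
    x = x0
    for _ in range(max_iterations):
        fp = f_prime(x)
        if fp == 0:
            return None  # avoid division by zero at a flat point
        step = f(x) / fp
        x -= step
        if abs(step) < tolerance:
            return x
    return None

# Example: the positive root of x^2 - 4, starting from x0 = 3.0
print(newton_root(lambda x: x**2 - 4, lambda x: 2 * x, 3.0))  # ~2.0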
Table of Contents
- Newton’s Method for Optimization
- Second-Order Approximation
- Newton’s Method for Finding Local Minima or Maxima in Python
- Convergence Properties of Newton’s Method
- Complexity of Newton’s Method
- Time Complexity of Newton’s Method
- Parameter Estimation in Logistic Regression using Newton’s Method
- Data Fitting with Newton’s Method
- Newton’s Method vs Other Optimization Algorithms
- Applications of Newton’s Method