Newton’s Method for Finding Local Minima or Maxima in Python

In this section, we demonstrate how to use Newton’s method for optimization in Python.

The following code finds a local minimum of f(x) using the following components:

  1. f(x): The function to be minimized. In this case, it’s x^2 - 4.
  2. numerical_derivative(f, x, h=1e-6): A function to compute the numerical derivative of f at x using a small step size h.
  3. newton_method_optimization(initial_guess, f, tolerance=1e-6, max_iterations=100): The main function implementing Newton’s method. It takes an initial guess for the minimum (initial_guess), the function to minimize (f), a tolerance level for convergence (tolerance), and a maximum number of iterations (max_iterations).
  4. initial_guess: A random initial guess for the minimum, generated using random.uniform(-10, 10).
  5. optimal_solution: The result of running Newton’s method to find the optimal solution.
  6. Printing the result: If an optimal solution is found, it prints the rounded optimal solution and the value of the function at that optimal solution.
Python3

import random


def f(x):
    return x**2 - 4


def numerical_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h


def newton_method_optimization(initial_guess, f, tolerance=1e-6, max_iterations=100):
    x = initial_guess
    for iteration in range(max_iterations):
        # Compute the first derivative at the current point
        f_prime_x = numerical_derivative(f, x)

        # Compute the second derivative
        f_double_prime_x = numerical_derivative(lambda x: numerical_derivative(f, x), x)

        if f_double_prime_x == 0:
            print("Error: Division by zero or small derivative.")
            return None

        # Compute the update using Newton's method
        delta_x = -f_prime_x / f_double_prime_x

        # Update x
        x += delta_x

        # Check for convergence
        if abs(delta_x) < tolerance:
            print("Converged after", iteration + 1, "iterations.")
            return x

    print("Did not converge within", max_iterations, "iterations.")
    return None


# Initial guess
initial_guess = random.uniform(-10, 10)

# Run Newton's method
optimal_solution = newton_method_optimization(initial_guess, f)

# Print the optimal solution rounded to three decimal places
if optimal_solution is not None:
    optimal_solution_rounded = round(optimal_solution, 3)
    function_value_rounded = round(f(optimal_solution), 3)
    print("Optimal solution:", optimal_solution_rounded)
    print("Value of the function at optimal solution:", function_value_rounded)

Output:

Converged after 3 iterations.
Optimal solution: -0.0
Value of the function at optimal solution: -4.0

In the above implementation, we apply Newton’s method to find the minimum of the function f(x) = x^2 - 4. The convergence criteria are controlled by max_iterations = 100 and a tolerance level of 10^-6.
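One caveat worth noting: the forward-difference formula above has O(h) error, and nesting it to get the second derivative amplifies rounding noise. A common alternative, shown here as an illustrative variant (not part of the original listing), is to use central differences, which have O(h^2) error and a standard three-point stencil for the second derivative:

```python
def f(x):
    return x**2 - 4

def central_first(f, x, h=1e-5):
    # Central difference: O(h^2) truncation error vs O(h) for forward difference
    return (f(x + h) - f(x - h)) / (2 * h)

def central_second(f, x, h=1e-5):
    # Three-point stencil for the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h ** 2)

x = 3.0
for _ in range(50):
    step = -central_first(f, x) / central_second(f, x)
    x += step
    if abs(step) < 1e-6:
        break

# x is now very close to 0, the minimizer of x**2 - 4
print("converged to x =", x, "f(x) =", f(x))
```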

Newton’s method in Machine Learning

Optimization algorithms are essential tools across many fields, ranging from engineering and computer science to economics and physics. Among these algorithms, Newton’s method holds a significant place due to its efficiency and effectiveness in finding the roots of equations and optimizing functions. In this article, we study Newton’s method and its use in machine learning.

Table of Content

  • Newton’s Method for Optimization
  • Second-Order Approximation
  • Newton’s Method for Finding Local Minima or Maxima in Python
  • Convergence Properties of Newton’s Method
  • Complexity of Newton’s Method
  • Time Complexity of Newton’s Method
  • Parameter Estimation in Logistic Regression using Newton’s Method
  • Data Fitting with Newton’s Method
  • Newton’s Method vs Other Optimization Algorithms
  • Applications of Newton’s Method

Newton’s Method for Optimization

Newton’s method can be extended to solve optimization problems by finding the minima or maxima of a real-valued function f(x). The goal of optimization is to find the value of x that minimizes or maximizes the function f(x). We are interested in finding critical points of the function where the first derivative is zero (for minima or maxima). Newton’s method utilizes the first and second derivatives of the function to iteratively refine the solution....
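Concretely, the optimization form of the method applies root-finding Newton iteration to f'(x), updating x_{n+1} = x_n - f'(x_n)/f''(x_n). A minimal sketch with analytic derivatives (the function f(x) = x^4 - 3x^2 + 2 and its derivatives here are illustrative, not from the article):

```python
def newton_step(x, f_prime, f_double_prime):
    # One Newton update: jump to the stationary point of the local quadratic model
    return x - f_prime(x) / f_double_prime(x)

# Illustrative example: f(x) = x**4 - 3*x**2 + 2
f_prime = lambda x: 4 * x**3 - 6 * x
f_double_prime = lambda x: 12 * x**2 - 6

x = 2.0
for _ in range(20):
    x_new = newton_step(x, f_prime, f_double_prime)
    if abs(x_new - x) < 1e-10:
        break
    x = x_new

print(round(x, 6))  # ≈ 1.224745, i.e. sqrt(3/2), a local minimum
```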

Second-Order Approximation

We begin our derivation by considering the second-order approximation of the function f(x) at a point x = x_n, which is given by:...
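The excerpt is truncated here; the approximation referred to is the standard second-order Taylor expansion (assuming f is twice differentiable):

```latex
f(x) \approx f(x_n) + f'(x_n)\,(x - x_n) + \tfrac{1}{2} f''(x_n)\,(x - x_n)^2
```

Setting the derivative of the right-hand side with respect to x to zero gives the Newton update x_{n+1} = x_n - f'(x_n)/f''(x_n).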

Newton’s Method for Finding Local Minima or Maxima in Python

In this section, we are going to demonstrate how to use Newton’s method for optimization using python....

Convergence Properties of Newton’s Method

Newton’s method converges quadratically, meaning that with each iteration the number of correct digits approximately doubles. Its convergence may be affected by several factors:...
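The digit-doubling behavior is easy to observe numerically. The following sketch (an illustrative example, not from the article) minimizes f(x) = x - ln(x), whose unique minimum is at x = 1; the error roughly squares at every step:

```python
import math

# Minimize f(x) = x - ln(x); the unique minimum is at x = 1.
# Newton update: x <- x - f'(x)/f''(x), with f'(x) = 1 - 1/x and f''(x) = 1/x**2
x = 1.5
errors = []
for i in range(6):
    x = x - (1 - 1 / x) / (1 / x**2)
    errors.append(abs(x - 1))
    print("iteration", i + 1, "error", errors[-1])
# The error sequence roughly squares each step: 0.25, 0.0625, 0.0039, ...
```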

Complexity of Newton’s Method

Newton’s method has a favorable convergence rate, but its per-iteration cost can be higher than that of methods like gradient descent. The reasons include:...

Time Complexity of Newton’s Method

  • Computing the gradient and Hessian: Computing the gradient typically requires O(n) operations for a function with n variables, and computing the Hessian requires O(n^2) operations. However, if the Hessian has a specific structure (e.g., it is sparse), specialized algorithms can reduce this complexity.
  • Solving a linear system: Each iteration of Newton’s method involves solving a linear system, usually by methods such as Gaussian elimination, LU decomposition, or iterative solvers like the conjugate gradient method. Solving a dense linear system typically requires O(n^3) operations; if the Hessian is sparse, specialized sparse solvers can reduce the complexity significantly.
  • Number of iterations: The number of iterations required for convergence depends on factors such as the chosen optimization tolerance, the curvature of the function, and the choice of initial guess. Under ideal conditions, Newton’s method converges quadratically....
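For intuition, here is a sketch of one multivariate Newton step in two variables, solving H d = -g directly (the quadratic objective and the 2x2 Cramer solve are illustrative; real implementations use a general linear solver):

```python
def grad(x, y):
    # Gradient of the illustrative quadratic f(x, y) = x^2 + x*y + 2*y^2 - 3*x
    return (2 * x + y - 3, x + 4 * y)

def hessian(x, y):
    # Constant Hessian for this quadratic
    return ((2, 1), (1, 4))

def newton_step(x, y):
    gx, gy = grad(x, y)
    (a, b), (c, d) = hessian(x, y)
    # Solve H [dx, dy] = [-gx, -gy]; O(n^3) in general, Cramer's rule for 2x2
    det = a * d - b * c
    dx = (-gx * d + b * gy) / det
    dy = (-a * gy + c * gx) / det
    return x + dx, y + dy

x, y = newton_step(0.0, 0.0)
print(round(x, 4), round(y, 4))  # → 1.7143 -0.4286
```

Because the objective is quadratic, a single Newton step lands exactly on the minimizer (12/7, -3/7).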

Parameter Estimation in Logistic Regression using Newton’s Method

Parameter Estimation in Logistic Regression...
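The excerpt is truncated here; as an illustration of the idea (this code and the toy data are my own sketch, not the article’s), one-feature logistic regression with an intercept can be fit by Newton’s method, repeatedly solving the 2x2 system built from the gradient and Hessian of the negative log-likelihood:

```python
import math

# Toy, non-separable data: label tends to be 1 for larger x (illustrative)
X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
Y = [0, 0, 1, 0, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = 0.0, 0.0  # intercept and slope
for _ in range(15):
    # Gradient and Hessian of the negative log-likelihood
    g0 = g1 = 0.0
    h00 = h01 = h11 = 0.0
    for x, y in zip(X, Y):
        p = sigmoid(b0 + b1 * x)
        g0 += p - y
        g1 += (p - y) * x
        w = p * (1 - p)
        h00 += w
        h01 += w * x
        h11 += w * x * x
    # Newton step: beta <- beta - H^{-1} g (2x2 solve via Cramer's rule)
    det = h00 * h11 - h01 * h01
    b0 -= (h11 * g0 - h01 * g1) / det
    b1 -= (h00 * g1 - h01 * g0) / det

# Fitted model should separate small x from large x probabilistically
print(sigmoid(b0 + b1 * 1.0) < 0.5, sigmoid(b0 + b1 * 3.5) > 0.5)
```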

Data Fitting with Newton’s Method

Suppose we have some data points in the form of (x, y), and we want to fit a line of the form y = mx + b to these points, where m is the slope and b is the y-intercept. We can use Newton’s method to minimize the sum of squared errors between the observed y-values and the predicted y-values from our model....
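Because the sum of squared errors is quadratic in m and b, Newton’s method reaches the least-squares line in a single step, which is exactly the normal equations. A sketch with illustrative data (points lying near y = 2x + 1; not the article’s dataset):

```python
# Illustrative data near y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)

m, b = 0.0, 0.0
# Gradient of SSE(m, b) = sum (m*x + b - y)^2 at the starting point
gm = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys))
gb = sum(2 * (m * x + b - y) for x, y in zip(xs, ys))
# Constant Hessian of a quadratic: H = 2 * [[sum x^2, sum x], [sum x, n]]
h11, h12, h22 = 2 * sxx, 2 * sx, 2 * n
# One Newton step: solve H [dm, db] = -[gm, gb] by Cramer's rule
det = h11 * h22 - h12 * h12
m += (-gm * h22 + gb * h12) / det
b += (-h11 * gb + h12 * gm) / det

print(round(m, 3), round(b, 3))  # → 1.96 1.1
```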

Newton’s Method vs Other Optimization Algorithms

Now, we compare Newton’s Method with some other popular optimization algorithms....
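One concrete comparison (an illustrative sketch, not from the article): on a smooth convex function, Newton’s method typically needs far fewer iterations than fixed-step gradient descent to reach the same tolerance, at the price of computing second derivatives:

```python
import math

# f(x) = x^2 + exp(x); f'(x) = 2x + exp(x), f''(x) = 2 + exp(x)
fp = lambda x: 2 * x + math.exp(x)
fpp = lambda x: 2 + math.exp(x)

def count_newton(x, tol=1e-8):
    # Iterations until the Newton step is below tol
    for i in range(1, 100):
        step = -fp(x) / fpp(x)
        x += step
        if abs(step) < tol:
            return i
    return None

def count_gd(x, lr=0.1, tol=1e-8):
    # Iterations until the gradient-descent step is below tol
    for i in range(1, 100000):
        step = -lr * fp(x)
        x += step
        if abs(step) < tol:
            return i
    return None

print("Newton:", count_newton(1.0), "iterations; GD:", count_gd(1.0), "iterations")
```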

Applications of Newton’s Method

  • Root finding: Newton’s method can be used to find the roots of equations in various engineering and scientific applications, such as solving nonlinear equations and systems of equations.
  • Optimization in machine learning: Newton’s method can optimize parameters in machine learning algorithms, such as logistic regression, support vector machines (SVMs), and Gaussian mixture models (GMMs).
  • Computer graphics: Newton’s method is used in computer graphics for tasks such as finding intersections of curves and surfaces, ray tracing, and solving geometric problems.
  • Signal processing: It is used in signal processing for tasks like system identification, filter design, and spectral analysis.
  • Image processing: It is used in image processing for tasks like image registration, image reconstruction, and image segmentation....
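The root-finding use is the classic one. A minimal sketch (illustrative helper names; computes sqrt(2) as the positive root of g(x) = x^2 - 2):

```python
def newton_root(g, g_prime, x, tol=1e-12, max_iter=50):
    # Root-finding form of Newton's method: x <- x - g(x)/g'(x)
    for _ in range(max_iter):
        step = g(x) / g_prime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Positive root of g(x) = x^2 - 2, i.e. sqrt(2)
r = newton_root(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
print(round(r, 6))  # → 1.414214
```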
