Mathematical implementation of Univariate Optimization
min f(x) w.r.t x
Given f(x) = 3x^4 - 4x^3 - 12x^2 + 3
According to the first-order necessary condition, a minimum can occur only at a critical point, i.e., a point where the derivative of the function is zero or undefined.
Taking the derivative with respect to x:
f'(x) = 12x^3 - 12x^2 - 24x
Setting f'(x) = 0 gives the equation for the critical points:
12x(x^2 - x - 2) = 0
The factor x = 0 gives one critical point directly; the remaining quadratic equation is:
x^2 - x - 2 = 0
Factoring it as (x - 2)(x + 1) = 0, the solutions of the quadratic equation are:
x = -1 and x = 2
So we have three critical points: x = 0, x = -1, and x = 2. Now we need to analyze the nature of these critical points to determine which of them correspond to a minimum.
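As a quick sanity check, the critical points can be verified numerically with a few lines of Python (a sketch; the polynomial and its derivative are transcribed by hand from the example above):

```python
def f(x):
    # The objective function from the example: 3x^4 - 4x^3 - 12x^2 + 3
    return 3 * x**4 - 4 * x**3 - 12 * x**2 + 3

def f_prime(x):
    # Its first derivative: 12x^3 - 12x^2 - 24x
    return 12 * x**3 - 12 * x**2 - 24 * x

# The derivative should vanish at every critical point.
for x in (-1, 0, 2):
    print(f"f'({x}) = {f_prime(x)}")  # each evaluates to 0
```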
To do this, we evaluate the second derivative of f(x):
f''(x) = 36x^2 - 24x - 24
Now we want to know which of these 3 values of x are actually minimizers. To do so we use the second-order sufficiency condition, which states that a critical point x* is a local minimizer if:
f''(x*) > 0
Putting each value of x in the above equation:
f''(x) | x = 0 = -24 < 0 (does not satisfy the sufficiency condition; x = 0 is a local maximum)
f''(x) | x = -1 = 36 > 0 (satisfies the sufficiency condition)
f''(x) | x = 2 = 72 > 0 (satisfies the sufficiency condition)
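The second-derivative test above can also be checked numerically (a sketch; f'' is hand-coded from the formula derived above):

```python
def f_second(x):
    # Second derivative of f: 36x^2 - 24x - 24
    return 36 * x**2 - 24 * x - 24

# Evaluate f'' at each critical point to classify it:
# a positive value indicates a local minimum, a negative value a local maximum.
for x in (0, -1, 2):
    print(f"f''({x}) = {f_second(x)}")
```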
Hence x = -1 and x = 2 are the actual minimizers of f(x). Evaluating f at these two points:
f(x) | x = -1 = -2
f(x) | x = 2 = -29
Since f(2) = -29 is the smaller of the two values, x = 2 is the global minimizer of f(x).
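Finally, comparing the function values at the two local minimizers picks out the global one (a sketch; f is the example polynomial again):

```python
def f(x):
    # The objective function from the example: 3x^4 - 4x^3 - 12x^2 + 3
    return 3 * x**4 - 4 * x**3 - 12 * x**2 + 3

# Candidate minimizers found by the second-order test, with their values.
candidates = {-1: f(-1), 2: f(2)}
print(candidates)  # {-1: -2, 2: -29}

# The global minimizer is the candidate with the smallest function value.
global_min = min(candidates, key=candidates.get)
print(global_min)  # 2
```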
Uni-variate Optimization – Data Science
Optimization is an important part of any data science project: with its help we try to find the parameters of a machine learning model that give the minimum loss value. There are several ways of minimizing a loss function, but in practice variations of the gradient method are most commonly used. In this article, we discuss univariate optimization.