Optimization (finding minima or maxima of a function) is a large field in mathematics. Often we need to find the maximum or minimum of a particular function f(x) where f is a scalar function but x could be a vector. Typical applications are the minimization of entities such as cost, risk and error, or the maximization of productivity, efficiency and profit. Optimization routines typically provide a method to minimize a given function: if we need to maximize f(x) we create a new function g(x) that reverses the sign of f, i.e. g(x) = - f(x) and we minimize g(x).
To use the optimization module in SciPy, first import the optimize module:
from scipy import optimize
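As a quick illustration of the sign-reversal trick mentioned above, here is a minimal sketch; the function used is a hypothetical example with its maximum at x = 1, not one used later in this post:

from scipy import optimize

def f(x):
    # hypothetical function we want to maximize; its maximum is at x = 1
    return -(x - 1)**2

def g(x):
    # reverse the sign so that minimizing g maximizes f
    return -f(x)

x_max = optimize.fmin(g, 0.0)  # start the search at x = 0
print(x_max)  # approximately [1.]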
1. Finding minima of a function
Let's make a program to find the minima of a simple function of a single variable. See the code below:
from scipy import optimize
from numpy import *
from matplotlib import pyplot as plt

def f(x):
    return 4*x**3 + (x-2)**2 + x**4

# plot the function so we can see where its minima lie
fig, ax = plt.subplots()
x = linspace(-5, 3, 100)
ax.plot(x, f(x))
plt.show()

# search for a local minimum, starting from x = -2
x_min = optimize.fmin_bfgs(f, -2)
print("x_min is:")
print(x_min)
The program first plots a graph of the function and then calculates the minimum, as shown below:
Optimization terminated successfully.
Current function value: -3.506641
Iterations: 5
Function evaluations: 24
Gradient evaluations: 8
x_min is:
[-2.67298151]
------------------
(program exited with code: 0)
Press any key to continue . . .
In our program we use the fmin_bfgs function to find a (local) minimum of the function: x_min = optimize.fmin_bfgs(f, -2). The second argument, -2, is the starting point for the search.
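BFGS is a gradient-based method; by default it estimates the gradient of f numerically, but if we know the derivative we can pass it via the fprime argument, which typically reduces the number of function evaluations. A minimal sketch, reusing f from the program above together with its analytic derivative:

def fprime(x):
    # analytic derivative of f(x) = 4*x**3 + (x-2)**2 + x**4
    return 12*x**2 + 2*(x - 2) + 4*x**3

x_min = optimize.fmin_bfgs(f, -2, fprime=fprime)
print(x_min)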
We can also use the brent or fminbound functions. They have slightly different call signatures and use different algorithms, as shown in the following program:
from scipy import optimize

def f(x):
    return 4*x**3 + (x-2)**2 + x**4

# brent searches from a default starting bracket
print(optimize.brent(f))
print('\n')
# fminbound restricts the search to the interval [-4, 2]
print(optimize.fminbound(f, -4, 2))
The output of the program is shown below. Note that brent, searching from its default bracket, converges to the local minimum near x ≈ 0.47, while fminbound, restricted to the interval [-4, 2], finds the deeper minimum near x ≈ -2.67:
0.46961743402759754
-2.6729822917513886
------------------
(program exited with code: 0)
Press any key to continue . . .
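Newer SciPy versions expose the same algorithms through optimize.minimize_scalar, which may be preferable in new code. A minimal sketch reusing f from above (this is an alternative interface, not used in the original program):

from scipy import optimize

def f(x):
    return 4*x**3 + (x-2)**2 + x**4

res = optimize.minimize_scalar(f)  # Brent's method is the default
print(res.x)  # local minimum near 0.47
res = optimize.minimize_scalar(f, bounds=(-4, 2), method='bounded')
print(res.x)  # bounded search finds the minimum near -2.67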
Next, we provide an example showing (i) the definition of the test function and (ii) the call of the scipy.optimize.fmin_bfgs function, which takes as arguments a function f to minimize and an initial value x0 from which to start the search, and which returns the value of x for which f(x) is (locally) minimized. Typically the search for the minimum is a local search, i.e. the algorithm follows the local gradient. We repeat the search for two starting values (x0 = 1.0 and x0 = 2.0, respectively) to demonstrate that, depending on the starting value, we may find different minima of the function f.
The majority of the commands in the program shown below (those after the two calls to fmin_bfgs) create the plot of the function, the start points for the searches and the minima obtained:
from scipy import optimize
from numpy import arange, cos, exp
import pylab

def f(x):
    return cos(x) - 3 * exp(-(x - 0.2)**2)

# find minima of f(x), starting from 1.0 and 2.0 respectively
minimum1 = optimize.fmin_bfgs(f, 1.0)
print("Start search at x=1., minimum is")
print(minimum1)
minimum2 = optimize.fmin_bfgs(f, 2.0)
print("Start search at x=2., minimum is")
print(minimum2)

# plot the function
x = arange(-10, 10, 0.1)
y = f(x)
pylab.plot(x, y, label=r'$\cos(x)-3e^{-(x-0.2)^2}$')
pylab.xlabel('x')
pylab.grid()
pylab.axis([-5, 5, -2.2, 0.5])

# mark the two minima and the two start points
pylab.plot(minimum1, f(minimum1), 'vr', label='minimum 1')
pylab.plot(1.0, f(1.0), 'or', label='start 1')
pylab.plot(minimum2, f(minimum2), 'vg', label='minimum 2')
pylab.plot(2.0, f(2.0), 'og', label='start 2')
pylab.legend(loc='lower left')
pylab.show()
Calling fmin_bfgs produces some diagnostic output in addition to our own print statements, as can be seen in the following output window:
Optimization terminated successfully.
Current function value: -2.023866
Iterations: 5
Function evaluations: 18
Gradient evaluations: 6
Start search at x=1., minimum is
[0.23961728]
Optimization terminated successfully.
Current function value: -1.000529
Iterations: 3
Function evaluations: 21
Gradient evaluations: 7
Start search at x=2., minimum is
[3.13845708]
------------------
(program exited with code: 0)
Press any key to continue . . .
The return value from fmin_bfgs is a NumPy array which (for the example above) contains only one number, since we have only one parameter (here x) to vary. In general, fmin_bfgs, like the related fmin function, can be used to find the minimum in a higher-dimensional parameter space if there are several parameters. In that case the NumPy array contains the parameter values that minimise the objective function. The objective function f(x) has to return a scalar even if there are several parameters, i.e. even if x is a vector.
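As an illustration, here is a minimal sketch of a two-parameter minimization; the quadratic bowl used here is a hypothetical example function with its minimum at (1, 2):

from scipy import optimize

def f(p):
    # p is a vector of two parameters; f must return a scalar
    x, y = p
    return (x - 1)**2 + (y - 2)**2

p_min = optimize.fmin(f, [0.0, 0.0])  # start the search at the origin
print(p_min)  # approximately [1. 2.]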
2. Finding the roots of a function
To find a root of an equation of the form f(x) = 0 we can use the fsolve function, which requires an initial guess. Let's write a program which defines the function and then plots its graph:
from scipy import optimize
from numpy import *
from matplotlib import pyplot as plt

omega_c = 3.0

def f(omega):
    # a transcendental equation: resonance frequencies of a low-Q SQUID
    # terminated microwave resonator
    return tan(2*pi*omega) - omega_c/omega

fig, ax = plt.subplots(figsize=(10, 4))
x = linspace(0, 3, 1000)
y = f(x)
mask = where(abs(y) > 50)
x[mask] = y[mask] = nan  # remove the vertical lines where the function flips sign
ax.plot(x, y)
ax.plot([0, 3], [0, 0], 'k')
ax.set_ylim(-5, 5)
plt.show()
The output of the program plots the following graph:
Now we find roots of the function using optimize.fsolve() with different initial guesses, as shown below (the second and third calls use the same guess, so they return the same root):
print(optimize.fsolve(f, 0.1))
print('\n')
print(optimize.fsolve(f, 0.6))
print('\n')
print(optimize.fsolve(f, 0.6))
The output of the program is shown below:
[0.23743014]
[0.71286972]
[0.71286972]
------------------
(program exited with code: 0)
Press any key to continue . . .
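fsolve converges to whichever root lies closest to the initial guess, so distant roots can easily be missed. When we can bracket a root, i.e. find an interval on which f is continuous and changes sign, optimize.brentq is a robust alternative. A minimal sketch, reusing f from the program above; the bracket endpoints are assumptions read off the graph, chosen to avoid the singularities of tan at omega = 0.25 and 0.75:

# each bracket [a, b] must satisfy f(a)*f(b) < 0 with f continuous on [a, b]
print(optimize.brentq(f, 0.20, 0.24))  # root near 0.237
print(optimize.brentq(f, 0.60, 0.74))  # root near 0.713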
Here I am ending this post on optimization with SciPy. The next post will discuss interpolation. Till we meet next, keep learning and practicing Python, as Python is easy to learn!