SciPy provides a somewhat generic function (based on the Levenberg-Marquardt algorithm) through scipy.optimize.curve_fit to fit a given (Python) function to a given data set. The assumption is that we have been given a set of data points $x_1, x_2, \ldots, x_N$ with corresponding function values $y_i$, and a dependence of $y_i$ on $x_i$ such that:

$y_i = f(x_i, \vec{p})$

We want to determine the parameter vector $\vec{p} = (p_1, p_2, \ldots, p_k)$ so that $r$, the sum of the squared residuals, is as small as possible:

$r = \sum_{i=1}^{N} \left( y_i - f(x_i, \vec{p}) \right)^2$

Curve fitting is of particular use if the data is noisy: for a given $x_i$ and $y_i = f(x_i, \vec{p})$ we have an (unknown) error term $\epsilon_i$ so that $y_i = f(x_i, \vec{p}) + \epsilon_i$.

We use the following example to clarify this:

$f(x, \vec{p}) = a \exp(-b x) + c$, i.e. $\vec{p} = (a, b, c)$
See the following program:
from scipy.optimize import curve_fit
import numpy as np
import pylab

def f(x, a, b, c):
    """Fit function y = f(x, p) with parameters p = (a, b, c)."""
    return a * np.exp(-b * x) + c

# create fake data
x = np.linspace(0, 4, 50)
y = f(x, a=2.5, b=1.3, c=0.5)
# add noise
yi = y + 0.2 * np.random.normal(size=len(x))

# call curve fit function
popt, pcov = curve_fit(f, x, yi)
a, b, c = popt
print("Optimal parameters are a=%g, b=%g, and c=%g" % (a, b, c))

# plotting
yfitted = f(x, *popt)  # equivalent to f(x, popt[0], popt[1], popt[2])
pylab.plot(x, yi, 'o', label='data $y_i$')
pylab.plot(x, yfitted, '-', label='fit $f(x_i)$')
pylab.xlabel('x')
pylab.legend()
pylab.show()  # display the figure when run as a script
In the program above we define the fit function y = f(x, p) through Python code. We can thus fit (nearly) arbitrary functions using the curve_fit method.
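To illustrate this flexibility, here is a minimal sketch fitting a completely different model; the function g, its parameters and the synthetic data are illustrative assumptions, not part of the example above:

import numpy as np
from scipy.optimize import curve_fit

def g(x, amp, freq):
    """Hypothetical model: a sine wave with amplitude amp and angular frequency freq."""
    return amp * np.sin(freq * x)

xdata = np.linspace(0, 10, 100)
ydata = g(xdata, amp=1.5, freq=2.0) + 0.1 * np.random.normal(size=len(xdata))

# oscillatory models usually need a reasonable starting frequency
popt, pcov = curve_fit(g, xdata, ydata, p0=(1.0, 2.0))
print("amp=%g, freq=%g" % tuple(popt))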
The curve_fit function returns a tuple popt, pcov. The first entry popt contains the OPTimal Parameters. The second entry contains the covariance matrix for all parameters; its diagonal provides the variance of each parameter estimate.
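For example, a common idiom (not shown in the program above) is to turn those diagonal entries into one-standard-deviation error estimates for the fitted parameters:

import numpy as np

# one-standard-deviation uncertainty of each fitted parameter:
# the square root of the variances on the diagonal of pcov
perr = np.sqrt(np.diag(pcov))
print("sigma_a=%g, sigma_b=%g, sigma_c=%g" % tuple(perr))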
For the curve fitting process to work, the Levenberg-Marquardt algorithm needs to start the fitting process with initial guesses for the final parameters. If these are not specified (as in the example above), the value "1.0" is used for every initial guess.
If the algorithm fails to fit a function to the data (even though the function describes the data reasonably well), we need to give the algorithm better estimates for the initial parameters. For the example shown above, we could pass the estimates to the curve_fit function by changing the line
popt, pcov = curve_fit(f, x, yi)
to
popt, pcov = curve_fit(f, x, yi, p0=(2, 1, 0.6))
if our initial guesses were a = 2, b = 1 and c = 0.6. Once we get the algorithm "roughly into the right area" in parameter space, the fitting usually works well.
Plotting the noisy data together with the fit, we get the figure produced by the program above: the noisy data $y_i$ as dots and the fitted curve $f(x_i)$ as a solid line. The particular fitted parameters for this run were a=2.3669, b=1.06078, and c=0.464289.
Here I am ending today's post; I will catch up with you in another one. Till we meet again, keep practicing and learning Python, as Python is easy to learn!