lsqcurvefit Cannot Continue: User Function Is Returning Inf or NaN Values
Solve nonlinear curve-fitting (data-fitting) problems in the least-squares sense. That is, given input data xdata and the observed output ydata, find coefficients x that best fit the equation F(x, xdata):

    min_x  (1/2)*||F(x,xdata) - ydata||_2^2  =  (1/2)*sum_i (F(x,xdata_i) - ydata_i)^2

where xdata and ydata are vectors and F(x, xdata) is a vector-valued function.

The function lsqcurvefit uses the same algorithm as lsqnonlin. Its purpose is to provide an interface designed specifically for data-fitting problems.
Syntax
x = lsqcurvefit(fun,x0,xdata,ydata)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options,P1,P2,...)
[x,resnorm] = lsqcurvefit(...)
[x,resnorm,residual] = lsqcurvefit(...)
[x,resnorm,residual,exitflag] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...)
Description
lsqcurvefit solves nonlinear data-fitting problems. lsqcurvefit requires a user-defined function to compute the vector-valued function F(x, xdata). The size of the vector returned by the user-defined function must be the same as the size of ydata.

x = lsqcurvefit(fun,x0,xdata,ydata) starts at x0 and finds coefficients x to best fit the nonlinear function fun(x,xdata) to the data ydata (in the least-squares sense). ydata must be the same size as the vector (or matrix) F returned by fun.

x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub.

x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options) minimizes with the optimization parameters specified in the structure options. Pass empty matrices for lb and ub if no bounds exist.

x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the function fun. Pass an empty matrix for options to use the default values for options.

[x,resnorm] = lsqcurvefit(...) returns the value of the squared 2-norm of the residual at x: sum((fun(x,xdata)-ydata).^2).

[x,resnorm,residual] = lsqcurvefit(...) returns the value of the residual, fun(x,xdata)-ydata, at the solution x.

[x,resnorm,residual,exitflag] = lsqcurvefit(...) returns a value exitflag that describes the exit condition.

[x,resnorm,residual,exitflag,output] = lsqcurvefit(...) returns a structure output that contains information about the optimization.

[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...) returns the Jacobian of fun at the solution x.
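For instance, a sketch of a call that retrieves all of the outputs described above (myfun, x0, xdata, and ydata are placeholders here; a complete model function appears in the Examples section below):

% Hypothetical call retrieving every available output.
options = optimset('Display','final');
[x,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqcurvefit(@myfun,x0,xdata,ydata,[],[],options);
if exitflag > 0
    disp('Converged to a solution.')
end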
Arguments
Input Arguments. Table 4-1, Input Arguments, contains general descriptions of arguments passed in to lsqcurvefit. This section provides function-specific details for fun and options:
fun | The function to be fit. fun is a function that takes a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle:

x = lsqcurvefit(@myfun,x0,xdata,ydata)

where myfun is a MATLAB function such as

function F = myfun(x,xdata)
F = ...     % Compute function values at x

fun can also be an inline object:

f = inline('x(1)*xdata.^2+x(2)*sin(xdata)',...
           'x','xdata');
x = lsqcurvefit(f,x0,xdata,ydata);

If the Jacobian can also be computed and options.Jacobian is 'on', set by

options = optimset('Jacobian','on')

then the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x. Note that by checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm needs only the value of F, not J):

function [F,J] = myfun(x,xdata)
F = ...              % Objective function values at x
if nargout > 1       % Two output arguments
   J = ...           % Jacobian of the function evaluated at x
end

If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, then the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.) A worked version of such a two-output function is sketched after this table.
options | Provides the function-specific details for the options parameters (see the Options section below).
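As a concrete sketch of the two-output form, here is a version of the model used in the Examples section that also returns its analytic Jacobian (this assumes xdata is a column vector, so that J is m-by-3):

function [F,J] = myfun(x,xdata)
% Model values at x (m-by-1 when xdata is a column vector)
F = x(1)*xdata.^2 + x(2)*sin(xdata) + x(3)*xdata.^3;
if nargout > 1                    % Jacobian requested
    % Column j holds the partial derivative of F with respect to x(j)
    J = [xdata.^2, sin(xdata), xdata.^3];
end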
Output Arguments. Table 4-2, Output Arguments, contains general descriptions of arguments returned by lsqcurvefit. This section provides function-specific details for exitflag, lambda, and output:

exitflag | Describes the exit condition:
    > 0    The function converged to a solution x.
      0    The maximum number of function evaluations or iterations was exceeded.
    < 0    The function did not converge to a solution.

lambda | Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:
    lower    Lower bounds lb
    upper    Upper bounds ub

output | Structure containing information about the optimization. The fields of the structure are:
    iterations      Number of iterations taken.
    funcCount       Number of function evaluations.
    algorithm       Algorithm used.
    cgiterations    Number of PCG iterations (large-scale algorithm only).
    stepsize        The final step size taken (medium-scale algorithm only).
    firstorderopt   Measure of first-order optimality (large-scale algorithm only). For large-scale bound constrained problems, the first-order optimality is the infinity norm of v.*g, where v is defined as in Box Constraints and g is the gradient g = J'*F (see Nonlinear Least Squares).
Note: The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See the examples below.
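For example, for a hypothetical exponential model, return the vector of model values and let lsqcurvefit square and sum the residuals internally:

% Correct: return the vector of model values; lsqcurvefit itself
% forms sum((F - ydata).^2).
function F = expmodel(x,xdata)
F = x(1)*exp(x(2)*xdata);

% Incorrect for lsqcurvefit (shown for contrast): returning the
% scalar sum of squares defeats the least-squares solver.
% f = sum((x(1)*exp(x(2)*xdata) - ydata).^2);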
Options
Optimization options parameters used by lsqcurvefit. Some parameters apply to all algorithms, some are only relevant when you are using the large-scale algorithm, and others are only relevant when you are using the medium-scale algorithm. You can use optimset to set or change the values of these fields in the parameters structure, options. See Table 4-3, Optimization Options Parameters, for detailed information.

We start by describing the LargeScale option, since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale or medium-scale algorithm. For the large-scale algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as great as the length of x. Furthermore, only the large-scale algorithm handles bound constraints:
LargeScale | Use large-scale algorithm if possible when set to 'on'. Use medium-scale algorithm when set to 'off'.
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:
Diagnostics | Print diagnostic information about the function to be minimized.
Display | Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.
Jacobian | If 'on', lsqcurvefit uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off', lsqcurvefit approximates the Jacobian using finite differences.
MaxFunEvals | Maximum number of function evaluations allowed.
MaxIter | Maximum number of iterations allowed.
TolFun | Termination tolerance on the function value.
TolX | Termination tolerance on x.
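For example, a hypothetical options structure that shows per-iteration output, tightens the tolerances, and raises the evaluation limit might be built as follows:

options = optimset('Display','iter', ...    % output at each iteration
                   'TolFun',1e-8, ...       % tighter function tolerance
                   'TolX',1e-8, ...         % tighter tolerance on x
                   'MaxFunEvals',500);      % allow more function evaluations
x = lsqcurvefit(@myfun,x0,xdata,ydata,[],[],options);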
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult | Function handle for a Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

W = jmfun(Jinfo,Y,flag,p1,p2,...)

where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun:

[F,Jinfo] = fun(x,p1,p2,...)

The parameters p1,p2,... are the same additional parameters that are passed to lsqcurvefit (and to fun):

lsqcurvefit(fun,...,options,p1,p2,...)

Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute: if flag == 0 then W = J'*(J*Y); if flag > 0 then W = J*Y; if flag < 0 then W = J'*Y. In each case, J is not formed explicitly. lsqcurvefit uses Jinfo to compute the preconditioner. A minimal sketch of such a multiply function appears after this table.
Note: 'Jacobian' must be set to 'on' for Jinfo to be passed from fun to jmfun.
JacobPattern | Sparsity pattern of the Jacobian for finite differencing. If it is not convenient to compute the Jacobian matrix J in fun, lsqcurvefit can approximate J via sparse finite differences, provided the structure of J, i.e., the locations of the nonzeros, is supplied as the value for JacobPattern. In the worst case, if the structure is unknown, you can set JacobPattern to be a dense matrix and a full finite-difference approximation is computed in each iteration (this is the default if JacobPattern is not set). This can be very expensive for large problems, so it is usually worth the effort to determine the sparsity structure.
MaxPCGIter | Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below).
PrecondBandWidth | Upper bandwidth of the preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations.
TolPCG | Termination tolerance on the PCG iteration.
TypicalX | Typical x values.
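To make the JacobMult calling sequence concrete, here is a minimal sketch in which Jinfo is simply J itself. That choice defeats the purpose in practice (a real application would exploit problem structure so that J is never formed) but it shows the required flag handling:

function W = jmfun(Jinfo,Y,flag)
% Illustration only: Jinfo here is the explicit Jacobian matrix.
J = Jinfo;
if flag == 0
    W = J'*(J*Y);       % product J'*(J*Y)
elseif flag > 0
    W = J*Y;            % product J*Y
else
    W = J'*Y;           % product J'*Y
end

The corresponding options would be set with something like optimset('Jacobian','on','JacobMult',@jmfun), with fun returning Jinfo as its second output.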
Medium-Scale Algorithm Only. These parameters are used only by the medium-scale algorithm:
DerivativeCheck | Compare user-supplied derivatives (Jacobian) to finite-differencing derivatives.
DiffMaxChange | Maximum change in variables for finite differencing.
DiffMinChange | Minimum change in variables for finite differencing.
LevenbergMarquardt | Choose the Levenberg-Marquardt algorithm over Gauss-Newton.
LineSearchType | Line search algorithm choice.
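For example, to select the medium-scale Gauss-Newton method with the cubic polynomial line search, and to check a user-supplied Jacobian against finite differences, one might set:

options = optimset('LargeScale','off', ...           % medium-scale algorithm
                   'LevenbergMarquardt','off', ...   % Gauss-Newton rather than L-M
                   'LineSearchType','cubicpoly', ... % cubic polynomial line search
                   'DerivativeCheck','on');          % compare Jacobian to finite differences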
Examples
Vectors of data xdata and ydata are of length n. You want to find coefficients x that give the best fit to the equation

    ydata(i) = x(1)*xdata(i)^2 + x(2)*sin(xdata(i)) + x(3)*xdata(i)^3

that is, you want to minimize

    min_x  (1/2)*sum_i (F(x,xdata_i) - ydata_i)^2

where F(x,xdata) = x(1)*xdata.^2 + x(2)*sin(xdata) + x(3)*xdata.^3, starting at the point x0 = [0.3, 0.4, 0.1].
First, write an M-file to return the value of F (F has n components).
function F = myfun(x,xdata)
F = x(1)*xdata.^2 + x(2)*sin(xdata) + x(3)*xdata.^3;
Next, invoke an optimization routine:
% Assume you determined xdata and ydata experimentally
xdata = [3.6 7.7 9.3 4.1 8.6 2.8 1.3 7.9 10.0 5.4];
ydata = [16.5 150.6 263.1 24.7 208.5 9.9 2.7 163.9 325.0 54.3];
x0 = [10, 10, 10];            % Starting guess
[x,resnorm] = lsqcurvefit(@myfun,x0,xdata,ydata)
Note that at the time lsqcurvefit is called, xdata and ydata are assumed to exist and to be vectors of the same size. They must be the same size because the value F returned by fun must be the same size as ydata.
After 33 function evaluations, this example gives the solution
x =
    0.2269    0.3385    0.3021

% residual, or sum of squares
resnorm =
    6.2950
The residual is not zero because in this case there was some noise (experimental error) in the data.
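A short sketch of checking the fit by evaluating the fitted model at the data points (assuming the variables from the example above are still in the workspace):

yfit = myfun(x,xdata);                 % fitted model values
plot(xdata,ydata,'o',xdata,yfit,'x')   % data vs. fit
max(abs(yfit - ydata))                 % largest pointwise residual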
Algorithm
Large-Scale Optimization. By default, lsqcurvefit chooses the large-scale algorithm. This algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [1], [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust Region Methods for Nonlinear Minimization, and Preconditioned Conjugate Gradients, in the "Large-Scale Algorithms" section.
Medium-Scale Optimization. lsqcurvefit with options.LargeScale set to 'off' uses the Levenberg-Marquardt method with line search [4], [5], [6]. Alternatively, a Gauss-Newton method [3] with line search may be selected. The choice of algorithm is made by setting options.LevenbergMarquardt. Setting options.LevenbergMarquardt to 'off' (and options.LargeScale to 'off') selects the Gauss-Newton method, which is generally faster when the residual sum((fun(x,xdata)-ydata).^2) is small.
The default line search algorithm, i.e., options.LineSearchType set to 'quadcubic', is a safeguarded mixed quadratic and cubic polynomial interpolation and extrapolation method. A safeguarded cubic polynomial method can be selected by setting options.LineSearchType to 'cubicpoly'. This method generally requires fewer function evaluations but more gradient evaluations. Thus, if gradients are being supplied and can be calculated inexpensively, the cubic polynomial line search method is preferable. The algorithms used are described fully in Standard Algorithms.
Diagnostics
Large-Scale Optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), then lsqcurvefit gives the error

Equal upper and lower bounds not permitted.
(lsqcurvefit does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon, fminimax, or fgoalattain for alternative formulations where equality constraints can be included.)
Limitations
The function to be minimized must be continuous. lsqcurvefit may only give local solutions.

lsqcurvefit only handles real variables (the user-defined function must only return real values). When x has complex components, they must be split into real and imaginary parts.
Large-Scale Optimization. The large-scale method for lsqcurvefit does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, the medium-scale algorithm is used instead. See Table 1-4, Large-Scale Problem Coverage and Requirements, for more information on what problem formulations are covered and what information must be provided.

The preconditioner computation used in the preconditioned conjugate gradient part of the large-scale method forms J'*J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J'*J, may lead to a costly solution process for large problems.
If components of x have no upper (or lower) bounds, then lsqcurvefit prefers that the corresponding components of ub (or lb) be set to inf (or -inf for lower bounds), as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
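For example, to bound only the third coefficient of the Examples model (the bound values here are arbitrary illustrations):

lb = [-inf, -inf, 0];    % x(1), x(2) unbounded below; x(3) >= 0
ub = [ inf,  inf, 1];    % x(1), x(2) unbounded above; x(3) <= 1
x = lsqcurvefit(@myfun,x0,xdata,ydata,lb,ub);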
Currently, if the analytic Jacobian is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic Jacobian to the finite-difference Jacobian. Instead, use the medium-scale method to check the derivatives with the options parameter MaxIter set to zero iterations. Then run the problem with the large-scale method.
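A sketch of that two-step workaround, assuming fun returns an analytic Jacobian as in the two-output myfun sketched earlier:

% Step 1: verify the analytic Jacobian with the medium-scale method,
% using zero iterations so no optimization is actually performed.
chk = optimset('LargeScale','off','Jacobian','on', ...
               'DerivativeCheck','on','MaxIter',0);
lsqcurvefit(@myfun,x0,xdata,ydata,[],[],chk);

% Step 2: once the check passes, solve with the large-scale method.
opts = optimset('LargeScale','on','Jacobian','on');
x = lsqcurvefit(@myfun,x0,xdata,ydata,[],[],opts);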
See Also
@ (function_handle), \, lsqlin, lsqnonlin, lsqnonneg, optimset
References
[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.
[3] Dennis, J. E. Jr., "Nonlinear Least Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.
[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least Squares," Quarterly Applied Mathematics, Vol. 2, pp. 164-168, 1944.
[5] Marquardt, D., "An Algorithm for Least Squares Estimation of Nonlinear Parameters," SIAM Journal on Applied Mathematics, Vol. 11, pp. 431-441, 1963.
[6] Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105-116, 1977.