--- /dev/null
+# Created by Octave 3.6.2, Tue Jun 12 20:02:41 2012 UTC <root@t61>
+# name: cache
+# type: cell
+# rows: 3
+# columns: 58
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 16
+LinearRegression
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 717
+ general linear regression
+
+ [p,y_var,r,p_var]=LinearRegression(F,y)
+ [p,y_var,r,p_var]=LinearRegression(F,y,weight)
+
+ determine the parameters p_j (j=1,2,...,m) such that the function
+ f(x) = sum_(i=1,...,m) p_j*f_j(x) fits as good as possible to the
+ given values y_i = f(x_i)
+
+ parameters
+ F n*m matrix with the values of the basis functions at the support points
+ in column j give the values of f_j at the points x_i (i=1,2,...,n)
+ y n column vector of given values
+ weight n column vector of given weights
+
+ return values
+ p m vector with the estimated values of the parameters
+ y_var estimated variance of the error
+ r weighted norm of residual
+ p_var estimated variance of the parameters p_j
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 27
+ general linear regression
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+adsmax
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1909
+ADSMAX Alternating directions method for direct search optimization.
+ [x, fmax, nf] = ADSMAX(FUN, x0, STOPIT, SAVIT, P) attempts to
+ maximize the function FUN, using the starting vector x0.
+ The alternating directions direct search method is used.
+ Output arguments:
+ x = vector yielding largest function value found,
+ fmax = function value at x,
+ nf = number of function evaluations.
+ The iteration is terminated when either
+ - the relative increase in function value between successive
+ iterations is <= STOPIT(1) (default 1e-3),
+ - STOPIT(2) function evaluations have been performed
+ (default inf, i.e., no limit), or
+ - a function value equals or exceeds STOPIT(3)
+ (default inf, i.e., no test on function values).
+ Progress of the iteration is not shown if STOPIT(5) = 0 (default 1).
+ If a non-empty fourth parameter string SAVIT is present, then
+ `SAVE SAVIT x fmax nf' is executed after each inner iteration.
+ By default, the search directions are the co-ordinate directions.
+ The columns of a fifth parameter matrix P specify alternative search
+ directions (P = EYE is the default).
+ NB: x0 can be a matrix. In the output argument, in SAVIT saves,
+ and in function calls, x has the same shape as x0.
+ ADSMAX(fun, x0, STOPIT, SAVIT, P, P1, P2,...) allows additional
+ arguments to be passed to fun, via feval(fun,x,P1,P2,...).
+ Reference:
+ N. J. Higham, Optimization by direct search in matrix computations,
+ SIAM J. Matrix Anal. Appl, 14(2): 317-333, 1993.
+ N. J. Higham, Accuracy and Stability of Numerical Algorithms,
+ Second edition, Society for Industrial and Applied Mathematics,
+ Philadelphia, PA, 2002; sec. 20.5.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 69
+ADSMAX Alternating directions method for direct search optimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 7
+battery
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 474
+ battery.m: repeatedly call bfgs using a battery of
+ start values, to attempt to find global min
+ of a nonconvex function
+
+ INPUTS:
+ func: function to mimimize
+ args: args of function
+ minarg: argument to minimize w.r.t. (usually = 1)
+ startvals: kxp matrix of values to try for sure (don't include all zeros, that's automatic)
+ max iters per start value
+ number of additional random start values to try
+
+ OUTPUT: theta - the best value found - NOT iterated to convergence
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 9
+ battery.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 7
+bfgsmin
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1747
+ bfgsmin: bfgs or limited memory bfgs minimization of function
+
+ Usage: [x, obj_value, convergence, iters] = bfgsmin(f, args, control)
+
+ The function must be of the form
+ [value, return_2,..., return_m] = f(arg_1, arg_2,..., arg_n)
+ By default, minimization is w.r.t. arg_1, but it can be done
+ w.r.t. any argument that is a vector. Numeric derivatives are
+ used unless analytic derivatives are supplied. See bfgsmin_example.m
+ for methods.
+
+ Arguments:
+ * f: name of function to minimize (string)
+ * args: a cell array that holds all arguments of the function
+ The argument with respect to which minimization is done
+ MUST be a vector
+ * control: an optional cell array of 1-8 elements. If a cell
+ array shorter than 8 elements is provided, the trailing elements
+ are provided with default values.
+ * elem 1: maximum iterations (positive integer, or -1 or Inf for unlimited (default))
+ * elem 2: verbosity
+ 0 = no screen output (default)
+ 1 = only final results
+ 2 = summary every iteration
+ 3 = detailed information
+ * elem 3: convergence criterion
+ 1 = strict (function, gradient and param change) (default)
+ 0 = weak - only function convergence required
+ * elem 4: arg in f_args with respect to which minimization is done (default is first)
+ * elem 5: (optional) Memory limit for lbfgs. If it's a positive integer
+ then lbfgs will be use. Otherwise ordinary bfgs is used
+ * elem 6: function change tolerance, default 1e-12
+ * elem 7: parameter change tolerance, default 1e-6
+ * elem 8: gradient tolerance, default 1e-5
+
+ Returns:
+ * x: the minimizer
+ * obj_value: the value of f() at x
+ * convergence: 1 if normal conv, other values if not
+ * iters: number of iterations performed
+
+ Example: see bfgsmin_example.m
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 63
+ bfgsmin: bfgs or limited memory bfgs minimization of function
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+bfgsmin_example
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 16
+ initial values
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 16
+ initial values
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 14
+brent_line_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1186
+ -- Function File: [S,V,N] brent_line_min ( F,DF,ARGS,CTL )
+ Line minimization of f along df
+
+ Finds minimum of f on line x0 + dx*w | a < w < b by bracketing.
+ a and b are passed through argument ctl.
+
+Arguments
+---------
+
+ * F : string : Name of function. Must return a real value
+
+ * ARGS : cell : Arguments passed to f or RxC : f's only
+ argument. x0 must be at ARGS{ CTL(2) }
+
+ * CTL : 5 : (optional) Control variables, described
+ below.
+
+Returned values
+---------------
+
+ * S : 1 : Minimum is at x0 + s*dx
+
+ * V : 1 : Value of f at x0 + s*dx
+
+ * NEV : 1 : Number of function evaluations
+
+Control Variables
+-----------------
+
+ * CTL(1) : Upper bound for error on s
+ Default=sqrt(eps)
+
+ * CTL(2) : Position of minimized argument in args
+ Default= 1
+
+ * CTL(3) : Maximum number of function evaluations
+ Default= inf
+
+ * CTL(4) : a
+ Default=-inf
+
+ * CTL(5) : b
+ Default= inf
+
+ Default values will be used if ctl is not passed or if nan values
+are given.
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 32
+Line minimization of f along df
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+cauchy
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 766
+ -- Function File: cauchy (N, R, X, F )
+ Return the Taylor coefficients and numerical differentiation of a
+ function F for the first N-1 coefficients or derivatives using the
+ fft. N is the number of points to evaluate, R is the radius of
+ convergence, needs to be chosen less then the smallest singularity,
+ X is point to evaluate the Taylor expansion or differentiation.
+ For example,
+
+ If X is a scalar, the function F is evaluated in a row vector of
+ length N. If X is a column vector, F is evaluated in a matrix of
+ length(x)-by-N elements and must return a matrix of the same size.
+
+ d = cauchy(16, 1.5, 0, @(x) exp(x));
+ => d(2) = 1.0000 # first (2-1) derivative of function f (index starts from zero)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+Return the Taylor coefficients and numerical differentiation of a
+function F for
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 5
+cdiff
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1377
+ c = cdiff (func,wrt,N,dfunc,stack,dx) - Code for num. differentiation
+ = "function df = dfunc (var1,..,dvar,..,varN) .. endfunction
+
+ Returns a string of octave code that defines a function 'dfunc' that
+ returns the derivative of 'func' with respect to it's 'wrt'th
+ argument.
+
+ The derivatives are obtained by symmetric finite difference.
+
+ dfunc()'s return value is in the same format as that of ndiff()
+
+ func : string : name of the function to differentiate
+
+ wrt : int : position, in argument list, of the differentiation
+ variable. Default:1
+
+ N : int : total number of arguments taken by 'func'.
+ If N=inf, dfunc will take variable argument list.
+ Default:wrt
+
+ dfunc : string : Name of the octave function that returns the
+ derivatives. Default:['d',func]
+
+ stack : string : Indicates whether 'func' accepts vertically
+ (stack="rstack") or horizontally (stack="cstack")
+ arguments. Any other string indicates that 'func'
+ does not allow stacking. Default:''
+
+ dx : real : Step used in the symmetric difference scheme.
+ Default:10*sqrt(eps)
+
+ See also : ndiff, eval, todisk
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 54
+ c = cdiff (func,wrt,N,dfunc,stack,dx) - Code for num.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+cg_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2479
+ -- Function File: [X0,V,NEV] cg_min ( F,DF,ARGS,CTL )
+ NonLinear Conjugate Gradient method to minimize function F.
+
+Arguments
+---------
+
+ * F : string : Name of function. Return a real value
+
+ * DF : string : Name of f's derivative. Returns a (R*C) x 1
+ vector
+
+ * ARGS: cell : Arguments passed to f.
+ * CTL : 5-vec : (Optional) Control variables, described
+ below
+
+Returned values
+---------------
+
+ * X0 : matrix : Local minimum of f
+
+ * V : real : Value of f in x0
+
+ * NEV : 1 x 2 : Number of evaluations of f and of df
+
+Control Variables
+-----------------
+
+ * CTL(1) : 1 or 2 : Select stopping criterion amongst :
+
+ * CTL(1)==0 : Default value
+
+ * CTL(1)==1 : Stopping criterion : Stop search when value
+ doesn't improve, as tested by ctl(2) > Deltaf/max(|f(x)|,1)
+ where Deltaf is the decrease in f observed in the last
+ iteration (each iteration consists R*C line searches).
+
+ * CTL(1)==2 : Stopping criterion : Stop search when updates
+ are small, as tested by ctl(2) > max { dx(i)/max(|x(i)|,1) |
+ i in 1..N } where dx is the change in the x that occured in
+ the last iteration.
+
+ * CTL(2) : Threshold used in stopping tests.
+ Default=10*eps
+
+ * CTL(2)==0 : Default value
+
+ * CTL(3) : Position of the minimized argument in args
+ Default=1
+
+ * CTL(3)==0 : Default value
+
+ * CTL(4) : Maximum number of function evaluations
+ Default=inf
+
+ * CTL(4)==0 : Default value
+
+ * CTL(5) : Type of optimization:
+
+ * CTL(5)==1 : "Fletcher-Reves" method
+
+ * CTL(5)==2 : "Polak-Ribiere" (Default)
+
+ * CTL(5)==3 : "Hestenes-Stiefel" method
+
+ CTL may have length smaller than 4. Default values will be used if
+ctl is not passed or if nan values are given.
+
+Example:
+--------
+
+ function r=df( l ) b=[1;0;-1]; r = -( 2*l{1} - 2*b +
+rand(size(l{1}))); endfunction
+ function r=ff( l ) b=[1;0;-1]; r = (l{1}-b)' * (l{1}-b);
+endfunction
+ ll = { [10; 2; 3] };
+ ctl(5) = 3;
+ [x0,v,nev]=cg_min( "ff", "df", ll, ctl )
+ Comment: In general, BFGS method seems to be better performin in
+many cases but requires more computation per iteration
+
+ See also: bfgsmin,
+http://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 59
+NonLinear Conjugate Gradient method to minimize function F.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 9
+cpiv_bard
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1197
+ [lb, idx, ridx, mv] = cpiv_bard (v, m[, incl])
+
+ v: column vector; m: matrix; incl (optional): index. length (v)
+ must equal rows (m). Finds column vectors w and l with w == v + m *
+ l, w >= 0, l >= 0, l.' * w == 0. Chooses idx, w, and l so that
+ l(~idx) == 0, l(idx) == -inv (m(idx, idx)) * v(idx), w(idx) roughly
+ == 0, and w(~idx) == v(~idx) + m(idx, ~idx).' * l(idx). idx indexes
+ at least everything indexed by incl, but l(incl) may be < 0. lb:
+ l(idx) (column vector); idx: logical index, defined above; ridx:
+ ~idx & w roughly == 0; mv: [m, v] after performing a Gauss-Jordan
+ 'sweep' (with gjp.m) on each diagonal element indexed by idx.
+ Except the handling of incl (which enables handling of equality
+ constraints in the calling code), this is called solving the
+ 'complementary pivot problem' (Cottle, R. W. and Dantzig, G. B.,
+ 'Complementary pivot theory of mathematical programming', Linear
+ Algebra and Appl. 1, 102--125. References for the current
+ algorithm: Bard, Y.: Nonlinear Parameter Estimation, p. 147--149,
+ Academic Press, New York and London 1974; Bard, Y., 'An eclectic
+ approach to nonlinear programming', Proc. ANU Sem. Optimization,
+ Canberra, Austral. Nat. Univ.).
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 48
+ [lb, idx, ridx, mv] = cpiv_bard (v, m[, incl])
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 13
+curvefit_stat
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 749
+ -- Function File: INFO = residmin_stat (F, P, X, Y, SETTINGS)
+ Frontend for computation of statistics for fitting of values,
+ computed by a model function, to observed values.
+
+ Please refer to the description of `residmin_stat'. The only
+ differences to `residmin_stat' are the additional arguments X
+ (independent values) and Y (observations), that the model function
+ F, if provided, has a second obligatory argument which will be set
+ to X and is supposed to return guesses for the observations (with
+ the same dimensions), and that the possibly user-supplied function
+ for the jacobian of the model function has also a second
+ obligatory argument which will be set to X.
+
+ See also: residmin_stat
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+Frontend for computation of statistics for fitting of values, computed
+by a mode
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+d2_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2782
+ [x,v,nev,h,args] = d2_min(f,d2f,args,ctl,code) - Newton-like minimization
+
+ Minimize f(x) using 1st and 2nd derivatives. Any function w/ second
+ derivatives can be minimized, as in Newton. f(x) decreases at each
+ iteration, as in Levenberg-Marquardt. This function is inspired from the
+ Levenberg-Marquardt algorithm found in the book "Numerical Recipes".
+
+ ARGUMENTS :
+ f : string : Cost function's name
+
+ d2f : string : Name of function returning the cost (1x1), its
+ differential (1xN) and its second differential or it's
+ pseudo-inverse (NxN) (see ctl(5) below) :
+
+ [v,dv,d2v] = d2f (x).
+
+ args : list : f and d2f's arguments. By default, minimize the 1st
+ or matrix : argument.
+
+ ctl : vector : Control arguments (see below)
+ or struct
+
+ code : string : code will be evaluated after each outer loop that
+ produced some (any) improvement. Variables visible from
+ "code" include "x", the best parameter found, "v" the
+ best value and "args", the list of all arguments. All can
+ be modified. This option can be used to re-parameterize
+ the argument space during optimization
+
+ CONTROL VARIABLE ctl : (optional). May be a struct or a vector of length
+ ---------------------- 5 or less where NaNs are ignored. Default values
+ are written <value>.
+ FIELD VECTOR
+ NAME POS
+
+ ftol, f N/A : Stop search when value doesn't improve, as tested by
+
+ f > Deltaf/max(|f(x)|,1)
+
+ where Deltaf is the decrease in f observed in the last
+ iteration. <10*sqrt(eps)>
+
+ utol, u N/A : Stop search when updates are small, as tested by
+
+ u > max { dx(i)/max(|x(i)|,1) | i in 1..N }
+
+ where dx is the change in the x that occured in the last
+ iteration. <NaN>
+
+ dtol, d N/A : Stop search when derivative is small, as tested by
+
+ d > norm (dv) <eps>
+
+ crit, c ctl(1) : Set one stopping criterion, 'ftol' (c=1), 'utol' (c=2)
+ or 'dtol' (c=3) to the value of by the 'tol' option. <1>
+
+ tol, t ctl(2) : Threshold in termination test chosen by 'crit' <10*eps>
+
+ narg, n ctl(3) : Position of the minimized argument in args <1>
+ maxev,m ctl(4) : Maximum number of function evaluations <inf>
+ maxout,m : Maximum number of outer loops <inf>
+ id2f, i ctl(5) : 0 if d2f returns the 2nd derivatives, 1 if <0>
+ it returns its pseudo-inverse.
+
+ verbose, v N/A : Be more or less verbose (quiet=0) <0>
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 75
+ [x,v,nev,h,args] = d2_min(f,d2f,args,ctl,code) - Newton-like minimization
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4
+dcdp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 268
+ function prt = dcdp (f, p, dp, func[, bounds])
+
+ This is an interface to __dfdp__.m, similar to dfdp.m, but for
+ functions only of parameters 'p', not of independents 'x'. See
+ dfdp.m.
+
+ dfpdp is more general and is meant to be used instead of dcdp in
+ optimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 48
+ function prt = dcdp (f, p, dp, func[, bounds])
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+de_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4432
+ de_min: global optimisation using differential evolution
+
+ Usage: [x, obj_value, nfeval, convergence] = de_min(fcn, control)
+
+ minimization of a user-supplied function with respect to x(1:D),
+ using the differential evolution (DE) method based on an algorithm
+ by Rainer Storn (http://www.icsi.berkeley.edu/~storn/code.html)
+ See: http://www.softcomputing.net/tevc2009_1.pdf
+
+
+ Arguments:
+ ---------------
+ fcn string : Name of function. Must return a real value
+ control vector : (Optional) Control variables, described below
+ or struct
+
+ Returned values:
+ ----------------
+ x vector : parameter vector of best solution
+ obj_value scalar : objective function value of best solution
+ nfeval scalar : number of function evaluations
+ convergence : 1 = best below value to reach (VTR)
+ 0 = population has reached defined quality (tol)
+ -1 = some values are close to constraints/boundaries
+ -2 = max number of iterations reached (maxiter)
+ -3 = max number of functions evaluations reached (maxnfe)
+
+ Control variable: (optional) may be named arguments (i.e. "name",value
+ ---------------- pairs), a struct, or a vector, where
+ NaN's are ignored.
+
+ XVmin : vector of lower bounds of initial population
+ *** note: by default these are no constraints ***
+ XVmax : vector of upper bounds of initial population
+ constr : 1 -> enforce the bounds not just for the initial population
+ const : data vector (remains fixed during the minimization)
+ NP : number of population members
+ F : difference factor from interval [0, 2]
+ CR : crossover probability constant from interval [0, 1]
+ strategy : 1 --> DE/best/1/exp 7 --> DE/best/1/bin
+ 2 --> DE/rand/1/exp 8 --> DE/rand/1/bin
+ 3 --> DE/target-to-best/1/exp 9 --> DE/target-to-best/1/bin
+ 4 --> DE/best/2/exp 10--> DE/best/2/bin
+ 5 --> DE/rand/2/exp 11--> DE/rand/2/bin
+ 6 --> DEGL/SAW/exp else DEGL/SAW/bin
+ refresh : intermediate output will be produced after "refresh"
+ iterations. No intermediate output will be produced
+ if refresh is < 1
+ VTR : Stopping criterion: "Value To Reach"
+ de_min will stop when obj_value <= VTR.
+ Use this if you know which value you expect.
+ tol : Stopping criterion: "tolerance"
+ stops if (best-worst)/max(1,worst) < tol
+ This stops basically if the whole population is "good".
+ maxnfe : maximum number of function evaluations
+ maxiter : maximum number of iterations (generations)
+
+ The algorithm seems to work well only if [XVmin,XVmax] covers the
+ region where the global minimum is expected.
+ DE is also somewhat sensitive to the choice of the
+ difference factor F. A good initial guess is to choose F from
+ interval [0.5, 1], e.g. 0.8.
+ CR, the crossover probability constant from interval [0, 1]
+ helps to maintain the diversity of the population and is
+ rather uncritical but affects strongly the convergence speed.
+ If the parameters are correlated, high values of CR work better.
+ The reverse is true for no correlation.
+ Experiments suggest that /bin likes to have a slightly
+ larger CR than /exp.
+ The number of population members NP is also not very critical. A
+ good initial guess is 10*D. Depending on the difficulty of the
+ problem NP can be lower than 10*D or must be higher than 10*D
+ to achieve convergence.
+
+ Default Values:
+ ---------------
+ XVmin = [-2];
+ XVmax = [ 2];
+ constr= 0;
+ const = [];
+ NP = 10 *D
+ F = 0.8;
+ CR = 0.9;
+ strategy = 12;
+ refresh = 0;
+ VTR = -Inf;
+ tol = 1.e-3;
+ maxnfe = 1e6;
+ maxiter = 1000;
+
+
+ Example to find the minimum of the Rosenbrock saddle:
+ ----------------------------------------------------
+ Define f as:
+ function result = f(x);
+ result = 100 * (x(2) - x(1)^2)^2 + (1 - x(1))^2;
+ end
+ Then type:
+
+ ctl.XVmin = [-2 -2];
+ ctl.XVmax = [ 2 2];
+ [x, obj_value, nfeval, convergence] = de_min (@f, ctl);
+
+ Keywords: global-optimisation optimisation minimisation
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 58
+ de_min: global optimisation using differential evolution
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 5
+deriv
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 820
+ -- Function File: DX = deriv (F, X0)
+ -- Function File: DX = deriv (F, X0, H)
+ -- Function File: DX = deriv (F, X0, H, O)
+ -- Function File: DX = deriv (F, X0, H, O, N)
+ Calculate derivate of function F.
+
+ F must be a function handle or the name of a function that takes X0
+ and returns a variable of equal length and orientation. X0 must be
+ a numeric vector or scalar.
+
+ H defines the step taken for the derivative calculation. Defaults
+ to 1e-7.
+
+ O defines the order of the calculation. Supported values are 2
+ (h^2 order) or 4 (h^4 order). Defaults to 2.
+
+ N defines the derivative order. Defaults to the 1st derivative of
+ the function. Can be up to the 4th derivative.
+
+ Reference: Numerical Methods for Mathematics, Science, and
+ Engineering by John H. Mathews.
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 33
+Calculate derivate of function F.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4
+dfdp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1050
+ function prt = dfdp (x, f, p, dp, func[, bounds])
+ numerical partial derivatives (Jacobian) df/dp for use with leasqr
+ --------INPUT VARIABLES---------
+ x=vec or matrix of indep var(used as arg to func) x=[x0 x1 ....]
+ f=func(x,p) vector initialsed by user before each call to dfdp
+ p= vec of current parameter values
+ dp= fractional increment of p for numerical derivatives
+ dp(j)>0 central differences calculated
+ dp(j)<0 one sided differences calculated
+ dp(j)=0 sets corresponding partials to zero; i.e. holds p(j) fixed
+ func=function (string or handle) to calculate the Jacobian for,
+ e.g. to calc Jacobian for function expsum prt=dfdp(x,f,p,dp,'expsum')
+ bounds=two-column-matrix of lower and upper bounds for parameters
+ If no 'bounds' options is specified to leasqr, it will call
+ dfdp without the 'bounds' argument.
+----------OUTPUT VARIABLES-------
+ prt= Jacobian Matrix prt(i,j)=df(i)/dp(j)
+================================
+
+ dfxpdp is more general and is meant to be used instead of dfdp in
+ optimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+ function prt = dfdp (x, f, p, dp, func[, bounds])
+ numerical partial derivative
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 5
+dfpdp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1099
+ function jac = dfpdp (p, func[, hook])
+
+ Returns Jacobian of func (p) with respect to p with finite
+ differencing. The optional argument hook is a structure which can
+ contain the following fields at the moment:
+
+ hook.f: value of func(p) for p as given in the arguments
+
+ hook.diffp: positive vector of fractional steps from given p in
+ finite differencing (actual steps may be smaller if bounds are
+ given). The default is .001 * ones (size (p)).
+
+ hook.diff_onesided: logical vector, indexing elements of p for
+ which only one-sided differences should be computed (faster); even
+ if not one-sided, differences might not be exactly central if
+ bounds are given. The default is false (size (p)).
+
+ hook.fixed: logical vector, indexing elements of p for which zero
+ should be returned instead of the guessed partial derivatives
+ (useful in optimization if some parameters are not optimized, but
+ are 'fixed').
+
+ hook.lbound, hook.ubound: vectors of lower and upper parameter
+ bounds (or -Inf or +Inf, respectively) to be respected in finite
+ differencing. The consistency of bounds is not checked.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 40
+ function jac = dfpdp (p, func[, hook])
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+dfxpdp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1115
+ function jac = dfxpdp (x, p, func[, hook])
+
+ Returns Jacobian of func (p, x) with respect to p with finite
+ differencing. The optional argument hook is a structure which can
+ contain the following fields at the moment:
+
+ hook.f: value of func(p, x) for p and x as given in the arguments
+
+ hook.diffp: positive vector of fractional steps from given p in
+ finite differencing (actual steps may be smaller if bounds are
+ given). The default is .001 * ones (size (p));
+
+ hook.diff_onesided: logical vector, indexing elements of p for
+ which only one-sided differences should be computed (faster); even
+ if not one-sided, differences might not be exactly central if
+ bounds are given. The default is false (size (p)).
+
+ hook.fixed: logical vector, indexing elements of p for which zero
+ should be returned instead of the guessed partial derivatives
+ (useful in optimization if some parameters are not optimized, but
+ are 'fixed').
+
+ hook.lbound, hook.ubound: vectors of lower and upper parameter
+ bounds (or -Inf or +Inf, respectively) to be respected in finite
+ differencing. The consistency of bounds is not checked.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 44
+ function jac = dfxpdp (x, p, func[, hook])
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+expfit
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1621
+ USAGE [alpha,c,rms] = expfit( deg, x1, h, y )
+
+ Prony's method for non-linear exponential fitting
+
+ Fit function: \sum_1^{deg} c(i)*exp(alpha(i)*x)
+
+ Elements of data vector y must correspond to
+ equidistant x-values starting at x1 with stepsize h
+
+ The method is fully compatible with complex linear
+ coefficients c, complex nonlinear coefficients alpha
+ and complex input arguments y, x1, non-zero h .
+ Fit-order deg must be a real positive integer.
+
+ Returns linear coefficients c, nonlinear coefficients
+ alpha and root mean square error rms. This method is
+ known to be more stable than 'brute-force' non-linear
+ least squares fitting.
+
+ Example
+ x0 = 0; step = 0.05; xend = 5; x = x0:step:xend;
+ y = 2*exp(1.3*x)-0.5*exp(2*x);
+ error = (rand(1,length(y))-0.5)*1e-4;
+ [alpha,c,rms] = expfit(2,x0,step,y+error)
+
+ alpha =
+ 2.0000
+ 1.3000
+ c =
+ -0.50000
+ 2.00000
+ rms = 0.00028461
+
+ The fit is very sensitive to the number of data points.
+ It doesn't perform very well for small data sets.
+ Theoretically, you need at least 2*deg data points, but
+ if there are errors on the data, you certainly need more.
+
+ Be aware that this is a very (very,very) ill-posed problem.
+ By the way, this algorithm relies heavily on computing the
+ roots of a polynomial. I used 'roots.m', if there is
+ something better please use that code.
+
+ Demo for a complex fit-function:
+ deg= 2; N= 20; x1= -(1+i), x= linspace(x1,1+i/2,N).';
+ h = x(2) - x(1)
+ y= (2+i)*exp( (-1-2i)*x ) + (-1+3i)*exp( (2+3i)*x );
+ A= 5e-2; y+= A*(randn(N,1)+randn(N,1)*i); % add complex noise
+ [alpha,c,rms]= expfit( deg, x1, h, y )
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 48
+ USAGE [alpha,c,rms] = expfit( deg, x1, h, y )
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4
+fmin
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 19
+ alias for fminbnd
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 19
+ alias for fminbnd
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 5
+fmins
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1412
+ -- Function File: [X] = fmins(F,X0,OPTIONS,GRAD,P1,P2, ...)
+ Find the minimum of a funtion of several variables. By default
+ the method used is the Nelder&Mead Simplex algorithm
+
+ Example usage: fmins(inline('(x(1)-5).^2+(x(2)-8).^4'),[0;0])
+
+ *Inputs*
+ F
+ A string containing the name of the function to minimize
+
+ X0
+ A vector of initial parameters fo the function F.
+
+ OPTIONS
+ Vector with control parameters (not all parameters are used) options(1) - Show progress (if 1, default is 0, no progress)
+ options(2) - Relative size of simplex (default 1e-3)
+ options(6) - Optimization algorithm
+ if options(6)==0 - Nelder & Mead simplex (default)
+ if options(6)==1 - Multidirectional search Method
+ if options(6)==2 - Alternating Directions search
+ options(5)
+ if options(6)==0 && options(5)==0 - regular simplex
+ if options(6)==0 && options(5)==1 - right-angled simplex
+ Comment: the default is set to "right-angled simplex".
+ this works better for me on a broad range of problems,
+ although the default in nmsmax is "regular simplex"
+ options(10) - Maximum number of function evaluations
+
+ GRAD
+ Unused (For compatibility with Matlab)
+
+ P1,P2, ...
+ Optional parameters for function F
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 51
+Find the minimum of a funtion of several variables.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+fminsearch
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 300
+ -- Function File: X = fminsearch (FUN, X0)
+ -- Function File: [X, FVAL] = fminsearch (FUN, X0, OPTIONS, GRAD, P1,
+ P2, ...)
+ Find the minimum of a funtion of several variables. By default
+ the method used is the Nelder&Mead Simplex algorithm.
+
+ See also: fmin, fmins, nmsmax
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 51
+Find the minimum of a funtion of several variables.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 14
+fminunc_compat
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1594
+ [x,v,flag,out,df,d2f] = fminunc_compat (f,x,opt,...) - M*tlab-like optimization
+
+ Imitation of m*tlab's fminunc(). The optional 'opt' argument is a struct,
+ e.g. produced by 'optimset()'. 'fminunc_compat' has been deprecated in
+ favor of 'fminunc', which is now part of core Octave. This function
+ will possibly be removed from future versions of the 'optim' package.
+
+ Supported options
+ -----------------
+ Diagnostics, [off|on] : Be verbose
+ Display , [off|iter|notify|final]
+ : Be verbose unless value is "off"
+ GradObj , [off|on] : Function's 2nd return value is derivatives
+ Hessian , [off|on] : Function's 2nd and 3rd return value are
+ derivatives and Hessian.
+ TolFun , scalar : Termination criterion (see 'ftol' in minimize())
+ TolX , scalar : Termination criterion (see 'utol' in minimize())
+ MaxFunEvals, int : Max. number of function evaluations
+ MaxIter , int : Max. number of algorithm iterations
+
+ These non-m*tlab are provided to facilitate porting code to octave:
+ -----------------------
+ "MinEquiv" , [off|on] : Don't minimize 'fun', but instead return the
+ option passed to minimize().
+
+ "Backend" , [off|on] : Don't minimize 'fun', but instead return
+ [backend, opt], the name of the backend
+ optimization function that is used and the
+ optional arguments that will be passed to it. See
+ the 'backend' option of minimize().
+
+ This function is a front-end to minimize().
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 50
+ [x,v,flag,out,df,d2f] = fminunc_compat (f,x,opt,.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 3
+gjp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 829
+ m = gjp (m, k[, l])
+
+ m: matrix; k, l: row- and column-index of pivot, l defaults to k.
+
+ Gauss-Jordon pivot as defined in Bard, Y.: Nonlinear Parameter
+ Estimation, p. 296, Academic Press, New York and London 1974. In
+ the pivot column, this seems not quite the same as the usual
+ Gauss-Jordan(-Clasen) pivot. Bard gives Beaton, A. E., 'The use of
+ special matrix operators in statistical calculus' Research Bulletin
+ RB-64-51 (1964), Educational Testing Service, Princeton, New Jersey
+ as a reference, but this article is not easily accessible. Another
+ reference, whose definition of gjp differs from Bards by some
+ signs, is Clarke, R. B., 'Algorithm AS 178: The Gauss-Jordan sweep
+ operator with detection of collinearity', Journal of the Royal
+ Statistical Society, Series C (Applied Statistics) (1982), 31(2),
+ 166--168.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 21
+ m = gjp (m, k[, l])
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+jacobs
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1030
+ -- Function File: Df = jacobs (X, F)
+ -- Function File: Df = jacobs (X, F, HOOK)
+ Calculate the jacobian of a function using the complex step method.
+
+ Let F be a user-supplied function. Given a point X at which we
+ seek for the Jacobian, the function `jacobs' returns the Jacobian
+ matrix `d(f(1), ..., df(end))/d(x(1), ..., x(n))'. The function
+ uses the complex step method and thus can be applied to real
+ analytic functions.
+
+ The optional argument HOOK is a structure with additional options.
+ HOOK can have the following fields:
+ * `h' - can be used to define the magnitude of the complex step
+ and defaults to 1e-20; steps larger than 1e-3 are not allowed.
+
+ * `fixed' - is a logical vector internally usable by some
+ optimization functions; it indicates for which elements of X
+ no gradient should be computed, but zero should be returned.
+
+ For example:
+
+ f = @(x) [x(1)^2 + x(2); x(2)*exp(x(1))];
+ Df = jacobs ([1, 2], f)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 67
+Calculate the jacobian of a function using the complex step method.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+leasqr
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 7546
+function [f,p,cvg,iter,corp,covp,covr,stdresid,Z,r2]=
+ leasqr(x,y,pin,F,{stol,niter,wt,dp,dFdp,options})
+
+ Levenberg-Marquardt nonlinear regression of f(x,p) to y(x).
+
+ Version 3.beta
+ Optional parameters are in braces {}.
+ x = vector or matrix of independent variables.
+ y = vector or matrix of observed values.
+ wt = statistical weights (same dimensions as y). These should be
+ set to be proportional to (sqrt of var(y))^-1; (That is, the
+ covariance matrix of the data is assumed to be proportional to
+ diagonal with diagonal equal to (wt.^2)^-1. The constant of
+ proportionality will be estimated.); default = ones( size (y)).
+ pin = vec of initial parameters to be adjusted by leasqr.
+ dp = fractional increment of p for numerical partial derivatives;
+ default = .001*ones(size(pin))
+ dp(j) > 0 means central differences on j-th parameter p(j).
+ dp(j) < 0 means one-sided differences on j-th parameter p(j).
+ dp(j) = 0 holds p(j) fixed i.e. leasqr wont change initial guess: pin(j)
+ F = name of function in quotes or function handle; the function
+ shall be of the form y=f(x,p), with y, x, p of the form y, x, pin
+ as described above.
+ dFdp = name of partial derivative function in quotes or function
+ handle; default is 'dfdp', a slow but general partial derivatives
+ function; the function shall be of the form
+ prt=dfdp(x,f,p,dp,F[,bounds]). For backwards compatibility, the
+ function will only be called with an extra 'bounds' argument if the
+ 'bounds' option is explicitely specified to leasqr (see dfdp.m).
+ stol = scalar tolerance on fractional improvement in scalar sum of
+ squares = sum((wt.*(y-f))^2); default stol = .0001;
+ niter = scalar maximum number of iterations; default = 20;
+ options = structure, currently recognized fields are 'fract_prec',
+ 'max_fract_change', 'inequc', 'bounds', and 'equc'. For backwards
+ compatibility, 'options' can also be a matrix whose first and
+ second column contains the values of 'fract_prec' and
+ 'max_fract_change', respectively.
+ Field 'options.fract_prec': column vector (same length as 'pin')
+ of desired fractional precisions in parameter estimates.
+ Iterations are terminated if change in parameter vector (chg)
+ relative to current parameter estimate is less than their
+ corresponding elements in 'options.fract_prec' [ie. all (abs
+ (chg) < abs (options.fract_prec .* current_parm_est))] on two
+ consecutive iterations, default = zeros().
+ Field 'options.max_fract_change': column vector (same length as
+ 'pin) of maximum fractional step changes in parameter vector.
+ Fractional change in elements of parameter vector is constrained to
+ be at most 'options.max_fract_change' between sucessive iterations.
+ [ie. abs(chg(i))=abs(min([chg(i)
+ options.max_fract_change(i)*current param estimate])).], default =
+ Inf*ones().
+ Field 'options.inequc': cell-array containing up to four entries,
+ two entries for linear inequality constraints and/or one or two
+ entries for general inequality constraints. Initial parameters
+ must satisfy these constraints. Either linear or general
+ constraints may be the first entries, but the two entries for
+ linear constraints must be adjacent and, if two entries are given
+ for general constraints, they also must be adjacent. The two
+ entries for linear constraints are a matrix (say m) and a vector
+ (say v), specifying linear inequality constraints of the form
+ `m.' * parameters + v >= 0'. If the constraints are just bounds,
+ it is suggested to specify them in 'options.bounds' instead,
+ since then some sanity tests are performed, and since the
+ function 'dfdp.m' is guarantied not to violate constraints during
+ determination of the numeric gradient only for those constraints
+ specified as 'bounds' (possibly with violations due to a certain
+ inaccuracy, however, except if no constraints except bounds are
+ specified). The first entry for general constraints must be a
+ differentiable vector valued function (say h), specifying general
+ inequality constraints of the form `h (p[, idx]) >= 0'; p is the
+ column vector of optimized paraters and the optional argument idx
+ is a logical index. h has to return the values of all constraints
+ if idx is not given, and has to return only the indexed
+ constraints if idx is given (so computation of the other
+ constraints can be spared). If a second entry for general
+ constraints is given, it must be a function (say dh) which
+ returnes a matrix whos rows contain the gradients of the
+ constraint function h with respect to the optimized parameters.
+ It has the form jac_h = dh (vh, p, dp, h, idx[, bounds]); p is
+ the column vector of optimized parameters, and idx is a logical
+ index --- only the rows indexed by idx must be returned (so
+ computation of the others can be spared). The other arguments of
+ dh are for the case that dh computes numerical gradients: vh is
+ the column vector of the current values of the constraint
+ function h, with idx already applied. h is a function h (p) to
+ compute the values of the constraints for parameters p, it will
+ return only the values indexed by idx. dp is a suggestion for
+ relative step width, having the same value as the argument 'dp'
+ of leasqr above. If bounds were specified to leasqr, they are
+ provided in the argument bounds of dh, to enable their
+ consideration in determination of numerical gradients. If dh is
+ not specified to leasqr, numerical gradients are computed in the
+ same way as with 'dfdp.m' (see above). If some constraints are
+ linear, they should be specified as linear constraints (or
+ bounds, if applicable) for reasons of performance, even if
+ general constraints are also specified.
+ Field 'options.bounds': two-column-matrix, one row for each
+ parameter in 'pin'. Each row contains a minimal and maximal value
+ for each parameter. Default: [-Inf, Inf] in each row. If this
+ field is used with an existing user-side function for 'dFdp'
+ (see above) the functions interface might have to be changed.
+ Field 'options.equc': equality constraints, specified the same
+ way as inequality constraints (see field 'options.inequc').
+ Initial parameters must satisfy these constraints.
+ Note that there is possibly a certain inaccuracy in honoring
+ constraints, except if only bounds are specified.
+ _Warning_: If constraints (or bounds) are set, returned guesses
+ of corp, covp, and Z are generally invalid, even if no constraints
+ are active for the final parameters. If equality constraints are
+ specified, corp, covp, and Z are not guessed at all.
+ Field 'options.cpiv': Function for complementary pivot algorithm
+ for inequality constraints, default: cpiv_bard. No different
+ function is supplied.
+
+ OUTPUT VARIABLES
+ f = column vector of values computed: f = F(x,p).
+ p = column vector trial or final parameters. i.e, the solution.
+ cvg = scalar: = 1 if convergence, = 0 otherwise.
+ iter = scalar number of iterations used.
+ corp = correlation matrix for parameters.
+ covp = covariance matrix of the parameters.
+ covr = diag(covariance matrix of the residuals).
+ stdresid = standardized residuals.
+ Z = matrix that defines confidence region (see comments in the source).
+ r2 = coefficient of multiple determination, intercept form.
+
+ Not suitable for non-real residuals.
+
+ References:
+ Bard, Nonlinear Parameter Estimation, Academic Press, 1974.
+ Draper and Smith, Applied Regression Analysis, John Wiley and Sons, 1981.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+function [f,p,cvg,iter,corp,covp,covr,stdresid,Z,r2]=
+ leasqr(
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 8
+line_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 806
+ [a,fx,nev] = line_min (f, dx, args, narg, h, nev_max) - Minimize f() along dx
+
+ INPUT ----------
+ f : string : Name of minimized function
+ dx : matrix : Direction along which f() is minimized
+ args : cell : Arguments of f
+ narg : integer : Position of minimized variable in args. Default=1
+ h : scalar : Step size to use for centered finite difference
+ approximation of first and second derivatives. Default=1E-3.
+ nev_max : integer : Maximum number of function evaluations. Default=30
+
+ OUTPUT ---------
+ a : scalar : Value for which f(x+a*dx) is a minimum (*)
+ fx : scalar : Value of f(x+a*dx) at minimum (*)
+ nev : integer : Number of function evaluations
+
+ (*) The notation f(x+a*dx) assumes that args == {x}.
+
+ Reference: David G Luenberger's Linear and Nonlinear Programming
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 79
+ [a,fx,nev] = line_min (f, dx, args, narg, h, nev_max) - Minimize f() along dx
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 7
+linprog
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 591
+ -- Function File: X = linprog (F, A, B)
+ -- Function File: X = linprog (F, A, B, AEQ, BEQ)
+ -- Function File: X = linprog (F, A, B, AEQ, BEQ, LB, UB)
+ -- Function File: [X, FVAL] = linprog (...)
+ Solve a linear problem.
+
+ Finds
+
+ min (f' * x)
+
+ (both f and x are column vectors) subject to
+
+ A * x <= b
+ Aeq * x = beq
+ lb <= x <= ub
+
+ If not specified, AEQ and BEQ default to empty matrices.
+
+ If not specified, the lower bound LB defaults to minus infinite
+ and the upper bound UB defaults to infinite.
+
+ See also: glpk
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 23
+Solve a linear problem.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+mdsmax
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2358
+MDSMAX Multidirectional search method for direct search optimization.
+ [x, fmax, nf] = MDSMAX(FUN, x0, STOPIT, SAVIT) attempts to
+ maximize the function FUN, using the starting vector x0.
+ The method of multidirectional search is used.
+ Output arguments:
+ x = vector yielding largest function value found,
+ fmax = function value at x,
+ nf = number of function evaluations.
+ The iteration is terminated when either
+ - the relative size of the simplex is <= STOPIT(1)
+ (default 1e-3),
+ - STOPIT(2) function evaluations have been performed
+ (default inf, i.e., no limit), or
+ - a function value equals or exceeds STOPIT(3)
+ (default inf, i.e., no test on function values).
+ The form of the initial simplex is determined by STOPIT(4):
+ STOPIT(4) = 0: regular simplex (sides of equal length, the default),
+ STOPIT(4) = 1: right-angled simplex.
+ Progress of the iteration is not shown if STOPIT(5) = 0 (default 1).
+ If a non-empty fourth parameter string SAVIT is present, then
+ `SAVE SAVIT x fmax nf' is executed after each inner iteration.
+ NB: x0 can be a matrix. In the output argument, in SAVIT saves,
+ and in function calls, x has the same shape as x0.
+ MDSMAX(fun, x0, STOPIT, SAVIT, P1, P2,...) allows additional
+ arguments to be passed to fun, via feval(fun,x,P1,P2,...).
+
+ This implementation uses 2n^2 elements of storage (two simplices), where x0
+ is an n-vector. It is based on the algorithm statement in [2, sec.3],
+ modified so as to halve the storage (with a slight loss in readability).
+
+ References:
+ [1] V. J. Torczon, Multi-directional search: A direct search algorithm for
+ parallel machines, Ph.D. Thesis, Rice University, Houston, Texas, 1989.
+ [2] V. J. Torczon, On the convergence of the multidirectional search
+ algorithm, SIAM J. Optimization, 1 (1991), pp. 123-145.
+ [3] N. J. Higham, Optimization by direct search in matrix computations,
+ SIAM J. Matrix Anal. Appl, 14(2): 317-333, 1993.
+ [4] N. J. Higham, Accuracy and Stability of Numerical Algorithms,
+ Second edition, Society for Industrial and Applied Mathematics,
+ Philadelphia, PA, 2002; sec. 20.5.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 70
+MDSMAX Multidirectional search method for direct search optimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 8
+minimize
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4103
+ [x,v,nev,...] = minimize (f,args,...) - Minimize f
+
+ ARGUMENTS
+ f : string : Name of function. Must return a real value
+ args : list or : List of arguments to f (by default, minimize the first)
+ matrix : f's only argument
+
+ RETURNED VALUES
+ x : matrix : Local minimum of f. Let's suppose x is M-by-N.
+ v : real : Value of f in x0
+ nev : integer : Number of function evaluations
+ or 1 x 2 : Number of function and derivative evaluations (if
+ derivatives are used)
+
+
+ Extra arguments are either a succession of option-value pairs or a single
+ list or struct of option-value pairs (for unary options, the value in the
+ struct is ignored).
+
+ OPTIONS : DERIVATIVES Derivatives may be used if one of these options
+ --------------------- uesd. Otherwise, the Nelder-Mean (see
+ nelder_mead_min) method is used.
+
+ 'd2f', d2f : Name of a function that returns the value of f, of its
+ 1st and 2nd derivatives : [fx,dfx,d2fx] = feval (d2f, x)
+ where fx is a real number, dfx is 1x(M*N) and d2fx is
+ (M*N)x(M*N). A Newton-like method (d2_min) will be used.
+
+ 'hess' : Use [fx,dfx,d2fx] = leval (f, args) to compute 1st and
+ 2nd derivatives, and use a Newton-like method (d2_min).
+
+ 'd2i', d2i : Name of a function that returns the value of f, of its
+ 1st and pseudo-inverse of second derivatives :
+ [fx,dfx,id2fx] = feval (d2i, x) where fx is a real
+ number, dfx is 1x(M*N) and d2ix is (M*N)x(M*N).
+ A Newton-like method will be used (see d2_min).
+
+ 'ihess' : Use [fx,dfx,id2fx] = leval (f, args) to compute 1st
+ derivative and the pseudo-inverse of 2nd derivatives,
+ and use a Newton-like method (d2_min).
+
+ NOTE : df, d2f or d2i take the same arguments as f.
+
+ 'order', n : Use derivatives of order n. If the n'th order derivative
+ is not specified by 'df', 'd2f' or 'd2i', it will be
+ computed numerically. Currently, only order 1 works.
+
+ 'ndiff' : Use a variable metric method (bfgs) using numerical
+ differentiation.
+
+ OPTIONS : STOPPING CRITERIA Default is to use 'tol'
+ ---------------------------
+ 'ftol', ftol : Stop search when value doesn't improve, as tested by
+
+ ftol > Deltaf/max(|f(x)|,1)
+
+ where Deltaf is the decrease in f observed in the last
+ iteration. Default=10*eps
+
+ 'utol', utol : Stop search when updates are small, as tested by
+
+ tol > max { dx(i)/max(|x(i)|,1) | i in 1..N }
+
+ where dx is the change in the x that occured in the last
+ iteration.
+
+ 'dtol',dtol : Stop search when derivatives are small, as tested by
+
+ dtol > max { df(i)*max(|x(i)|,1)/max(v,1) | i in 1..N }
+
+ where x is the current minimum, v is func(x) and df is
+ the derivative of f in x. This option is ignored if
+ derivatives are not used in optimization.
+
+ MISC. OPTIONS
+ -------------
+ 'maxev', m : Maximum number of function evaluations <inf>
+
+ 'narg' , narg : Position of the minimized argument in args <1>
+ 'isz' , step : Initial step size (only for 0 and 1st order method) <1>
+ Should correspond to expected distance to minimum
+ 'verbose' : Display messages during execution
+
+ 'backend' : Instead of performing the minimization itself, return
+ [backend, control], the name and control argument of the
+ backend used by minimize(). Minimimzation can then be
+ obtained without the overhead of minimize by calling, if
+ a 0 or 1st order method is used :
+
+ [x,v,nev] = feval (backend, args, control)
+
+ or, if a 2nd order method is used :
+
+ [x,v,nev] = feval (backend, control.d2f, args, control)
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 11
+ [x,v,nev,.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+nelder_mead_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2747
+ [x0,v,nev] = nelder_mead_min (f,args,ctl) - Nelder-Mead minimization
+
+ Minimize 'f' using the Nelder-Mead algorithm. This function is inspired
+ from the that found in the book "Numerical Recipes".
+
+ ARGUMENTS
+ ---------
+ f : string : Name of function. Must return a real value
+ args : list : Arguments passed to f.
+ or matrix : f's only argument
+ ctl : vector : (Optional) Control variables, described below
+ or struct
+
+ RETURNED VALUES
+ ---------------
+ x0 : matrix : Local minimum of f
+ v : real : Value of f in x0
+ nev : number : Number of function evaluations
+
+ CONTROL VARIABLE : (optional) may be named arguments (i.e. "name",value
+ ------------------ pairs), a struct, or a vector of length <= 6, where
+ NaN's are ignored. Default values are written <value>.
+ OPT. VECTOR
+ NAME POS
+ ftol,f N/A : Stopping criterion : stop search when values at simplex
+ vertices are all alike, as tested by
+
+ f > (max_i (f_i) - min_i (f_i)) /max(max(|f_i|),1)
+
+ where f_i are the values of f at the vertices. <10*eps>
+
+ rtol,r N/A : Stop search when biggest radius of simplex, using
+ infinity-norm, is small, as tested by :
+
+ ctl(2) > Radius <10*eps>
+
+ vtol,v N/A : Stop search when volume of simplex is small, tested by
+
+ ctl(2) > Vol
+
+ crit,c ctl(1) : Set one stopping criterion, 'ftol' (c=1), 'rtol' (c=2)
+ or 'vtol' (c=3) to the value of the 'tol' option. <1>
+
+ tol, t ctl(2) : Threshold in termination test chosen by 'crit' <10*eps>
+
+ narg ctl(3) : Position of the minimized argument in args <1>
+ maxev ctl(4) : Maximum number of function evaluations. This number <inf>
+ may be slightly exceeded.
+ isz ctl(5) : Size of initial simplex, which is : <1>
+
+ { x + e_i | i in 0..N }
+
+ Where x == args{narg} is the initial value
+ e_0 == zeros (size (x)),
+ e_i(j) == 0 if j != i and e_i(i) == ctl(5)
+ e_i has same size as x
+
+ Set ctl(5) to the distance you expect between the starting
+ point and the minimum.
+
+ rst ctl(6) : When a minimum is found the algorithm restarts next to
+ it until the minimum does not improve anymore. ctl(6) is
+ the maximum number of restarts. Set ctl(6) to zero if
+ you know the function is well-behaved or if you don't
+ mind not getting a true minimum. <0>
+
+ verbose, v Be more or less verbose (quiet=0) <0>
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 70
+ [x0,v,nev] = nelder_mead_min (f,args,ctl) - Nelder-Mead minimization
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+nmsmax
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1967
+NMSMAX Nelder-Mead simplex method for direct search optimization.
+ [x, fmax, nf] = NMSMAX(FUN, x0, STOPIT, SAVIT) attempts to
+ maximize the function FUN, using the starting vector x0.
+ The Nelder-Mead direct search method is used.
+ Output arguments:
+ x = vector yielding largest function value found,
+ fmax = function value at x,
+ nf = number of function evaluations.
+ The iteration is terminated when either
+ - the relative size of the simplex is <= STOPIT(1)
+ (default 1e-3),
+ - STOPIT(2) function evaluations have been performed
+ (default inf, i.e., no limit), or
+ - a function value equals or exceeds STOPIT(3)
+ (default inf, i.e., no test on function values).
+ The form of the initial simplex is determined by STOPIT(4):
+ STOPIT(4) = 0: regular simplex (sides of equal length, the default)
+ STOPIT(4) = 1: right-angled simplex.
+ Progress of the iteration is not shown if STOPIT(5) = 0 (default 1).
+ STOPIT(6) indicates the direction (ie. minimization or
+ maximization.) Default is 1, maximization.
+ set STOPIT(6)=-1 for minimization
+ If a non-empty fourth parameter string SAVIT is present, then
+ `SAVE SAVIT x fmax nf' is executed after each inner iteration.
+ NB: x0 can be a matrix. In the output argument, in SAVIT saves,
+ and in function calls, x has the same shape as x0.
+ NMSMAX(fun, x0, STOPIT, SAVIT, P1, P2,...) allows additional
+ arguments to be passed to fun, via feval(fun,x,P1,P2,...).
+ References:
+ N. J. Higham, Optimization by direct search in matrix computations,
+ SIAM J. Matrix Anal. Appl, 14(2): 317-333, 1993.
+ C. T. Kelley, Iterative Methods for Optimization, Society for Industrial
+ and Applied Mathematics, Philadelphia, PA, 1999.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 66
+NMSMAX Nelder-Mead simplex method for direct search optimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+nonlin_curvefit
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1010
+ -- Function File: [P, FY, CVG, OUTP] = nonlin_curvefit (F, PIN, X, Y)
+ -- Function File: [P, FY, CVG, OUTP] = nonlin_curvefit (F, PIN, X, Y,
+ SETTINGS)
+ Frontend for nonlinear fitting of values, computed by a model
+ function, to observed values.
+
+ Please refer to the description of `nonlin_residmin'. The only
+ differences to `nonlin_residmin' are the additional arguments X
+ (independent values, mostly, but not necessarily, an array of the
+ same dimensions or the same number of rows as Y) and Y (array of
+ observations), the returned value FY (final guess for observed
+ values) instead of RESID, that the model function has a second
+ obligatory argument which will be set to X and is supposed to
+ return guesses for the observations (with the same dimensions),
+ and that the possibly user-supplied function for the jacobian of
+ the model function has also a second obligatory argument which
+ will be set to X.
+
+ See also: nonlin_residmin
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+Frontend for nonlinear fitting of values, computed by a model function,
+to obser
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+nonlin_min
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15376
+ -- Function File: [P, OBJF, CVG, OUTP] = nonlin_min (F, PIN)
+ -- Function File: [P, OBJF, CVG, OUTP] = nonlin_min (F, PIN, SETTINGS)
+ Frontend for constrained nonlinear minimization of a scalar
+ objective function. The functions supplied by the user have a
+ minimal interface; any additionally needed constants can be
+ supplied by wrapping the user functions into anonymous functions.
+
+ The following description applies to usage with vector-based
+ parameter handling. Differences in usage for structure-based
+ parameter handling will be explained in a separate section below.
+
+ F: objective function. It gets a column vector of real parameters
+ as argument. In gradient determination, this function may be
+ called with an informational second argument, whose content
+ depends on the function for gradient determination.
+
+ PIN: real column vector of initial parameters.
+
+ SETTINGS: structure whose fields stand for optional settings
+ referred to below. The fields can be set by `optimset()' with
+ Octave versions 3.3.55 or greater; with older Octave versions, the
+ fields must be set directly as structure-fields in the correct
+ case.
+
+ The returned values are the column vector of final parameters P,
+ the final value of the objective function OBJF, an integer CVG
+ indicating if and how optimization succeeded or failed, and a
+ structure OUTP with additional information, curently with only one
+ field: NITER, the number of iterations. CVG is greater than zero
+ for success and less than or equal to zero for failure; its
+ possible values depend on the used backend and currently can be
+ `0' (maximum number of iterations exceeded), `1' (fixed number of
+ iterations completed, e.g. in stochastic optimizers), `2'
+ (parameter change less than specified precision in two consecutive
+ iterations), `3' (improvement in objective function less than
+ specified), or `-4' (algorithm got stuck).
+
+ SETTINGS:
+
+ `Algorithm': String specifying the backend. Currently available
+ are `"lm_feasible"' (default) and `"siman"'. They are described in
+ separate sections below.
+
+ `objf_grad': Function computing the gradient of the objective
+ function with respect to the parameters, assuming residuals are
+ reshaped to a vector. Default: finite differences. Will be called
+ with the column vector of parameters and an informational structure
+ as arguments. The structure has the fields `f': value of objective
+ function for current parameters, `fixed': logical vector
+ indicating which parameters are not optimized, so these partial
+ derivatives need not be computed and can be set to zero, `diffp',
+ `diff_onesided', `lbound', `ubound': identical to the user
+ settings of this name, `plabels': 1-dimensional cell-array of
+ column-cell-arrays, each column with labels for all parameters,
+ the first column contains the numerical indices of the parameters.
+ The default gradient function will call the objective function
+ with the second argument set with fields `f': as the `f' passed to
+ the gradient function, `plabels': cell-array of 1x1 cell-arrays
+ with the entries of the column-cell-arrays of `plabels' as passed
+ to the jacobian function corresponding to current parameter,
+ `side': `0' for one-sided interval, `1' or `2', respectively, for
+ the sides of a two-sided interval, and `parallel': logical scalar
+ indicating parallel computation of partial derivatives.
+
+ `objf_hessian': Function computing the Hessian of the objective
+ function with respect to the parameters. The default is backend
+ specific. Will be called with the column vector of parameters as
+ argument.
+
+ `diffp': column vector of fractional intervals (doubled for
+ central intervals) supposed to be used by gradient functions
+ performing finite differencing. Default: `.001 * ones (size
+ (parameters))'. The default gradient function will use these as
+ absolute intervals for parameters with value zero.
+
+ `diff_onesided': logical column vector indicating that one-sided
+ intervals should be used by gradient functions performing finite
+ differencing. Default: `false (size (parameters))'.
+
+ `complex_step_derivative_objf', `complex_step_derivative_inequc',
+ `complex_step_derivative_equc': logical scalars, default: false.
+ Estimate gradient of objective function, general inequality
+ constraints, and general equality constraints, respectively, with
+ complex step derivative approximation. Use only if you know that
+ your objective function, function of general inequality
+ constraints, or function of general equality constraints,
+ respectively, is suitable for this. No user function for the
+ respective gradient must be specified.
+
+ `cstep': scalar step size for complex step derivative
+ approximation. Default: 1e-20.
+
+ `fixed': logical column vector indicating which parameters should
+ not be optimized, but kept to their inital value. Fixing is done
+ independently of the backend, but the backend may choose to fix
+ additional parameters under certain conditions.
+
+ `lbound', `ubound': column vectors of lower and upper bounds for
+ parameters. Default: `-Inf' and `+Inf', respectively. The bounds
+ are non-strict, i.e. parameters are allowed to be exactly equal to
+ a bound. The default gradient function will respect bounds (but no
+ further inequality constraints) in finite differencing.
+
+ `inequc': Further inequality constraints. Cell-array containing up
+ to four entries, two entries for linear inequality constraints
+ and/or one or two entries for general inequality constraints.
+ Either linear or general constraints may be the first entries, but
+ the two entries for linear constraints must be adjacent and, if
+ two entries are given for general constraints, they also must be
+ adjacent. The two entries for linear constraints are a matrix (say
+ `m') and a vector (say `v'), specifying linear inequality
+ constraints of the form `m.' * parameters + v >= 0'. The first
+ entry for general constraints must be a differentiable vector
+ valued function (say `h'), specifying general inequality
+ constraints of the form `h (p[, idx]) >= 0'; `p' is the column
+ vector of optimized paraters and the optional argument `idx' is a
+ logical index. `h' has to return the values of all constraints if
+ `idx' is not given. It may choose to return only the indexed
+ constraints if `idx' is given (so computation of the other
+ constraints can be spared); in this case, the additional setting
+ `inequc_f_idx' has to be set to `true'. In gradient determination,
+ this function may be called with an informational third argument,
+ whose content depends on the function for gradient determination.
+ If a second entry for general inequality constraints is given, it
+ must be a function computing the jacobian of the constraints with
+ respect to the parameters. For this function, the description of
+ `dfdp' above applies, with 2 exceptions: 1) it is called with 3
+ arguments since it has an additional argument `idx' -- a logical
+ index -- at second position, indicating which rows of the jacobian
+ must be returned (if the function chooses to return only indexed
+ rows, the additional setting `inequc_df_idx' has to be set to
+ `true'). 2) the default jacobian function calls `h' with 3
+ arguments, since the argument `idx' is also supplied. Note that
+ specifying linear constraints as general constraints will generally
+ waste performance, even if further, non-linear, general constraints
+ are also specified.
+
+ `equc': Equality constraints. Specified the same way as inequality
+ constraints (see `inequc'). The respective additional settings are
+ named `equc_f_idx' and `equc_df_idx'.
+
+ `cpiv': Function for complementary pivoting, usable in algorithms
+ for constraints. Default: cpiv_bard. Only the default function is
+ supplied with the package.
+
+ `TolFun': Minimum fractional improvement in objective function in
+ an iteration (abortion criterium). Default: .0001.
+
+ `MaxIter': Maximum number of iterations (abortion criterium).
+ Default: backend-specific.
+
+ `fract_prec': Column Vector, minimum fractional change of
+ parameters in an iteration (abortion criterium if violated in two
+ consecutive iterations). Default: backend-specific.
+
+ `max_fract_change': Column Vector, enforced maximum fractional
+ change in parameters in an iteration. Default: backend-specific.
+
+ `Display': String indicating the degree of verbosity. Default:
+ `"off"'. Possible values are currently `"off"' (no messages) and
+ `"iter"' (some messages after each iteration). Support of this
+ setting and its exact interpretation are backend-specific.
+
+ `debug': Logical scalar, default: `false'. Will be passed to the
+ backend, which might print debugging information if true.
+
+ Structure-based parameter handling
+
+ The setting `param_order' is a cell-array with names of the
+ optimized parameters. If not given, and initial parameters are a
+ structure, all parameters in the structure are optimized. If
+ initial parameters are a structure, it is an error if
+ `param_order' is not given and there are any non-structure-based
+ configuration items or functions.
+
+ The initial parameters PIN can be given as a structure containing
+ at least all fields named in `param_order'. In this case the
+ returned parameters P will also be a structure.
+
+ Each user-supplied function can be called with the argument
+ containing the current parameters being a structure instead of a
+ column vector. For this, a corresponding setting must be set to
+ `true': `objf_pstruct' (objective function), `objf_grad_pstruct'
+ (gradient of objective function), `objf_hessian_pstruct' (hessian
+ of objective function), `f_inequc_pstruct' (general inequality
+ constraints), `df_inequc_pstruct' (jacobian of general inequality
+ constraints), `f_equc_pstruct' (general equality constraints), and
+ `df_equc_pstruct' (jacobian of general equality constraints). If a
+ gradient (jacobian) function is configured in such a way, it must
+ return the entries (columns) of the gradient (jacobian) as fields
+ of a structure under the respective parameter names. If the
+ hessian function is configured in such a way, it must return a
+ structure (say `h') with fields e.g. as `h.a.b = value' for
+ `value' being the 2nd partial derivative with respect to `a' and
+ `b'. There is no need to also specify the field `h.b.a' in this
+ example.
+
+ Similarly, for specifying linear constraints, instead of the matrix
+ (called `m' above), a structure containing the rows of the matrix
+ in fields under the respective parameter names can be given. In
+ this case, rows containing only zeros need not be given.
+
+ The vector-based settings `lbound', `ubound', `fixed', `diffp',
+ `diff_onesided', `fract_prec', and `max_fract_change' can be
+ replaced by the setting `param_config'. It is a structure that can
+ contain fields named in `param_order'. For each such field, there
+ may be subfields with the same names as the above vector-based
+ settings, but containing a scalar value for the respective
+ parameter. If `param_config' is specified, none of the above
+ vector/matrix-based settings may be used.
+
+ Additionally, named parameters are allowed to be non-scalar real
+ arrays. In this case, their dimensions are given by the setting
+ `param_dims', a cell-array of dimension vectors, each containing
+ at least two dimensions; if not given, dimensions are taken from
+ the initial parameters, if these are given in a structure. Any
+ vector-based settings or not structure-based linear constraints
+ then must correspond to an order of parameters with all parameters
+ reshaped to vectors and concatenated in the user-given order of
+ parameter names. Structure-based settings or structure-based
+ initial parameters must contain arrays with dimensions reshapable
+ to those of the respective parameters.
+
+ Description of backends
+
+ "lm_feasible"
+
+ A Levenberg/Marquardt-like optimizer, attempting to honour
+ constraints throughout the course of optimization. This means that
+ the initial parameters must not violate constraints (to find an
+ initial feasible set of parameters, e.g. Octaves `sqp' can be
+ used, by specifying an objective function which is constant or
+ which returns the quadratic distance to the initial values). If the
+ constraints need only be honoured in the result of the
+ optimization, Octaves `sqp' may be preferable. The Hessian is
+ either supplied by the user or is approximated by the BFGS
+ algorithm.
+
+ Returned value CVG will be `2' or `3' for success and `0' or `-4'
+ for failure (see above for meaning).
+
+ Backend-specific defaults are: `MaxIter': 20, `fract_prec': `zeros
+ (size (parameters))', `max_fract_change': `Inf' for all parameters.
+
+ Interpretation of `Display': if set to `"iter"', currently only
+ information on applying `max_fract_change' is printed.
+
+ "siman"
+
+ A simulated annealing (stochastic) optimizer, changing all
+ parameters at once in a single step, so being suitable for
+ non-bound constraints.
+
+ No gradient or hessian of the objective function is used. The
+ settings `MaxIter', `fract_prec', `TolFun', and `max_fract_change'
+ are not honoured.
+
+ Accepts the additional settings `T_init' (initial temperature,
+ default 0.01), `T_min' (final temperature, default 1.0e-5), `mu_T'
+ (factor of temperature decrease, default 1.005), `iters_fixed_T'
+ (iterations within one temperature step, default 10),
+ `max_rand_step' (column vector or structure-based configuration of
+ maximum random steps for each parameter, default 0.005 * PIN),
+ `stoch_regain_constr' (if `true', regain constraints after a
+ random step, otherwise take new random value until constraints are
+ met, default false), `trace_steps' (set field `trace' of OUTP with
+ a matrix with a row for each step, first column iteration number,
+ second column repeat number within iteration, third column value
+ of objective function, rest columns parameter values, default
+ false), and `siman_log' (set field `log' of OUTP with a matrix
+ with a row for each iteration, first column temperature, second
+ column value of objective function, rest columns numbers of tries
+ with decrease, no decrease but accepted, and no decrease and
+ rejected.
+
+ Steps with increase `diff' of objective function are accepted if
+ `rand (1) < exp (- diff / T)', where `T' is the temperature of the
+ current iteration.
+
+ If regaining of constraints failed, optimization will be aborted
+ and returned value of CVG will be `0'. Otherwise, CVG will be `1'.
+
+ Interpretation of `Display': if set to `"iter"', an informational
+ line is printed after each iteration.
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 79
+Frontend for constrained nonlinear minimization of a scalar objective
+function.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+nonlin_residmin
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 12987
+ -- Function File: [P, RESID, CVG, OUTP] = nonlin_residmin (F, PIN)
+ -- Function File: [P, RESID, CVG, OUTP] = nonlin_residmin (F, PIN,
+ SETTINGS)
+ Frontend for nonlinear minimization of residuals returned by a
+ model function.
+
+ The functions supplied by the user have a minimal interface; any
+ additionally needed constants (e.g. observed values) can be
+ supplied by wrapping the user functions into anonymous functions.
+
+ The following description applies to usage with vector-based
+ parameter handling. Differences in usage for structure-based
+ parameter handling will be explained in a separate section below.
+
+ F: function returning the array of residuals. It gets a column
+ vector of real parameters as argument. In gradient determination,
+ this function may be called with an informational second argument,
+ whose content depends on the function for gradient determination.
+
+ PIN: real column vector of initial parameters.
+
+ SETTINGS: structure whose fields stand for optional settings
+ referred to below. The fields can be set by `optimset()' with
+ Octave versions 3.3.55 or greater; with older Octave versions, the
+ fields must be set directly as structure-fields in the correct
+ case.
+
+ The returned values are the column vector of final parameters P,
+ the final array of residuals RESID, an integer CVG indicating if
+ and how optimization succeeded or failed, and a structure OUTP
+ with additional information, curently with only one field: NITER,
+ the number of iterations. CVG is greater than zero for success and
+ less than or equal to zero for failure; its possible values depend
+ on the used backend and currently can be `0' (maximum number of
+ iterations exceeded), `2' (parameter change less than specified
+ precision in two consecutive iterations), or `3' (improvement in
+ objective function -- e.g. sum of squares -- less than specified).
+
+ SETTINGS:
+
+ `Algorithm': String specifying the backend. Default:
+ `"lm_svd_feasible"'. The latter is currently the only backend
+ distributed with this package. It is described in a separate
+ section below.
+
+ `dfdp': Function computing the jacobian of the residuals with
+ respect to the parameters, assuming residuals are reshaped to a
+ vector. Default: finite differences. Will be called with the column
+ vector of parameters and an informational structure as arguments.
+ The structure has the fields `f': value of residuals for current
+ parameters, reshaped to a column vector, `fixed': logical vector
+ indicating which parameters are not optimized, so these partial
+ derivatives need not be computed and can be set to zero, `diffp',
+ `diff_onesided', `lbound', `ubound': identical to the user
+ settings of this name, `plabels': 1-dimensional cell-array of
+ column-cell-arrays, each column with labels for all parameters,
+ the first column contains the numerical indices of the parameters.
+ The default jacobian function will call the model function with
+ the second argument set with fields `f': as the `f' passed to the
+ jacobian function, `plabels': cell-array of 1x1 cell-arrays with
+ the entries of the column-cell-arrays of `plabels' as passed to
+ the jacobian function corresponding to current parameter, `side':
+ `0' for one-sided interval, `1' or `2', respectively, for the
+ sides of a two-sided interval, and `parallel': logical scalar
+ indicating parallel computation of partial derivatives.
+
+ `diffp': column vector of fractional intervals (doubled for
+ central intervals) supposed to be used by jacobian functions
+ performing finite differencing. Default: `.001 * ones (size
+ (parameters))'. The default jacobian function will use these as
+ absolute intervals for parameters with value zero.
+
+ `diff_onesided': logical column vector indicating that one-sided
+ intervals should be used by jacobian functions performing finite
+ differencing. Default: `false (size (parameters))'.
+
+ `complex_step_derivative_f', `complex_step_derivative_inequc',
+ `complex_step_derivative_equc': logical scalars, default: false.
+ Estimate Jacobian of model function, general inequality
+ constraints, and general equality constraints, respectively, with
+ complex step derivative approximation. Use only if you know that
+ your model function, function of general inequality constraints,
+ or function of general equality constraints, respectively, is
+ suitable for this. No user function for the respective Jacobian
+ must be specified.
+
+ `cstep': scalar step size for complex step derivative
+ approximation. Default: 1e-20.
+
+ `fixed': logical column vector indicating which parameters should
+ not be optimized, but kept to their inital value. Fixing is done
+ independently of the backend, but the backend may choose to fix
+ additional parameters under certain conditions.
+
+ `lbound', `ubound': column vectors of lower and upper bounds for
+ parameters. Default: `-Inf' and `+Inf', respectively. The bounds
+ are non-strict, i.e. parameters are allowed to be exactly equal to
+ a bound. The default jacobian function will respect bounds (but no
+ further inequality constraints) in finite differencing.
+
+ `inequc': Further inequality constraints. Cell-array containing up
+ to four entries, two entries for linear inequality constraints
+ and/or one or two entries for general inequality constraints.
+ Either linear or general constraints may be the first entries, but
+ the two entries for linear constraints must be adjacent and, if
+ two entries are given for general constraints, they also must be
+ adjacent. The two entries for linear constraints are a matrix (say
+ `m') and a vector (say `v'), specifying linear inequality
+ constraints of the form `m.' * parameters + v >= 0'. The first
+ entry for general constraints must be a differentiable vector
+ valued function (say `h'), specifying general inequality
+ constraints of the form `h (p[, idx]) >= 0'; `p' is the column
+ vector of optimized paraters and the optional argument `idx' is a
+ logical index. `h' has to return the values of all constraints if
+ `idx' is not given. It may choose to return only the indexed
+ constraints if `idx' is given (so computation of the other
+ constraints can be spared); in this case, the additional setting
+ `inequc_f_idx' has to be set to `true'. In gradient determination,
+ this function may be called with an informational third argument,
+ whose content depends on the function for gradient determination.
+ If a second entry for general inequality constraints is given, it
+ must be a function computing the jacobian of the constraints with
+ respect to the parameters. For this function, the description of
+ `dfdp' above applies, with 2 exceptions: 1) it is called with 3
+ arguments since it has an additional argument `idx' -- a logical
+ index -- at second position, indicating which rows of the jacobian
+ must be returned (if the function chooses to return only indexed
+ rows, the additional setting `inequc_df_idx' has to be set to
+ `true'). 2) the default jacobian function calls `h' with 3
+ arguments, since the argument `idx' is also supplied. Note that
+ specifying linear constraints as general constraints will generally
+ waste performance, even if further, non-linear, general constraints
+ are also specified.
+
+ `equc': Equality constraints. Specified the same way as inequality
+ constraints (see `inequc').
+
+ `cpiv': Function for complementary pivoting, usable in algorithms
+ for constraints. Default: cpiv_bard. Only the default function is
+ supplied with the package.
+
+ `weights': Array of weights for the residuals. Dimensions must
+ match.
+
+ `TolFun': Minimum fractional improvement in objective function
+ (e.g. sum of squares) in an iteration (abortion criterium).
+ Default: .0001.
+
+ `MaxIter': Maximum number of iterations (abortion criterium).
+ Default: backend-specific.
+
+ `fract_prec': Column Vector, minimum fractional change of
+ parameters in an iteration (abortion criterium if violated in two
+ consecutive iterations). Default: backend-specific.
+
+ `max_fract_change': Column Vector, enforced maximum fractional
+ change in parameters in an iteration. Default: backend-specific.
+
+ `Display': String indicating the degree of verbosity. Default:
+ `"off"'. Possible values are currently `"off"' (no messages) and
+ `"iter"' (some messages after each iteration). Support of this
+ setting and its exact interpretation are backend-specific.
+
+ `plot_cmd': Function enabling backend to plot results or
+ intermediate results. Will be called with current computed
+ residuals. Default: plot nothing.
+
+ `debug': Logical scalar, default: `false'. Will be passed to the
+ backend, which might print debugging information if true.
+
+ Structure-based parameter handling
+
+ The setting `param_order' is a cell-array with names of the
+ optimized parameters. If not given, and initial parameters are a
+ structure, all parameters in the structure are optimized. If
+ initial parameters are a structure, it is an error if
+ `param_order' is not given and there are any non-structure-based
+ configuration items or functions.
+
+ The initial parameters PIN can be given as a structure containing
+ at least all fields named in `param_order'. In this case the
+ returned parameters P will also be a structure.
+
+ Each user-supplied function can be called with the argument
+ containing the current parameters being a structure instead of a
+ column vector. For this, a corresponding setting must be set to
+ `true': `f_pstruct' (model function), `dfdp_pstruct' (jacobian of
+ model function), `f_inequc_pstruct' (general inequality
+ constraints), `df_inequc_pstruct' (jacobian of general inequality
+ constraints), `f_equc_pstruct' (general equality constraints), and
+ `df_equc_pstruct' (jacobian of general equality constraints). If a
+ jacobian-function is configured in such a way, it must return the
+ columns of the jacobian as fields of a structure under the
+ respective parameter names.
+
+ Similarly, for specifying linear constraints, instead of the matrix
+ (called `m' above), a structure containing the rows of the matrix
+ in fields under the respective parameter names can be given. In
+ this case, rows containing only zeros need not be given.
+
+ The vector-based settings `lbound', `ubound', `fixed', `diffp',
+ `diff_onesided', `fract_prec', and `max_fract_change' can be
+ replaced by the setting `param_config'. It is a structure that can
+ contain fields named in `param_order'. For each such field, there
+ may be subfields with the same names as the above vector-based
+ settings, but containing a scalar value for the respective
+ parameter. If `param_config' is specified, none of the above
+ vector/matrix-based settings may be used.
+
+ Additionally, named parameters are allowed to be non-scalar real
+ arrays. In this case, their dimensions are given by the setting
+ `param_dims', a cell-array of dimension vectors, each containing
+ at least two dimensions; if not given, dimensions are taken from
+ the initial parameters, if these are given in a structure. Any
+ vector-based settings or not structure-based linear constraints
+ then must correspond to an order of parameters with all parameters
+ reshaped to vectors and concatenated in the user-given order of
+ parameter names. Structure-based settings or structure-based
+ initial parameters must contain arrays with dimensions reshapable
+ to those of the respective parameters.
+
+ Description of backends (currently only one)
+
+ "lm_svd_feasible"
+
+ A Levenberg/Marquardt algorithm using singular value decomposition
+ and featuring constraints which must be met by the initial
+ parameters and are attempted to be kept met throughout the
+ optimization.
+
+ Parameters with identical lower and upper bounds will be fixed.
+
+ Returned value CVG will be `0', `2', or `3'.
+
+ Backend-specific defaults are: `MaxIter': 20, `fract_prec': `zeros
+ (size (parameters))', `max_fract_change': `Inf' for all parameters.
+
+ Interpretation of `Display': if set to `"iter"', currently
+ `plot_cmd' is evaluated for each iteration, and some further
+ diagnostics may be printed.
+
+ Specific option: `lm_svd_feasible_alt_s': if falling back to
+ nearly gradient descent, do it more like original
+ Levenberg/Marquardt method, with descent in each gradient
+ component; for testing only.
+
+ See also: nonlin_curvefit
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 78
+Frontend for nonlinear minimization of residuals returned by a model
+function.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 3
+nrm
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 153
+ -- Function File: XMIN = nrm(F,X0)
+ Using X0 as a starting point find a minimum of the scalar function
+ F. The Newton-Raphson method is used.
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 69
+Using X0 as a starting point find a minimum of the scalar function F.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 14
+optim_problems
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 64
+ Problems for testing optimizers. Documentation is in the code.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 33
+ Problems for testing optimizers.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+optimset_compat
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1208
+ opt = optimset_compat (...) - manipulate m*tlab-style options structure
+
+ This function returns a m*tlab-style options structure that can be used
+ with the fminunc() function. 'optimset_compat' has been deprecated in
+ favor of 'optimset', which is now part of core Octave. This function
+ will possibly be removed from future versions of the 'optim' package.
+
+ INPUT : Input consist in one or more structs followed by option-value
+ pairs. The option that can be passed are those of m*tlab's 'optimset'.
+ Whether fminunc() accepts them is another question (see fminunc()).
+
+ Two extra options are supported which indicate how to use directly octave
+ optimization tools (such as minimize() and other backends):
+
+ "MinEquiv", [on|off] : Tell 'fminunc()' not to minimize 'fun', but
+ instead return the option passed to minimize().
+
+ "Backend", [on|off] : Tell 'fminunc()' not to minimize 'fun', but
+ instead return the [backend, opt], the name of the
+ backend optimization function that is used and the
+ optional arguments that will be passed to it. See
+ the 'backend' option of minimize().
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 25
+ opt = optimset_compat (.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 9
+poly_2_ex
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 353
+ ex = poly_2_ex (l, f) - Extremum of a 1-var deg-2 polynomial
+
+ l : 3 : Values of variable at which polynomial is known.
+ f : 3 : f(i) = Value of the degree-2 polynomial at l(i).
+
+ ex : 1 : Value for which f reaches its extremum
+
+ Assuming that f(i) = a*l(i)^2 + b*l(i) + c = P(l(i)) for some a, b, c,
+ ex is the extremum of the polynome P.
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 69
+ ex = poly_2_ex (l, f) - Extremum of a 1-var deg-2 polynomial
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 8
+polyconf
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1439
+ [y,dy] = polyconf(p,x,s)
+
+ Produce prediction intervals for the fitted y. The vector p
+ and structure s are returned from polyfit or wpolyfit. The
+ x values are where you want to compute the prediction interval.
+
+ polyconf(...,['ci'|'pi'])
+
+ Produce a confidence interval (range of likely values for the
+ mean at x) or a prediction interval (range of likely values
+ seen when measuring at x). The prediction interval tells
+ you the width of the distribution at x. This should be the same
+ regardless of the number of measurements you have for the value
+ at x. The confidence interval tells you how well you know the
+ mean at x. It should get smaller as you increase the number of
+ measurements. Error bars in the physical sciences usually show
+ a 1-alpha confidence value of erfc(1/sqrt(2)), representing
+ one standandard deviation of uncertainty in the mean.
+
+ polyconf(...,1-alpha)
+
+ Control the width of the interval. If asking for the prediction
+ interval 'pi', the default is .05 for the 95% prediction interval.
+ If asking for the confidence interval 'ci', the default is
+ erfc(1/sqrt(2)) for a one standard deviation confidence interval.
+
+ Example:
+ [p,s] = polyfit(x,y,1);
+ xf = linspace(x(1),x(end),150);
+ [yf,dyf] = polyconf(p,xf,s,'ci');
+ plot(xf,yf,'g-;fit;',xf,yf+dyf,'g.;;',xf,yf-dyf,'g.;;',x,y,'xr;data;');
+ plot(x,y-polyval(p,x),';residuals;',xf,dyf,'g-;;',xf,-dyf,'g-;;');
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 26
+ [y,dy] = polyconf(p,x,s)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+polyfitinf
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 4746
+ function [A,REF,HMAX,H,R,EQUAL] = polyfitinf(M,N,K,X,Y,EPSH,MAXIT,REF0)
+
+ Best polynomial approximation in discrete uniform norm
+
+ INPUT VARIABLES:
+
+ M : degree of the fitting polynomial
+ N : number of data points
+ X(N) : x-coordinates of data points
+ Y(N) : y-coordinates of data points
+ K : character of the polynomial:
+ K = 0 : mixed parity polynomial
+ K = 1 : odd polynomial ( X(1) must be > 0 )
+ K = 2 : even polynomial ( X(1) must be >= 0 )
+ EPSH : tolerance for leveling. A useful value for 24-bit
+ mantissa is EPSH = 2.0E-7
+ MAXIT : upper limit for number of exchange steps
+ REF0(M2): initial alternating set ( N-vector ). This is an
+ OPTIONAL argument. The length M2 is given by:
+ M2 = M + 2 , if K = 0
+ M2 = integer part of (M+3)/2 , if K = 1
+ M2 = 2 + M/2 (M must be even) , if K = 2
+
+ OUTPUT VARIABLES:
+
+ A : polynomial coefficients of the best approximation
+ in order of increasing powers:
+ p*(x) = A(1) + A(2)*x + A(3)*x^2 + ...
+ REF : selected alternating set of points
+ HMAX : maximum deviation ( uniform norm of p* - f )
+ H : pointwise approximation errors
+ R : total number of iterations
+ EQUAL : success of failure of algorithm
+ EQUAL=1 : succesful
+ EQUAL=0 : convergence not acheived
+ EQUAL=-1: input error
+ EQUAL=-2: algorithm failure
+
+ Relies on function EXCH, provided below.
+
+ Example:
+ M = 5; N = 10000; K = 0; EPSH = 10^-12; MAXIT = 10;
+ X = linspace(-1,1,N); % uniformly spaced nodes on [-1,1]
+ k=1; Y = abs(X).^k; % the function Y to approximate
+ [A,REF,HMAX,H,R,EQUAL] = polyfitinf(M,N,K,X,Y,EPSH,MAXIT);
+ p = polyval(A,X); plot(X,Y,X,p) % p is the best approximation
+
+ Note: using an even value of M, e.g., M=2, in the example above, makes
+ the algorithm to fail with EQUAL=-2, because of collocation, which
+ appears because both the appriximating function and the polynomial are
+ even functions. The way aroung it is to approximate only the right half
+ of the function, setting K = 2 : even polynomial. For example:
+
+ N = 10000; K = 2; EPSH = 10^-12; MAXIT = 10; X = linspace(0,1,N);
+ for i = 1:2
+ k = 2*i-1; Y = abs(X).^k;
+ for j = 1:4
+ M = 2^j;
+ [~,~,HMAX] = polyfitinf(M,N,K,X,Y,EPSH,MAXIT);
+ approxerror(i,j) = HMAX;
+ end
+ end
+ disp('Table 3.1 from Approximation theory and methods, M.J.D.POWELL, p. 27');
+ disp(' ');
+ disp(' n K=1 K=3');
+ disp(' '); format short g;
+ disp([(2.^(1:4))' approxerror']);
+
+ ALGORITHM:
+
+ Computation of the polynomial that best approximates the data (X,Y)
+ in the discrete uniform norm, i.e. the polynomial with the minimum
+ value of max{ | p(x_i) - y_i | , x_i in X } . That polynomial, also
+ known as minimax polynomial, is obtained by the exchange algorithm,
+ a finite iterative process requiring, at most,
+ n
+ ( ) iterations ( usually p = M + 2. See also function EXCH ).
+ p
+ since this number can be very large , the routine may not converge
+ within MAXIT iterations . The other possibility of failure occurs
+ when there is insufficient floating point precision for the input
+ data chosen.
+
+ CREDITS: This routine was developed and modified as
+ computer assignments in Approximation Theory courses by
+ Prof. Andrew Knyazev, University of Colorado Denver, USA.
+
+ Team Fall 98 (Revision 1.0):
+ Chanchai Aniwathananon
+ Crhistopher Mehl
+ David A. Duran
+ Saulo P. Oliveira
+
+ Team Spring 11 (Revision 1.1): Manuchehr Aminian
+
+ The algorithm and the comments are based on a FORTRAN code written
+ by Joseph C. Simpson. The code is available on Netlib repository:
+ http://www.netlib.org/toms/501
+ See also: Communications of the ACM, V14, pp.355-356(1971)
+
+ NOTES:
+
+ 1) A may contain the collocation polynomial
+ 2) If MAXIT is exceeded, REF contains a new reference set
+ 3) M, EPSH and REF can be altered during the execution
+ 4) To keep consistency to the original code , EPSH can be
+ negative. However, the use of REF0 is *NOT* determined by
+ EPSH< 0, but only by its inclusion as an input parameter.
+
+ Some parts of the code can still take advantage of vectorization.
+
+ Revision 1.0 from 1998 is a direct human translation of
+ the FORTRAN code http://www.netlib.org/toms/501
+ Revision 1.1 is a clean-up and technical update.
+ Tested on MATLAB Version 7.11.0.584 (R2010b) and
+ GNU Octave Version 3.2.4
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 73
+ function [A,REF,HMAX,H,R,EQUAL] = polyfitinf(M,N,K,X,Y,EPSH,MAXIT,REF0)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+powell
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2656
+ -- Function File: [ P, OBJ_VALUE, CONVERGENCE, ITERS, NEVS] = powell
+ (F, P0, CONTROL)
+ Multidimensional minimization (direction-set method). Implements a
+ direction-set (Powell's) method for multidimensional minimization
+ of a function without calculation of the gradient [1, 2]
+
+Arguments
+---------
+
+ * F: name of function to minimize (string or handle), which
+ should accept one input variable (see example for how to pass
+ on additional input arguments)
+
+ * P0: An initial value of the function argument to minimize
+
+ * OPTIONS: an optional structure, which can be generated by
+ optimset, with some or all of the following fields:
+ - MaxIter: maximum iterations (positive integer, or -1
+ or Inf for unlimited (default))
+
+ - TolFun: minimum amount by which function value must
+ decrease in each iteration to continue (default is 1E-8)
+
+ - MaxFunEvals: maximum function evaluations (positive
+ integer, or -1 or Inf for unlimited (default))
+
+ - SearchDirections: an n*n matrix whose columns contain
+ the initial set of (presumably orthogonal) directions to
+ minimize along, where n is the number of elements in the
+ argument to be minimized for; or an n*1 vector of
+ magnitudes for the initial directions (defaults to the
+ set of unit direction vectors)
+
+Examples
+--------
+
+ y = @(x, s) x(1) ^ 2 + x(2) ^ 2 + s;
+ o = optimset('MaxIter', 100, 'TolFun', 1E-10);
+ s = 1;
+ [x_optim, y_min, conv, iters, nevs] = powell(@(x) y(x, s), [1 0.5], o); %pass y wrapped in an anonymous function so that all other arguments to y, which are held constant, are set
+ %should return something like x_optim = [4E-14 3E-14], y_min = 1, conv = 1, iters = 2, nevs = 24
+
+
+Returns:
+--------
+
+ * P: the minimizing value of the function argument
+
+ * OBJ_VALUE: the value of F() at P
+
+ * CONVERGENCE: 1 if normal convergence, 0 if not
+
+ * ITERS: number of iterations performed
+
+ * NEVS: number of function evaluations
+
+References
+----------
+
+ 1. Powell MJD (1964), An efficient method for finding the
+ minimum of a function of several variables without
+ calculating derivatives, `Computer Journal', 7 :155-162
+
+ 2. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP
+ (1992). `Numerical Recipes in Fortran: The Art of Scientific
+ Computing' (2nd Ed.). New York: Cambridge University Press
+ (Section 10.5)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 53
+Multidimensional minimization (direction-set method).
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 13
+residmin_stat
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 3188
+ -- Function File: INFO = residmin_stat (F, P, SETTINGS)
+ Frontend for computation of statistics for a residual-based
+ minimization.
+
+ SETTINGS is a structure whose fields can be set by `optimset' with
+ Octave versions 3.3.55 or greater; with older Octave versions, the
+ fields must be set directly and in the correct case. With SETTINGS
+ the computation of certain statistics is requested by setting the
+ fields `ret_<name_of_statistic>' to `true'. The respective
+ statistics will be returned in a structure as fields with name
+ `<name_of_statistic>'. Depending on the requested statistic and on
+ the additional information provided in SETTINGS, F and P may be
+ empty. Otherwise, F is the model function of an optimization (the
+ interface of F is described e.g. in `nonlin_residmin', please see
+ there), and P is a real column vector with parameters resulting
+ from the same optimization.
+
+ Currently, the following statistics (or general information) can be
+ requested:
+
+ `dfdp': Jacobian of model function with respect to parameters.
+
+ `covd': Covariance matrix of data (typically guessed by applying a
+ factor to the covariance matrix of the residuals).
+
+ `covp': Covariance matrix of final parameters.
+
+ `corp': Correlation matrix of final parameters.
+
+ Further SETTINGS
+
+ The functionality of the interface is similar to
+ `nonlin_residmin'. In particular, structure-based, possibly
+ non-scalar, parameters and flagging parameters as fixed are
+ possible. The following settings have the same meaning as in
+ `nonlin_residmin' (please refer to there): `param_order',
+ `param_dims', `f_pstruct', `dfdp_pstruct', `diffp',
+ `diff_onesided', `complex_step_derivative', `cstep', `fixed', and
+ `weights'. Similarly, `param_config' can be used, but only with
+ fields corresponding to the settings `fixed', `diffp', and
+ `diff_onesided'.
+
+ `dfdp' can be set in the same way as in `nonlin_residmin', but
+ alternatively may already contain the computed Jacobian of the
+ model function at the final parameters in matrix- or
+ structure-form. Users may pass information on the result of the
+ optimization in `residuals' (self-explaining) and `covd'
+ (covariance matrix of data). In many cases the type of objective
+ function of the optimization must be specified in `objf';
+ currently, there is only a backend for the type "wls" (weighted
+ least squares).
+
+ Backend-specific information
+
+ The backend for `objf == "wls"' (currently the only backend)
+ computes `cord' (due to user request or as a prerequisite for
+ `covp' and `corp') as a diagonal matrix by assuming that the
+ variances of data points are proportional to the reciprocal of the
+ squared `weights' and guessing the factor of proportionality from
+ the residuals. If `covp' is not defined (e.g. because the Jacobian
+ has no full rank), it makes an attempt to still compute its
+ uniquely defined elements, if any, and to find the additional
+ defined elements (being `1' or `-1'), if any, in `corp'.
+
+ See also: curvefit_stat
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 73
+Frontend for computation of statistics for a residual-based
+minimization.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+rosenbrock
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 195
+ Rosenbrock function - used to create example obj. fns.
+
+ Function value and gradient vector of the rosenbrock function
+ The minimizer is at the vector (1,1,..,1),
+ and the minimized value is 0.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 50
+ Rosenbrock function - used to create example obj.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 13
+samin_example
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 16
+ dimensionality
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 16
+ dimensionality
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 14
+test_fminunc_1
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 92
+ Plain run, just to make sure ######################################
+ Minimum wrt 'x' is y0
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+ Plain run, just to make sure ######################################
+ Minimum wr
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+test_min_1
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 63
+ [x,v,niter] = feval (optim_func, "testfunc","dtestf", xinit);
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 63
+ [x,v,niter] = feval (optim_func, "testfunc","dtestf", xinit);
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+test_min_2
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 60
+ [xlev,vlev,nlev] = feval(optim_func, "ff", "dff", xinit) ;
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 60
+ [xlev,vlev,nlev] = feval(optim_func, "ff", "dff", xinit) ;
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+test_min_3
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 166
+ [xlev,vlev,nlev] = feval (optim_func, "ff", "dff", xinit, "extra", extra) ;
+ [xlev,vlev,nlev] = feval \
+ (optim_func, "ff", "dff", list (xinit, obsmat, obses));
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+ [xlev,vlev,nlev] = feval (optim_func, "ff", "dff", xinit, "extra", extra) ;
+ [x
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 10
+test_min_4
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 173
+ Plain run, just to make sure ######################################
+ Minimum wrt 'x' is y0
+ [xlev,vlev,nlev] = feval (optim_func, "ff", "dff", {x0,y0,1});
+ ctl.df = "dff";
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+ Plain run, just to make sure ######################################
+ Minimum wr
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 15
+test_minimize_1
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 92
+ Plain run, just to make sure ######################################
+ Minimum wrt 'x' is y0
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+ Plain run, just to make sure ######################################
+ Minimum wr
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 22
+test_nelder_mead_min_1
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 29
+ Use vanilla nelder_mead_min
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 29
+ Use vanilla nelder_mead_min
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 22
+test_nelder_mead_min_2
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 70
+
+ Test using volume #################################################
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 70
+
+ Test using volume #################################################
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 13
+test_wpolyfit
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 34
+ x y dy
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 34
+ x y dy
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+vfzero
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1964
+ -- Function File: vfzero (FUN, X0)
+ -- Function File: vfzero (FUN, X0, OPTIONS)
+ -- Function File: [X, FVAL, INFO, OUTPUT] = vfzero (...)
+ A variant of `fzero'. Finds a zero of a vector-valued multivariate
+ function where each output element only depends on the input
+ element with the same index (so the Jacobian is diagonal).
+
+ FUN should be a handle or name of a function returning a column
+ vector. X0 should be a two-column matrix, each row specifying two
+ points which bracket a zero of the respective output element of
+ FUN.
+
+ If X0 is a single-column matrix then several nearby and distant
+ values are probed in an attempt to obtain a valid bracketing. If
+ this is not successful, the function fails. OPTIONS is a structure
+ specifying additional options. Currently, `vfzero' recognizes
+ these options: `"FunValCheck"', `"OutputFcn"', `"TolX"',
+ `"MaxIter"', `"MaxFunEvals"'. For a description of these options,
+ see *note optimset: doc-optimset.
+
+ On exit, the function returns X, the approximate zero and FVAL,
+ the function value thereof. INFO is a column vector of exit flags
+ that can have these values:
+
+ * 1 The algorithm converged to a solution.
+
+ * 0 Maximum number of iterations or function evaluations has
+ been reached.
+
+ * -1 The algorithm has been terminated from user output
+ function.
+
+ * -5 The algorithm may have converged to a singular point.
+
+ OUTPUT is a structure containing runtime information about the
+ `fzero' algorithm. Fields in the structure are:
+
+ * iterations Number of iterations through loop.
+
+ * nfev Number of function evaluations.
+
+ * bracketx A two-column matrix with the final bracketing of the
+ zero along the x-axis.
+
+ * brackety A two-column matrix with the final bracketing of the
+ zero along the y-axis.
+
+ See also: optimset, fsolve
+
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 21
+A variant of `fzero'.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 8
+wpolyfit
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 2922
+ -- Function File: [P, S] = wpolyfit (X, Y, DY, N)
+ Return the coefficients of a polynomial P(X) of degree N that
+ minimizes `sumsq (p(x(i)) - y(i))', to best fit the data in the
+ least squares sense. The standard error on the observations Y if
+ present are given in DY.
+
+ The returned value P contains the polynomial coefficients suitable
+ for use in the function polyval. The structure S returns
+ information necessary to compute uncertainty in the model.
+
+ To compute the predicted values of y with uncertainty use
+ [y,dy] = polyconf(p,x,s,'ci');
+ You can see the effects of different confidence intervals and
+ prediction intervals by calling the wpolyfit internal plot
+ function with your fit:
+ feval('wpolyfit:plt',x,y,dy,p,s,0.05,'pi')
+ Use DY=[] if uncertainty is unknown.
+
+ You can use a chi^2 test to reject the polynomial fit:
+ p = 1-chi2cdf(s.normr^2,s.df);
+ p is the probability of seeing a chi^2 value higher than that which
+ was observed assuming the data are normally distributed around the
+ fit. If p < 0.01, you can reject the fit at the 1% level.
+
+ You can use an F test to determine if a higher order polynomial
+ improves the fit:
+ [poly1,S1] = wpolyfit(x,y,dy,n);
+ [poly2,S2] = wpolyfit(x,y,dy,n+1);
+ F = (S1.normr^2 - S2.normr^2)/(S1.df-S2.df)/(S2.normr^2/S2.df);
+ p = 1-f_cdf(F,S1.df-S2.df,S2.df);
+ p is the probability of observing the improvement in chi^2 obtained
+ by adding the extra parameter to the fit. If p < 0.01, you can
+ reject the lower order polynomial at the 1% level.
+
+ You can estimate the uncertainty in the polynomial coefficients
+ themselves using
+ dp = sqrt(sumsq(inv(s.R'))'/s.df)*s.normr;
+ but the high degree of covariance amongst them makes this a
+ questionable operation.
+
+ -- Function File: [P, S, MU] = wpolyfit (...)
+ If an additional output `mu = [mean(x),std(x)]' is requested then
+ the X values are centered and normalized prior to computing the
+ fit. This will give more stable numerical results. To compute a
+ predicted Y from the returned model use `y = polyval(p,
+ (x-mu(1))/mu(2)'
+
+ -- Function File: wpolyfit (...)
+ If no output arguments are requested, then wpolyfit plots the data,
+ the fitted line and polynomials defining the standard error range.
+
+ Example
+ x = linspace(0,4,20);
+ dy = (1+rand(size(x)))/2;
+ y = polyval([2,3,1],x) + dy.*randn(size(x));
+ wpolyfit(x,y,dy,2);
+
+ -- Function File: wpolyfit (..., 'origin')
+ If 'origin' is specified, then the fitted polynomial will go
+ through the origin. This is generally ill-advised. Use with
+ caution.
+
+ Hocking, RR (2003). Methods and Applications of Linear Models.
+ New Jersey: John Wiley and Sons, Inc.
+
+
+ See also: polyfit, polyconf
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 80
+Return the coefficients of a polynomial P(X) of degree N that minimizes
+`sumsq (
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 11
+wrap_f_dfdp
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 387
+ [ret1, ret2] = wrap_f_dfdp (f, dfdp, varargin)
+
+ f and dftp should be the objective function (or "model function" in
+ curve fitting) and its jacobian, respectively, of an optimization
+ problem. ret1: f (varagin{:}), ret2: dfdp (varargin{:}). ret2 is
+ only computed if more than one output argument is given. This
+ manner of calling f and dfdp is needed by some optimization
+ functions.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 48
+ [ret1, ret2] = wrap_f_dfdp (f, dfdp, varargin)
+
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 6
+wsolve
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 1736
+ [x,s] = wsolve(A,y,dy)
+
+ Solve a potentially over-determined system with uncertainty in
+ the values.
+
+ A x = y +/- dy
+
+ Use QR decomposition for increased accuracy. Estimate the
+ uncertainty for the solution from the scatter in the data.
+
+ The returned structure s contains
+
+ normr = sqrt( A x - y ), weighted by dy
+ R such that R'R = A'A
+ df = n-p, n = rows of A, p = columns of A
+
+ See polyconf for details on how to use s to compute dy.
+ The covariance matrix is inv(R'*R). If you know that the
+ parameters are independent, then uncertainty is given by
+ the diagonal of the covariance matrix, or
+
+ dx = sqrt(N*sumsq(inv(s.R'))')
+
+ where N = normr^2/df, or N = 1 if df = 0.
+
+ Example 1: weighted system
+
+ A=[1,2,3;2,1,3;1,1,1]; xin=[1;2;3];
+ dy=[0.2;0.01;0.1]; y=A*xin+randn(size(dy)).*dy;
+ [x,s] = wsolve(A,y,dy);
+ dx = sqrt(sumsq(inv(s.R'))');
+ res = [xin, x, dx]
+
+ Example 2: weighted overdetermined system y = x1 + 2*x2 + 3*x3 + e
+
+ A = fullfact([3,3,3]); xin=[1;2;3];
+ y = A*xin; dy = rand(size(y))/50; y+=dy.*randn(size(y));
+ [x,s] = wsolve(A,y,dy);
+ dx = s.normr*sqrt(sumsq(inv(s.R'))'/s.df);
+ res = [xin, x, dx]
+
+ Note there is a counter-intuitive result that scaling the
+ uncertainty in the data does not affect the uncertainty in
+ the fit. Indeed, if you perform a monte carlo simulation
+ with x,y datasets selected from a normal distribution centered
+ on y with width 10*dy instead of dy you will see that the
+ variance in the parameters indeed increases by a factor of 100.
+ However, if the error bars really do increase by a factor of 10
+ you should expect a corresponding increase in the scatter of
+ the data, which will increase the variance computed by the fit.
+
+
+
+# name: <cell-element>
+# type: sq_string
+# elements: 1
+# length: 24
+ [x,s] = wsolve(A,y,dy)
+
+
+
+
+
+