This is a compilation of answers to questions frequently asked by `sparseLM` users.

- Q1 -- What is `sparseLM`?
- Q2 -- What types of minimization problems can `sparseLM` solve?
- Q3 -- Where can I find more information on nonlinear regression/data fitting?
- Q4 -- Who is using `sparseLM`?
- Q5 -- What do I need to use `sparseLM`?
- Q6 -- How can I compile `sparseLM`?
- Q7 -- Where are the arguments of the various routines explained in detail?
- Q8 -- How can I choose the direct sparse solver to be used by `sparseLM`?
- Q9 -- How are Jacobian matrices stored?
- Q10 -- Is there a simple interface for directly assigning non-zero Jacobian elements?
- Q11 -- How can I pass my own data for use by the objective function and its Jacobian?
- Q12 -- Can I use `sparseLM` for problems whose Jacobian is unavailable or difficult/expensive to compute?
- Q13 -- How can I obtain an analytic Jacobian for my minimization problem?
- Q14 -- How can I verify the correctness of a Jacobian?
- Q15 -- How should I choose the starting point for a minimization?
- Q16 -- How can I force an early termination of the minimization?
- Q17 -- How can I force certain variables to remain fixed during the minimization?
- Q18 -- How can I deal with badly scaled problems?
- Q19 -- How can I solve a weighted non-linear least squares problem using `sparseLM`?
- Q20 -- Where can I find an example of using `sparseLM` for curve fitting?
- Q21 -- Why am I getting a link error for `MAIN__`?
- Q22 -- How can I determine what kind of BLAS library I have installed?
- Q23 -- Can I use `sparseLM` from matlab?
- Q24 -- How can I compile `sparseLM`'s MEX-file?
- Q25 -- Why does `sparseLM`'s MEX-file complain about the number of nonzeros in the Jacobian?
- Q26 -- Can I include `sparseLM` in a commercial product?

**Q1 -- What is `sparseLM`?**

`sparseLM` is a C software package implementing a variant of the Levenberg-Marquardt algorithm that can handle arbitrarily sparse non-linear least squares minimization problems. `sparseLM` is distributed under the GNU GPL.

**Q2 -- What types of minimization problems can `sparseLM` solve?**

`sparseLM` supports arbitrarily sparse unconstrained nonlinear least squares minimization with analytic or approximate Jacobians. Such problems typically arise from the solution of systems of nonlinear equations or in data fitting applications.

**Q3 -- Where can I find more information on nonlinear regression/data fitting?**

Nonlinear regression, i.e. the problem of fitting a nonlinear model to your data, is far too broad a topic to be covered here. A useful introduction can be found in these notes from the Technical University of Denmark. Another free source is http://www.curvefit.com/; yet another option is to consult sections 15.4 and 15.5 of the Numerical Recipes book.

**Q4 -- Who is using `sparseLM`?**

To be added...

**Q5 -- What do I need to use `sparseLM`?**

To be able to use `sparseLM`, you should have LAPACK/BLAS or an equivalent vendor library installed. On Intel processors, MKL is highly recommended. Netlib hosts a free f2c'ed version at http://www.netlib.org/clapack. The precompiled MSWin libraries at that site have been reported to be broken, so it is suggested that you build them yourself. Alternatively, precompiled MSWin LAPACK/BLAS libraries can be found here or here. In addition to LAPACK, `sparseLM` requires at least one direct sparse linear solver. The recommended choice is the CHOLMOD sparse Cholesky package from UF. CHOLMOD comes with UF's SuiteSparse archive and should be installed before attempting to build `sparseLM`. Instructions for building CHOLMOD under MS Windows are here, whereas precompiled libraries are here.

If you are using Intel's MKL, the PARDISO solver it includes is another excellent choice, with the additional advantage of not requiring any extra compilation. Please refer to `README.txt` for more details.

**Q6 -- How can I compile `sparseLM`?**

First, in file `solvers.mk` you should specify the direct sparse solvers you have available. Note that, in order to avoid copyright complications, no direct sparse solver is distributed with `sparseLM`. Thus, you should download and install the solver(s) of your choice using the links below:

- CHOLMOD (recommended solver)
- CSparse
- LDL
- UMFPACK
- MA77, MA57, MA47 or MA27 from the HSL library (requires registration)
- PARDISO (commercial solver, also in Intel's MKL)
- DSS (simplified interface to PARDISO in Intel's MKL, commercial)
- SuperLU
- TAUCS
- SPOOLES
- MUMPS (available on email request)

Unless stated otherwise, the solvers listed above are free for academic/personal use. Following the installation of the chosen solvers, their include paths should be defined in the Makefile. To successfully link your application against `sparseLM`, you should also provide the appropriate paths and linker switches for the libraries corresponding to the chosen sparse solvers. `sparseLM`'s tarfile contains makefiles for Unix/Linux using gcc (Makefile) or Intel's icc (Makefile.icc), and for MSWin using Visual Studio/Visual Studio Express (Makefile.vc). Please read the comments within those files for more information. Starting with version 1.3, `sparseLM` also includes configuration files for use with the cross-platform CMake build system.

**Q7 -- Where are the arguments of the various routines explained in detail?**

The source code of each function is preceded by detailed comments explaining the role of each argument. See file `splm.c`.

**Q8 -- How can I choose the direct sparse solver to be used by `sparseLM`?**

By setting the last element of the `opts` argument to one of `SPLM_CHOLMOD`, `SPLM_CSPARSE`, `SPLM_LDL`, `SPLM_UMFPACK`, `SPLM_MA77`, `SPLM_MA57`, `SPLM_MA47`, `SPLM_MA27`, `SPLM_PARDISO`, `SPLM_DSS`, `SPLM_SuperLU`, `SPLM_TAUCS`, `SPLM_SPOOLES` or `SPLM_MUMPS`, the corresponding direct sparse solver will be used for solving the sparse augmented normal equations. Notice, however, that for a sparse solver to be usable, it must first be installed and made known to `sparseLM` during the latter's installation. See also Q6.

**Q9 -- How are Jacobian matrices stored?**

Sparse Jacobians can be provided either in compressed row storage (CRS) or compressed column storage (CCS) format, with zero-based indexing. A simpler (albeit less efficient) approach is to create the Jacobian in sparse triplet format and convert it to CCS using the auxiliary function

`splm_stm2ccsm()` (see also Q10).

**Q10 -- Is there a simple interface for directly assigning non-zero Jacobian elements?**

If you do not want to deal with the details of the CRS or CCS format for defining the Jacobian, `sparseLM` offers the alternative of specifying the latter in the sparse triplet format. This format represents a sparse matrix with a list of triplets (i, j, v), each of which specifies a non-zero element with value v at row i and column j. `sparseLM` offers several utility functions, whose names start with the `splm_stm_` prefix, for dealing with matrices in sparse triplet format. `splmdemo.c` includes working examples of using the sparse triplet format functions in practice. Note, though, that use of the sparse triplet format comes with some performance penalty, as the Jacobian still needs to be converted to the CCS format eventually.

**Q11 -- How can I pass my own data for use by the objective function and its Jacobian?**

Assume that you want to pass two arrays, one of doubles and one of integers. A quick but not very elegant solution is to use global variables. The simplest way to avoid globals is to declare a structure as

```c
struct mydata{
    double dar[XXX];
    int iar[YYY];
};
```

where XXX and YYY denote the appropriate array sizes. Then, define a structure variable with

```c
struct mydata data;
```

and fill it:

```c
data.dar[0]=7.0; data.iar[0]=-17; // etc.
```

Following this, call the appropriate `sparseLM` routine, passing it the address of `data` as the `adata` argument, e.g.

```c
ret=sparselm_dercrs(func, fjac, ..., (void *)&data);
```

Your `func` and `fjac` routines should then interpret the supplied data using type casting:

```c
struct mydata *dptr;

dptr=(struct mydata *)adata; // adata is passed as void *
// the supplied data can now be accessed as dptr->dar[0], etc.
```

**Q12 -- Can I use `sparseLM` for problems whose Jacobian is unavailable or difficult/expensive to compute?**

Yes. `sparseLM` includes routines that approximate the Jacobian using finite differences. These routines have `_dif` in their names, as opposed to those using analytic Jacobians, which have `_der`. Their caller has the option of either a) providing the non-zero pattern of the Jacobian (i.e., which observation depends on which variable), or b) instructing them to discover the Jacobian's non-zero pattern automatically. The second option cannot be made completely foolproof, hence its use is generally discouraged. It should also be noted that, even when the non-zero pattern is supplied by the caller, minimizing with approximate Jacobians typically takes longer to converge than when the Jacobian is supplied analytically (see also Q13).

**Q13 -- How can I obtain an analytic Jacobian for my minimization problem?**

Employing analytic Jacobians is generally recommended due to faster convergence. When the objective function is too complicated for hand-coding its Jacobian, the latter can be generated through the use of automatic differentiation tools such as those listed here, which work by systematically applying the chain rule to a given code segment.

**Q14 -- How can I verify the correctness of a Jacobian?**

`sparseLM` includes the functions `sparselm_chkjaccrs()` and `sparselm_chkjacccs()`, which allow you to check the Jacobian of a function for consistency with the function itself. A working example of using them can be found within comments in `splmdemo.c`, whereas more details are in the comments preceding `sparselm_chkjac_core()` in `splm_misc.c`.

**Q15 -- How should I choose the starting point for a minimization?**

Selecting the starting point for a particular minimization problem is largely problem-specific. In some cases, your background knowledge of the problem may guide you to reasonable initial values for the problem's parameters. In other cases, you might be able to define a simplified, linear version of the non-linear problem under consideration, whose solution will serve as the starting point for the non-linear one. You might even use random values for your starting point. Beware, however, that the success of this last approach is highly dependent on the nature of your problem: if it is highly non-linear or has several local minima, it is very likely that you will end up with unexpected surprises.

**Q16 -- How can I force an early termination of the minimization?**

In certain applications, a minimization problem can be considered satisfactorily solved as soon as the least squares error drops below some value. Although more iterations could reduce this error further, the extra reduction is not critical for the problem at hand and might take considerable computation time to achieve. In such cases, by setting `opts[3]` to a suitable value, the user can instruct `sparseLM` to return (with code `6`) as soon as the least squares error becomes sufficiently small.

**Q17 -- How can I force certain variables to remain fixed during the minimization?**

It is sometimes convenient to treat certain parameters of a minimization problem as constants, in other words to leave them unchanged during the minimization.

`sparseLM` offers two alternatives for achieving this:

Assuming that the parameter vector is arranged so that its `N` elements that should remain fixed appear at the front, passing `N` as the value of the argument `nconvars` in the minimization routines will instruct `sparseLM` to refrain from modifying the first `N` parameters.

In cases where rearranging the parameter vector is not practical or desirable, one can do the following. For each parameter that should remain fixed, the corresponding column of the Jacobian should be made to contain exactly one element (at any row) whose value equals zero. In effect, this sets the column in question to zero; of course, more than one element could be specified as being zero, although this would only increase computations. Note that specifying a zero column in the Jacobian as just explained is not equivalent to specifying an empty column, i.e. omitting all its elements; in the latter case, `sparseLM` will complain with an error message and terminate. Compared to the first alternative, the second one entails a higher computational cost, since the zero elements participate in the calculations involving the Jacobian, whereas in the first case the parameter vector is internally truncated so that the constant parameters are excluded from the computations.

**Q18 -- How can I deal with badly scaled problems?**

Practical applications of minimization often involve variables that vary greatly in magnitude. Such minimization problems are characterized as badly scaled and, unless adjusted, may exhibit poor convergence. `sparseLM` does not include an explicit mechanism for dealing with badly scaled problems, since such mechanisms are, to a large extent, problem-specific. Nevertheless, the user can account for bad scaling by using her knowledge of the problem domain to rescale the independent variables, i.e. change their units so that they all have approximately the same range. See chapter 7 of Dennis & Schnabel for more details.

**Q19 -- How can I solve a weighted non-linear least squares problem using `sparseLM`?**

Currently, `sparseLM` does not directly support weighted non-linear least squares problems. However, it is quite straightforward to program this for a particular problem, as follows. Assume that you want to minimize ||e||_C = e^T*C*e, with e being your error vector (e = x - f(p)) and C a symmetric positive-definite weight matrix. Then, you can equivalently minimize s^T*s without weights, where s = W*e = W*x - W*f(p) and W is the Cholesky factor of C, i.e. C = W^T*W. This is because s^T*s = e^T*W^T*W*e = e^T*C*e = ||e||_C. In the case that all off-diagonal elements of C are zero, W is simply the diagonal matrix obtained by taking the square root of C's diagonal elements. See also the discussion here.

**Q20 -- Where can I find an example of using `sparseLM` for curve fitting?**

The demo program `splmdemo.c` includes several working examples of sparse non-linear least squares minimization.

More to be added...

**Q21 -- Why am I getting a link error for `MAIN__`?**

When linking

`sparseLM` into your application, certain LAPACK/BLAS libraries cause the linker to fail with an error like *undefined reference to `MAIN__'*, or similar. This happens because these libraries superfluously call a Fortran main() routine for initialization. There are several workarounds for this problem:

- If using `gcc`, pass the option `-u MAIN__` to the linker.
- If compiling under Windows, link against these precompiled libraries.
- Define a dummy MAIN__ function as `int MAIN__(int argc, char **argv) { abort(); return 1; }`

See also the discussion here.
**Q22 -- How can I determine what kind of BLAS library I have installed?**

On Unix, try `nm libblas.a | grep gemm`. Under Windows, use `dumpbin /all blas.lib | more`. Inspect the output of those commands for prefixes such as `f2c_` or `cblas_` in front of the `gemm` routines.

**Q23 -- Can I use `sparseLM` from matlab?**

Yes. `sparseLM` includes a MEX-file that can be used to interface it with matlab. See `sparselm.c` in the `matlab` directory of `sparseLM`'s source distribution. Documentation for using `sparseLM` with matlab is provided in file `sparselm.m`. Assuming that the MEX-file has been compiled (see also Q24), file `splmdemo.m` contains examples of its use with matlab; try it with the command `matlab < splmdemo.m`. The MEX-file might also work with GNU octave, but I have not verified this myself; if you succeed with it, please let me know.

**Q24 -- How can I compile `sparseLM`'s MEX-file?**

The answer depends on your operating system/compiler. The `Makefile` in the `matlab` directory works with GCC under Linux, whereas `Makefile.win` works with MSVC under Windows. You might also need to run "mex -setup" from the matlab or OS command prompt to configure your system to build MEX-files. In any case, the provided makefiles can be used as a starting point for compiling on your own system. If building under Windows, you might also consider using gnumex.

*A word of caution:* starting with version 7.3, matlab can handle 64 bit addressing and its implementation of sparse matrices has been changed accordingly. Therefore, the variable `HAVE_LARGE_ARRAYS` should be set to `yes` in the employed makefile. Also, since `sparseLM` converts row and column sparse matrix indices to signed integers for use by the sparse linear solvers, index wrap-around problems can potentially arise when working with very large sparse matrices.

**Q25 -- Why does `sparseLM`'s MEX-file complain about the number of nonzeros in the Jacobian?**

Assuming that a user error in the coding of the Jacobian has been excluded, this error is due to the following reason. Matlab's implementation of sparse matrices is such that, if a previously nonzero element becomes numerically equal to zero, the matrix's structure is automatically adjusted so that no storage is wasted for representing the zero element. `sparseLM`, on the other hand, assumes a static sparse structure for the Jacobian throughout a minimization, for efficiency reasons. Therefore, if at least one element of the Jacobian changes to zero during the minimization, the MEX-file will complain about not finding a constant number of nonzeros, with a message like `MATLAB Jacobian does not have the expected number of nonzeros`. To overcome this issue, the MEX-file supports the `'anzp'` Jacobian type which, apart from allowing the user to provide a function for analytically computing the Jacobian, also allows specifying a read-only sparse matrix that explicitly defines the Jacobian's fixed nonzero pattern. More details can be found in `sparselm.m`.

**Q26 -- Can I include `sparseLM` in a commercial product?**

`sparseLM` is distributed under the GPL, which allows you to modify the original code into a new program and then sell copies of the modified program commercially, but only under the terms of the GPL. Thus, you must make the source code available to the users of the program as described in the GPL, and they must be allowed to redistribute and modify it as described in the GPL. Clearly, such requirements conflict with using `sparseLM` as part of a proprietary application. However, it is possible to obtain a paid license for proprietary commercial use under terms different from those of the GPL. Please contact the author for further information on such licensing. More answers to questions related to the GPL license can be found here.

Note also that most sparse linear solvers used by `sparseLM` are free for personal/academic use only. Their commercial use is not free and cannot be covered by `sparseLM`'s commercial license. It is thus essential that, for any sparse linear solver to be combined with `sparseLM` commercially, a separate commercial license is negotiated with its copyright holders. Please refer to the solvers' documentation for more details.