NAG Library Function Document
nag_opt_sparse_convex_qp_solve (e04nqc)
Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default 
settings for all of the optional parameters, you need only read Sections 1 to 10 of this document.  If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm, to Section 12 for a detailed description of the specification of the optional parameters and to Section 13 for a detailed description of the monitoring information produced by the function.
 
 
1
 Purpose
nag_opt_sparse_convex_qp_solve (e04nqc) solves sparse linear programming or convex quadratic programming problems. The initialization function 
nag_opt_sparse_convex_qp_init (e04npc) must have been called before calling 
nag_opt_sparse_convex_qp_solve (e04nqc).
 
 
2
 Specification
| 
| #include <nag.h> |  
| #include <nage04.h> |  
| void | nag_opt_sparse_convex_qp_solve (Nag_Start start,
void (*qphx)(Integer ncolh, const double x[], double hx[], Integer nstate, Nag_Comm *comm),
Integer m,
Integer n,
Integer ne,
Integer nname,
Integer lenc,
Integer ncolh,
Integer iobj,
double objadd,
const char *prob,
const double acol[],
const Integer inda[],
const Integer loca[],
const double bl[],
const double bu[],
const double c[],
const char *names[],
const Integer helast[],
Integer hs[],
double x[],
double pi[],
double rc[],
Integer *ns,
Integer *ninf,
double *sinf,
double *obj,
Nag_E04State *state,
Nag_Comm *comm, 
NagError *fail) |  | 
Before calling 
nag_opt_sparse_convex_qp_solve (e04nqc), or one of the option setting functions, 
nag_opt_sparse_convex_qp_init (e04npc) must be called.
| 
| #include <nag.h> |  
| #include <nage04.h> |  
| void | nag_opt_sparse_convex_qp_init (Nag_E04State *state,
NagError *fail) |  | 
After calling 
nag_opt_sparse_convex_qp_solve (e04nqc) you can call one or both of the option-getting functions in this suite 
to obtain the current value of an optional parameter.
 
3
 Description
nag_opt_sparse_convex_qp_solve (e04nqc) is designed to solve large-scale linear or quadratic programming problems of the form:
   minimize  q(x)   subject to   l ≤ ( x, Ax ) ≤ u,                    (1)
      x
where x is an n-vector of variables, l and u are constant lower and upper bounds, A is an m by n sparse matrix and q(x) is a linear or quadratic objective function that may be specified in a variety of ways, depending upon the particular problem being solved. The optional parameter Maximize may be used to specify a problem in which q(x) is maximized instead of minimized.
Upper and lower bounds are specified for all variables and constraints. This form allows full generality in specifying various types of constraint. In particular, the jth constraint may be defined as an equality by setting l_j = u_j. If certain bounds are not present, the associated elements of l or u may be set to special values that are treated as −∞ or +∞.
The possible forms for the function q(x) are summarised in 
Table 1. The most general form for q(x) is
   q(x) = q_0 + c^T x + (1/2) x^T H x,
where q_0 is a constant, c is a constant n-vector and H is a constant symmetric n by n matrix with elements {H_ij}. In this form, q is a quadratic function of x and 
(1) is known as a 
quadratic program (QP). 
nag_opt_sparse_convex_qp_solve (e04nqc) is suitable for all 
convex quadratic programs. The defining feature of a 
convex QP is that the matrix H must be 
positive semidefinite, i.e., it must satisfy x^T H x ≥ 0 for all x. If not, q(x) is nonconvex and 
nag_opt_sparse_convex_qp_solve (e04nqc) will terminate with the error indicator 
 NE_HESS_INDEF. If q(x) is nonconvex it may be more appropriate to call 
nag_opt_sparse_nlp_solve (e04vhc) instead.
 
  
  
  
  
   
    | Problem type | Objective function q | Hessian matrix H | 
    | FP | Not applicable | Not applicable | 
    | LP | q_0 + c^T x | Not applicable | 
    | QP | q_0 + c^T x + (1/2) x^T H x | Symmetric positive semidefinite | 
Table 1
Choices for the objective function q(x)
 
If H = 0, then q(x) = q_0 + c^T x and the problem is known as a 
linear program (LP). In this case, rather than defining an H with zero elements, you can define H to have no columns by setting ncolh = 0 (see 
Section 5).
If ncolh = 0, iobj = 0 and lenc = 0, there is no objective function and the problem is a feasible point problem (FP), which is equivalent to finding a point that satisfies the constraints on x. In the situation where no feasible point exists, several options are available for finding a point that minimizes the constraint violations (see the description of the elastic-mode optional parameters in Section 12).
nag_opt_sparse_convex_qp_solve (e04nqc) is suitable for large LPs and QPs in which the matrix A is 
sparse, i.e., when the number of zero elements is sufficiently large that it is worthwhile using algorithms which avoid computations and storage involving zero elements. The matrix A is input to 
nag_opt_sparse_convex_qp_solve (e04nqc) by means of the three array arguments 
acol, 
inda and 
loca. This allows you to specify the pattern of nonzero elements in A.
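For example (a hedged sketch only, assuming the 1-based row-index and column-pointer conventions described for acol, inda and loca in Section 5), the 3 by 4 matrix
   A = ( 1  0  2  0 )
       ( 0  3  0  4 )
       ( 5  0  6  0 )
could be stored column by column as:
   /* Hedged sketch: compressed column storage of the 3 by 4 matrix above.  */
   Integer m = 3, n = 4, ne = 6;
   double  acol[6] = { 1.0, 5.0,  3.0,  2.0, 6.0,  4.0 };  /* nonzeros, column by column */
   Integer inda[6] = { 1,   3,    2,    1,   3,    2    };  /* row index of each nonzero  */
   Integer loca[5] = { 1, 3, 4, 6, 7 };  /* loca[j-1] is the (1-based) position in acol/inda
                                            of the start of column j; loca[n] = ne + 1      */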
nag_opt_sparse_convex_qp_solve (e04nqc) exploits structure in H by requiring H to be defined implicitly in a function 
that computes the product Hx for any given vector x. In many cases, the product Hx can be computed very efficiently for any given x, e.g., H may be a sparse matrix, or a sum of matrices of rank one.
For problems in which A can be treated as a 
dense matrix, it is usually more efficient to use 
nag_opt_lp (e04mfc), 
nag_opt_lin_lsq (e04ncc) or 
nag_opt_qp (e04nfc).
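As a hedged sketch of such a function (this is the user-supplied argument qphx described in Section 5; the five-argument prototype and the diagonal Hessian block shown here are illustrative assumptions to be checked against that specification):
   /* Hedged sketch of a user-supplied qphx.  It computes hx = H_1*y for an assumed
    * leading ncolh by ncolh Hessian block H_1 = 2I (all other elements of H zero).
    * NAG_CALL is the calling-convention macro used for NAG Library callbacks, and
    * nstate = 1 is assumed to flag the first call.
    */
   static void NAG_CALL qphx(Integer ncolh, const double x[], double hx[],
                             Integer nstate, Nag_Comm *comm)
   {
       Integer i;
       if (nstate == 1) {
           /* First call: any one-off setup (e.g., unpacking data from comm) could go here. */
       }
       for (i = 0; i < ncolh; ++i)
           hx[i] = 2.0 * x[i];      /* hx = H_1 * (first ncolh elements of x) */
   }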
There is considerable flexibility allowed in the definition of q(x) in 
Table 1. The vector c defining the linear term c^T x can be input in three ways: as a sparse row of A; as an explicit dense vector c; or as both a sparse row and an explicit vector (in which case, c^T x will be the sum of two linear terms). When stored in A, c is the 
iobjth row of A, which is known as the 
objective row. The objective row must always be a 
free row of A in the sense that its lower and upper bounds must be −∞ and +∞. Storing c as part of A is recommended if c is a sparse vector. Storing c as an explicit vector is recommended for a sequence of problems, each with a different objective (see arguments 
c and 
lenc).
The upper and lower bounds on the m elements of Ax are said to define the 
general constraints of the problem.  Internally, 
nag_opt_sparse_convex_qp_solve (e04nqc) converts the general constraints to equalities by introducing a set of 
slack variables s, where s = (s_1, s_2, ..., s_m)^T.  For example, the linear constraint 5 ≤ 2x_1 + 3x_2 ≤ +∞ is replaced by 2x_1 + 3x_2 − s_1 = 0, together with the bounded slack 5 ≤ s_1 ≤ +∞.  The problem defined by 
(1) can therefore be re-written in the following equivalent form:
   minimize  q(x)   subject to   Ax − s = 0,   l ≤ ( x, s ) ≤ u.
     x,s
Since the slack variables s are subject to the same upper and lower bounds as the elements of Ax, the bounds on Ax and x can simply be thought of as bounds on the combined vector (x, s).  (In order to indicate their special role in QP problems, the original variables x are sometimes known as ‘column variables’, and the slack variables s are known as ‘row variables’.)
Each LP or QP problem is solved using a two-phase iterative procedure (in which the general constraints are satisfied throughout): a feasibility phase (Phase 1), in which the sum of infeasibilities with respect to the bounds on x and s is minimized to find a feasible point that satisfies all constraints within a specified feasibility tolerance; and an optimality phase (Phase 2), in which q(x) is minimized (or maximized) by constructing a sequence of iterates that lies within the feasible region.
Phase 1 involves solving a linear program of the form
   Phase 1:   minimize   e^T (v + w)
              x,s,v,w
              subject to   Ax − s = 0,   l ≤ ( x, s ) + v − w ≤ u,   v ≥ 0,   w ≥ 0,
where e is a vector of ones, which is equivalent to minimizing the sum of the constraint violations. If the constraints are feasible (i.e., at least one feasible point exists), eventually a point will be found at which both 
v and 
w are zero. Then the associated value of (x, s) satisfies the original constraints and is used as the starting point for the Phase 2 iterations for minimizing q(x).
If the constraints are infeasible (i.e., v ≠ 0 or w ≠ 0 at the end of Phase 1), no solution exists for 
(1) and you have the option of either terminating or continuing in so-called  
elastic mode (see the discussion of the optional parameter 
Elastic Mode). In elastic mode, a ‘relaxed’ or ‘perturbed’ problem is solved in which q(x) is minimized while allowing some of the bounds to become ‘elastic’, i.e., to change from their specified values. Variables subject to elastic bounds are known as 
elastic variables. An elastic variable is free to violate one or both of its original upper or lower bounds. You are able to assign which bounds will become elastic if elastic mode is ever started (see the argument 
helast in 
Section 5).
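For example, the following fragment (a hedged sketch: the values 0 for ‘non-elastic’ and 3 for ‘may violate either bound’ are assumptions to be checked against the helast table in Section 5, and helast is assumed to have been allocated with n+m elements) marks every variable and slack as elastic except the first variable:
   /* Hedged sketch: choose which bounds may become elastic if elastic mode starts. */
   Integer j;
   for (j = 0; j < n + m; ++j)
       helast[j] = 3;       /* assumed value: may violate either bound            */
   helast[0] = 0;           /* assumed value: the first variable stays non-elastic */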
To make the relaxed problem meaningful, 
nag_opt_sparse_convex_qp_solve (e04nqc) minimizes q(x) while (in some sense) finding the ‘smallest’ violation of the elastic variables. In the situation where all the variables are elastic, the relaxed problem has the form
   Phase 2 (γ):   minimize   q(x) + γ e^T (v + w)
                  x,s,v,w
                  subject to   Ax − s = 0,   l ≤ ( x, s ) + v − w ≤ u,   v ≥ 0,   w ≥ 0,
where γ is a non-negative argument known as the 
elastic weight (see the description of the optional parameter 
Elastic Weight), and q(x) + γ e^T (v + w) is called the 
composite objective. In the more general situation where only a subset of the bounds are elastic, the 
v's and 
w's for the non-elastic bounds are fixed at zero.
The elastic weight can be chosen to make the composite objective behave like the original objective q(x), the sum of infeasibilities, or anything in-between. If γ = 0, nag_opt_sparse_convex_qp_solve (e04nqc) will attempt to minimize q(x) subject to the (true) upper and lower bounds on the non-elastic variables (and declare the problem infeasible if the non-elastic variables cannot be made feasible).
At the other extreme, choosing γ sufficiently large will have the effect of minimizing the sum of the violations of the elastic variables subject to the original constraints on the non-elastic variables. Choosing a large value of the elastic weight is useful for defining a ‘least-infeasible’ point for an infeasible problem.
In Phase 1 and elastic mode, all calculations involving v and w are done implicitly in the sense that an elastic variable x_j is allowed to violate its lower bound (say) and an explicit value of v_j can be recovered as v_j = l_j − x_j.
A constraint is said to be active or binding at x if the associated element of either x or Ax is equal to one of its upper or lower bounds.  Since an active constraint in Ax has its associated slack variable at a bound, the status of both simple and general upper and lower bounds can be conveniently described in terms of the status of the variables (x, s).  A variable is said to be nonbasic if it is temporarily fixed at its upper or lower bound.  It follows that regarding a general constraint as being active is equivalent to thinking of its associated slack as being nonbasic.
At each iteration of an active-set method, the constraints Ax − s = 0 are (conceptually) partitioned into the form
   B x_B + S x_S + N x_N = 0,
where 
x_N consists of the nonbasic elements of (x, s) and the 
basis matrix B is square and nonsingular.  The elements of 
x_B and 
x_S are called the 
basic and 
superbasic variables respectively; with 
x_N they are a permutation of the elements of x and s.  At a QP solution, the basic and superbasic variables will lie somewhere between their upper or lower bounds, while the nonbasic variables will be equal to one of their bounds.  At each iteration, 
x_S is regarded as a set of independent variables that are free to move in any desired direction, namely one that will improve the value of the objective function (or sum of infeasibilities).  The basic variables are then adjusted in order to ensure that 
(x, s) continues to satisfy 
Ax − s = 0.  The number of superbasic variables (n_S say) therefore indicates the number of degrees of freedom remaining after the constraints have been satisfied.  In broad terms, 
n_S is a measure of 
how nonlinear the problem is.  In particular, 
n_S will always be zero for FP and LP problems.
If it appears that no improvement can be made with the current definition of B, S and N, a nonbasic variable is selected to be added to S, and the process is repeated with the value of n_S increased by one.  At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of n_S is decreased by one.
Associated with each of the m equality constraints Ax − s = 0 is a dual variable π_i.  Similarly, each variable in (x, s) has an associated reduced gradient d_j (also known as a reduced cost).  The reduced gradients for the variables x are the quantities g − A^T π, where g is the gradient of the QP objective function, and the reduced gradients for the slack variables s are the dual variables π.  The QP subproblem is optimal if d_j ≥ 0 for all nonbasic variables at their lower bounds, d_j ≤ 0 for all nonbasic variables at their upper bounds, and d_j = 0 for all superbasic variables.  In practice, an approximate QP solution is found by slightly relaxing these conditions on d_j (see the description of the optional parameter Optimality Tolerance).
The process of computing and comparing reduced gradients is known as 
pricing (a term first introduced in the context of the simplex method for linear programming).  To ‘price’ a nonbasic variable x_j means that the reduced gradient d_j associated with the relevant active upper or lower bound on x_j is computed via the formula 
d_j = g_j − a_j^T π, where 
a_j is the 
jth column of ( A  −I ).  (The variable selected by such a process and the corresponding value of 
d_j (i.e., its reduced gradient) are the quantities 
+SBS and 
dj in the monitoring file output; see 
Section 9.1.)  If A has significantly more columns than rows (i.e., n ≫ m), pricing can be computationally expensive.  In this case, a strategy known as 
partial pricing can be used to compute and compare only a subset of the d_j's.
nag_opt_sparse_convex_qp_solve (e04nqc) is based on SQOPT, which is part of the SNOPT package described in 
Gill et al. (2005a).  It uses stable numerical methods throughout and includes a reliable basis package (for maintaining sparse 
LU factors of the basis matrix 
B), a practical anti-degeneracy procedure, efficient handling of linear constraints and bounds on the variables (by an active-set strategy), as well as automatic scaling of the constraints.  Further details can be found in 
Section 11.
 
 
4
 References
Fourer R (1982)  Solving staircase linear programs by the simplex method Math. Programming 23 274–313 
Gill P E and Murray W (1978)  Numerically stable methods for quadratic programming Math. Programming 14 349–372 
Gill P E, Murray W and Saunders M A (1995)  User's guide for QPOPT 1.0: a Fortran package for quadratic programming Report SOL 95-4 Department of Operations Research, Stanford University 
Gill P E, Murray W and Saunders M A (2005a)  Users' guide for SQOPT 7: a Fortran package for large-scale linear and quadratic programming 
Report NA 05-1 Department of Mathematics, University of California, San Diego 
http://www.ccom.ucsd.edu/~peg/papers/sqdoc7.pdf
Gill P E, Murray W and Saunders M A (2005b)  Users' guide for SNOPT 7.1: a Fortran package for large-scale linear and nonlinear programming 
Report NA 05-2 Department of Mathematics, University of California, San Diego 
http://www.ccom.ucsd.edu/~peg/papers/sndoc7.pdf
Gill P E, Murray W, Saunders M A and Wright M H (1987)  Maintaining LU factors of a general sparse matrix Linear Algebra and its Applics. 88/89 239–270 
Gill P E, Murray W, Saunders M A and Wright M H (1989)  A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474 
Gill P E, Murray W, Saunders M A and Wright M H (1991)  Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36 
Hall J A J and McKinnon K I M (1996)  The simplest examples where the simplex method cycles and conditions where EXPAND fails to prevent cycling Report MS 96–100 Department of Mathematics and Statistics, University of Edinburgh 
 
5
 Arguments
The first 
 entries of the arguments 
bl, 
bu, 
hs and 
x refer to the variables 
. The last 
 entries refer to the slacks 
.
- 1:
  
      – Nag_StartInput
- 
On entry: indicates how a starting basis (and certain other items) will be obtained. 
 
- Requests that an internal Crash procedure be used to choose an initial basis, unless a Basis file is provided via optional parameters ,  or .
- Is the same as  but is more meaningful when a Basis file is given.
- Means that a basis is already defined in hs and a start point is already defined in x (probably from an earlier call).
 
 Constraint:
  ,  or .
 
- 2:
  
      – function, supplied by the userExternal Function
- 
For QP problems, you must supply a version of  qphx to compute the matrix product Hx for a given vector x. If H has rows and columns of zeros, it is most efficient to order the variables x = (y, z)^T so that the nonlinear variables y appear first. For example, if only y enters the objective quadratically then
   Hx = ( H_1  0 ) ( y ) = ( H_1 y ),                    (2)
        (  0   0 ) ( z )   (   0   )
 In this case,  ncolh should be the dimension of y, and  qphx should compute  H_1 y. For FP and LP problems,  qphx will never be called by  nag_opt_sparse_convex_qp_solve (e04nqc) and hence  qphx may be  specified as  NULLFN.  
The specification of  qphx is: 
- 1:
  
      – IntegerInput
- 
On entry: this is the same argument  ncolh as supplied to  nag_opt_sparse_convex_qp_solve (e04nqc). 
 
- 2:
  
      – const doubleInput
- 
On entry: the first  ncolh elements of the vector  . 
 
- 3:
  
      – doubleOutput
- 
On exit: the product  . If  ncolh is less than the input argument  n,   is really the product   in  (2). 
 
- 4:
  
      – IntegerInput
- 
On entry: allows you to save computation time if certain data must be read or calculated only once. To preserve this data for a subsequent calculation place it in    comm. 
 
- nag_opt_sparse_convex_qp_solve (e04nqc) is calling qphx for the first time.
- There is nothing special about the current call of qphx.
- nag_opt_sparse_convex_qp_solve (e04nqc) is calling qphx for the last time. This argument setting allows you to perform some additional computation on the final solution. 
- The current  is optimal.
- The problem appears to be infeasible.
- The problem appears to be unbounded.
- The iterations limit was reached.
 
 
 
- 5:
  
      – Nag_Comm *
- Pointer to structure of type Nag_Comm; the following members are relevant to  qphx- . 
- user – double *
- iuser – Integer *
- p – Pointer 
- The type Pointer will be  void *- .  Before calling  nag_opt_sparse_convex_qp_solve (e04nqc)-  you may allocate memory and initialize these pointers with various quantities for use by  qphx-  when called from  nag_opt_sparse_convex_qp_solve (e04nqc)-  (see  Section 3.3.1.1-  in How to Use the NAG Library and its Documentation). 
 
 
 Note: qphx should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by  nag_opt_sparse_convex_qp_solve (e04nqc). If your code inadvertently  does return any NaNs or infinities,  nag_opt_sparse_convex_qp_solve (e04nqc) is likely to produce unexpected results. 
 
- 3:
  
      – IntegerInput
- 
On entry:  , the number of general linear constraints (or slacks). This is the number of rows in the linear constraint matrix  , including the free row (if any; see  iobj). Note that   must have at least one row. If your problem has no constraints, or only upper or lower bounds on the variables, then you must include a dummy row with sufficiently wide upper and lower bounds (see also  acol,  inda and  loca). 
 Constraint:
  .
 
- 4:
  
      – IntegerInput
- 
On entry: , the number of variables (excluding slacks). This is the number of columns in the linear constraint matrix . Constraint:
  .
 
- 5:
  
      – IntegerInput
- 
On entry: the number of nonzero elements in . Constraint:
  .
 
- 6:
  
      – IntegerInput
- 
On entry: the number of column (i.e., variable) and row names supplied in the array   names. 
 
- There are no names. Default names will be used in the printed output.
- All names must be supplied.
 
 Constraint:
   or .
 
- 7:
  
      – IntegerInput
- 
On entry: the number of elements in the constant objective vector  .
 If  , the first  lenc elements of   belong to variables corresponding to the constant objective term  . 
 Constraint:
  .
 
- 8:
  
      – IntegerInput
- 
On entry:  , the number of leading nonzero columns of the Hessian matrix  . For FP and LP problems,  ncolh must be set to zero.
 The first  ncolh elements of   belong to variables corresponding to the nonzero block of the QP Hessian. 
 Constraint:
  .
 
- 9:
  
      – IntegerInput
- 
On entry: if  , row  iobj of   is a free row containing the nonzero elements of the vector   appearing in the linear objective term  .
 If  , there is no free row, and the linear objective vector should be supplied in array  c. 
 Constraint:
  .
 
- 10:
  
    – doubleInput
- 
On entry: the constant , to be added to the objective for printing purposes. Typically  .
 
- 11:
  
    – const char *Input
- 
On entry: the name for the problem. It is used in the printed solution and in some functions that output Basis files.  Only the first eight characters of  prob are significant.
 
 
- 12:
  
    – const doubleInput
- 
On entry: the nonzero elements of , ordered by increasing column index. Note that all elements must be assigned a value in the calling program. 
- 13:
  
    – const IntegerInput
- 
On entry:   must contain the row index of the nonzero element stored in  , for  . Thus a pair of values   contains a matrix element and its corresponding row index.  If  , the first  lenc elements of  acol and  inda belong to variables corresponding to the constant objective term  . 
If the problem has a quadratic objective, the first  ncolh columns of  acol and  inda belong to variables corresponding to the nonzero block of the   Hessian. Function  qphx knows about these variables. 
Note that the row indices for a column must lie in the range   to  m, and may be supplied in any order. 
 Constraint:
  , for .
 
- 14:
  
    – const IntegerInput
- 
On entry:   must contain the value  , where   is the index in  acol and  inda of the start of the  th column, for  . Thus, the entries of column   are held in   , and their corresponding row indices are in   , for  , where   and  . To specify the  th column as empty, set  . Note that the first and last elements of  loca must be   and  . If your problem has no constraints, or just bounds on the variables, you may include a dummy ‘free’ row with a single (zero) element by setting  ,  ,  ,  , and  , for   . This row is made ‘free’ by setting its bounds to be   and  , where   is the value of the optional parameter  . 
 Constraints:
      
- ;
- , for ;
- ;
- , for .
 
 
- 15:
  
    – const doubleInput
- 
On entry:  , the lower bounds for all the variables and general constraints, in the following order. The first  n elements of  bl must contain the bounds on the variables  , and the next  m elements the bounds for the general linear constraints   (which, equivalently, are the bounds for the slacks,  ) and the free row (if any). To fix the  th variable, set  , say, where  . To specify a nonexistent lower bound (i.e.,  ), set  . Here,   is the value of the optional parameter  . To specify the  th constraint as an  equality, set  , say, where  . Note that the lower bound corresponding to the free row must be set to   and stored in  . 
 Constraint:
  
 if  ,  (See also the description for  bu.) 
 
- 16:
  
    – const doubleInput
- 
On entry:  , the upper bounds for all the variables and general constraints, in the following order. The first  n elements of  bu must contain the bounds on the variables  , and the next  m elements the bounds for the general linear constraints   (which, equivalently, are the bounds for the slacks,  ) and the free row (if any). To specify a nonexistent upper bound (i.e.,  ), set  . Note that the upper bound corresponding to the free row must be set to   and stored in  . 
 Constraints:
      
-  if , ;
- otherwise .
 
 
- 17:
  
    – const doubleInput
- 
On entry: contains the explicit objective vector   (if any). If   ,  c is not referenced and may be  NULL.
 
 
- 18:
  
    – const char *Input
- 
On entry: the optional column and row names, respectively.
 If  ,  names is not referenced and the printed output will use default names for the columns and rows. 
If  , the first  n elements must contain the names for the columns and the next  m elements must contain the names for the rows. Note that the name for the free row (if any) must be stored in   . 
Note: only the first eight characters of the strings in  names are significant. 
 
 
- 19:
  
    – const IntegerInput
- 
On entry: defines which variables are to be treated as being elastic in elastic mode. The allowed values of  helast are: 
 
     |  | Status in elastic mode |   |  | Variable  is non-elastic and cannot be infeasible |   |  | Variable  can violate its lower bound |   |  | Variable  can violate its upper bound |   |  | Variable  can violate either its lower or upper bound |  
 
 helast need not be assigned if optional parameter  . 
 Constraint:
   if , , for .
 
- 20:
  
    – IntegerInput/Output
- 
On entry: if   or  , and a Basis file of some sort is to be input (see the description of the optional parameters  ,   or  ), then  hs and  x need not be set at all.
 If   or   and there is no Basis file, the first  n elements of  hs and  x must specify the initial states and values, respectively, of the variables  . (The slacks   need not be initialized.) An internal Crash procedure is then used to select an initial basis matrix  . The initial basis matrix will be triangular (neglecting certain small elements in each column). It is chosen from various rows and columns of  . Possible values for   are as follows: 
 
     |  | State of  during Crash procedure |   | or | Eligible for the basis |   |  | Ignored |   |  | Eligible for the basis (given preference over  or ) |   | or | Ignored |  
 
 
If nothing special is known about the problem, or there is no wish to provide special information, you may set
 and , for . All variables will then be eligible for the initial basis. Less trivially, to say that the th variable will probably be equal to one of its bounds, set  and  or  and  as appropriate. Following the Crash procedure, variables for which  are made superbasic. Other variables not selected for the basis are then made nonbasic at the value  if , or at the value  or  closest to . If  ,  hs and  x must specify the initial states and values, respectively, of the variables and slacks  . If  nag_opt_sparse_convex_qp_solve (e04nqc) has been called previously with the same values of  n and  m,  hs already contains satisfactory information. 
 Constraints:
      
-  if  or , , for ;
-  if , , for .
 
 On exit: the final states of the variables and slacks  . The significance of each possible value of   is as follows: 
 
      |  | State of variable | Normal value of |   |  | Nonbasic |  |   |  | Nonbasic |  |   |  | Superbasic | Between  and |   |  | Basic | Between  and |  
 
 If , basic and superbasic variables may be outside their bounds by as much as the value of the optional parameter . Note that unless the optional parameter  is specified, the optional parameter  applies to the variables of the scaled problem. In this case, the variables of the original problem may be as much as  outside their bounds, but this is unlikely unless the problem is very badly scaled. Very occasionally some nonbasic variables may be outside their bounds by as much as the optional parameter , and there may be some nonbasic variables for which  lies strictly between its bounds. If  , some basic and superbasic variables may be outside their bounds by an arbitrary amount (bounded by  sinf if  ). 
 
- 21:
  
    – doubleInput/Output
- 
On entry: the initial values of the variables  , and, if  , the slacks  , i.e.,  . (See the description for argument  hs.) 
 On exit: the final values of the variables and slacks . 
- 22:
  
    – doubleOutput
- 
On exit: contains the dual variables  (a set of Lagrange multipliers (shadow prices) for the general constraints). 
- 23:
  
    – doubleOutput
- 
On exit: contains the reduced costs,  . The vector   is the gradient of the objective if  x is feasible, otherwise it is the gradient of the Phase 1 objective. In the former case,  , for  , hence  . 
 
- 24:
  
    – Integer *Input/Output
- 
On entry:  , the number of superbasics. For QP problems,  ns need not be specified if  , but must retain its value from a previous call when  . For FP and LP problems,  ns need not be initialized. 
 On exit: the final number of superbasics. This will be zero for FP and LP problems. 
- 25:
  
    – Integer *Output
- 
On exit: the number of infeasibilities. 
- 26:
  
    – double *Output
- 
On exit: the sum of the scaled infeasibilities. This will be zero if , and is most meaningful when . 
- 27:
  
    – double *Output
- 
On exit: the value of the objective function.
 If  ,  obj includes the quadratic objective term   (if any). 
If  ,  obj is just the linear objective term   (if any). 
For FP problems,  obj is set to zero. 
Note that  obj does not include contributions from the constant term  objadd or the objective row, if any. 
 
- 28:
  
    – Nag_E04State *Communication Structure
- 
state contains internal information required for functions in this suite. It must not be modified in any way. 
 
- 29:
  
    – Nag_Comm *
- 
The NAG communication argument (see  Section 3.3.1.1 in How to Use the NAG Library and its Documentation). 
- 30:
  
    – NagError *Input/Output
- 
The NAG error argument (see  Section 3.7 in How to Use the NAG Library and its Documentation). 
nag_opt_sparse_convex_qp_solve (e04nqc) returns with   NE_NOERROR if the reduced gradient ( rgNorm; see  Section 9.1) is negligible, the Lagrange multipliers ( Lagr Mult; see  Section 9.1) are optimal,   satisfies the constraints to the accuracy requested by the value of the optional parameter   and the reduced Hessian factor   (see  Section 11.2) is nonsingular. 
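To illustrate how these arguments fit together, the following is a minimal, hedged sketch of a complete call for a small LP (so qphx is NULLFN and ncolh = 0). The data, the start value Nag_Cold and the ‘infinite’ bound value 1.0e20 are illustrative assumptions only; they are not taken from the example in Section 10.
   /* Hedged sketch: minimize x1 + 2*x2 subject to 1 <= x1 + x2 <= 4, x >= 0. */
   #include <stdio.h>
   #include <nag.h>
   #include <nage04.h>

   int main(void)
   {
       const double bigbnd = 1.0e20;             /* assumed 'infinite' bound value         */
       Integer      m = 1, n = 2, ne = 2, nname = 1, lenc = 2, ncolh = 0, iobj = 0;
       double       acol[2] = { 1.0, 1.0 };      /* nonzeros of A, column by column        */
       Integer      inda[2] = { 1, 1 };          /* 1-based row index of each nonzero      */
       Integer      loca[3] = { 1, 2, 3 };       /* column pointers; loca[n] = ne + 1      */
       double       bl[3] = { 0.0, 0.0, 1.0 };   /* bounds on x (n of them) then on Ax (m) */
       double       bu[3] = { bigbnd, bigbnd, 4.0 };
       double       c[2] = { 1.0, 2.0 };         /* explicit linear objective (lenc = n)   */
       const char  *names[1] = { "" };           /* not referenced when nname = 1          */
       Integer      helast[3] = { 0, 0, 0 };     /* no elastic variables                   */
       Integer      hs[3] = { 0, 0, 0 };         /* cold-start states                      */
       double       x[3] = { 0.0, 0.0, 0.0 };    /* initial point (slacks need not be set) */
       double       pi[1], rc[3], sinf, obj;
       Integer      ns = 0, ninf;
       Nag_E04State state;
       Nag_Comm     comm;
       NagError     fail;

       INIT_FAIL(fail);
       /* The initialization function must be called first. */
       nag_opt_sparse_convex_qp_init(&state, &fail);
       if (fail.code == NE_NOERROR)
           /* Optional parameters could be set here via e04nrc/e04nsc/e04ntc/e04nuc. */
           nag_opt_sparse_convex_qp_solve(Nag_Cold, NULLFN, m, n, ne, nname, lenc,
                                          ncolh, iobj, 0.0, "LP demo", acol, inda,
                                          loca, bl, bu, c, names, helast, hs, x,
                                          pi, rc, &ns, &ninf, &sinf, &obj, &state,
                                          &comm, &fail);
       if (fail.code != NE_NOERROR) {
           printf("Error from e04nqc: %s\n", fail.message);
           return 1;
       }
       printf("Optimal objective value = %12.4f\n", obj);
       printf("x = (%8.4f, %8.4f)\n", x[0], x[1]);
       return 0;
   }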
 
 
6
 Error Indicators and Warnings
- NE_ALLOC_FAIL
- 
Dynamic memory allocation failed.
       
      See  Section 2.3.1.2  in How to Use the NAG Library and its Documentation for further information. 
Internal memory allocation failed when attempting to obtain workspace sizes  ,   and  . Please contact  NAG.
 
- NE_ALLOC_INSUFFICIENT
- 
Internal memory allocation was insufficient. Please contact  NAG.
 
- NE_ARRAY_INPUT
- 
On entry, , , .
 Constraint:  or .
 
On entry, row index  in  is outside the range  to .
 
- NE_BAD_PARAM
- 
Basis file dimensions do not match this problem.
 On entry, argument   had an illegal value. 
- NE_BASIS_FAILURE
- 
An error has occurred in the basis package, perhaps indicating incorrect setup of arrays  inda and  loca. Set the optional parameter   and examine the output carefully for further information.
 
- NE_BASIS_ILL_COND
- 
Numerical difficulties have been encountered and no further progress can be made.
 Numerical error in trying to satisfy the general constraints. The basis is very ill-conditioned. An  factorization of the basis has just been obtained and used to recompute the basic variables , given the present values of the superbasic and nonbasic variables. However, a row check has revealed that the resulting solution does not satisfy the current constraints  sufficiently well. This probably means that the current basis is very ill-conditioned. Request the  if there are any linear constraints and variables. For certain highly structured basis matrices (notably those with band structure), a systematic growth may occur in the factor . Consult the description of Umax, Umin and Growth in Section 13, and set the optional parameter  to  (or possibly even smaller, but not less than ). 
- NE_BASIS_SINGULAR
- 
The basis is singular after several attempts to factorize it (and add slacks where necessary).
 Either the problem is badly scaled or the value of the optional parameter  is too large. 
- NE_E04NPC_NOT_INIT
- 
The initialization function  nag_opt_sparse_convex_qp_init (e04npc) has not been called.
 
- NE_HESS_INDEF
- 
Error in  qphx: the QP Hessian is indefinite.
 
An indefinite matrix was detected during the computation of the reduced Hessian factor  (see Section 11.2). This may be caused by  being indefinite. Check also that qphx has been coded correctly and that all relevant elements of  have been assigned their correct values. If qphx is coded correctly and  is positive semidefinite, the failure may be caused by ill conditioning. Try reducing the values of the optional parameters  and . If there are very large values in , check the scaling of the variables and constraints. 
- NE_HESS_TOO_BIG
- 
The value of the optional parameter  is too small.
 The current set of basic and superbasic variables have been optimized as much as possible and a pricing operation is necessary to continue, but there are already  superbasics (and no room for any more). In general, raise the   by a reasonable amount, bearing in mind the storage needed for reduced Hessian (see Section 11.2). (The   will also increase to  unless specified otherwise, and the associated storage will be about  words.) In some cases you may have to set  to conserve storage, but beware that the rate of convergence will probably fall off severely. 
- NE_INT
- 
On entry, .
 Constraint: .
 On entry, .
 Constraint: .
 
- NE_INT_2
- 
On entry,  and .
 Constraint: .
 On entry,  and .
 Constraint: .
 
On entry,  and .
 Constraint: .
 
On entry,  ne is not equal to the number of nonzeros in  acol.  , nonzeros in  .
 
- NE_INT_3
- 
On entry, ,  and .
 Constraint: .
 On entry, ,  and .
 Constraint:  or .
 
On entry, ,  and .
 Constraint: .
 
On entry, ,  and .
 Constraint:  or .
 
- NE_INTERNAL_ERROR
- 
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact  NAG for assistance. 
	See  Section 2.7.6  in How to Use the NAG Library and its Documentation for further information. 
An unexpected error has occurred. Set the optional parameter  and examine the output carefully for further information.
 
- NE_NO_LICENCE
- 
Your licence key may have expired or may not have been installed correctly.
       
      See  Section 2.7.5 in How to Use the NAG Library and its Documentation for further information. 
- NE_NOT_REQUIRED_ACC
- 
The requested accuracy could not be achieved.
 
- NE_REAL_2
- 
On entry, bounds  bl and  bu for   are equal and infinite:   and  .
 
On entry, bounds  bl and  bu for   are equal and infinite.   and  .
 
On entry, bounds for  are inconsistent.  and .
 
- NE_UNBOUNDED
- 
The problem appears to be unbounded. The constraint violation limit has been reached.
 
The problem appears to be unbounded. The objective function is unbounded.
 The problem is unbounded (or badly scaled). For a minimization problem, the objective function is not bounded below in the feasible region. For linear problems, unboundedness is detected by the simplex method when a nonbasic variable can be increased or decreased by an arbitrary amount without causing a basic variable to violate a bound. Consider adding an upper or lower bound to the variable. Also, examine the constraints that have nonzeros in the associated column, to see if they have been formulated as intended. Very rarely, the scaling of the problem could be so poor that numerical error will give an erroneous indication of unboundedness. Consider using the optional parameter . 
- NW_NOT_FEASIBLE
- 
The linear constraints appear to be infeasible.
 
The problem appears to be infeasible. Infeasibilities have been minimized.
 
The problem appears to be infeasible. Nonlinear infeasibilites have been minimized.
 
The problem appears to be infeasible. The linear equality constraints could not be satisfied.
 The problem is infeasible. The general constraints cannot all be satisfied simultaneously to within the value of the optional parameter . Feasibility is measured with respect to the upper and lower bounds on the variables and slacks. The message tells us that among all the points satisfying the general constraints , there is apparently no point that satisfies the bounds on  and . Violations as small as the  are ignored, but at least one component of  or  violates a bound by more than the tolerance. Note:  although the objective function is the sum of infeasibilities (when ), this sum will not necessarily have been minimized when . If , nag_opt_sparse_convex_qp_solve (e04nqc) will optimize the QP objective and the sum of infeasibilities, suitably weighted using the optional parameter . The function will tend to determine a ‘good’ infeasible point if the elastic weight is sufficiently large. 
- NW_SOLN_NOT_UNIQUE
- 
Weak solution found – the solution is not unique.
 
- NW_TOO_MANY_ITER
- 
Iteration limit reached.
 
Major iteration limit reached.
 Too many iterations. The value of the optional parameter  is too small. The Iterations limit was exceeded before the required solution could be found. Check the iteration log to be sure that progress was being made. If so, restart the run using a Basis file that was saved at the end of the run. 
 
7
 Accuracy
nag_opt_sparse_convex_qp_solve (e04nqc) implements a numerically stable active-set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.
 
8
 Parallelism and Performance
nag_opt_sparse_convex_qp_solve (e04nqc) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the 
x06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the 
Users' Note for your implementation for any additional implementation-specific information.
 
9
 Further Comments
This section contains a description of the printed output.
 
9.1
 Description of the Printed Output
If Print Level > 0, one line of information is output to the Print File every kth iteration, where k is the specified Print Frequency. A heading is printed before the first such line following a basis factorization. The heading contains the items described below. In this description, a pricing operation is defined to be the process by which one or more nonbasic variables are selected to become superbasic (in addition to those already in the superbasic set). The variable selected will be denoted by jq. If the problem is purely linear, variable jq will usually become basic immediately (unless it should happen to reach its opposite bound and return to the nonbasic set).
If optional parameter 
Partial Price is in effect, variable 
jq is selected from 
A_pp or 
I_pp, the 
ppth segments of the constraint matrix 
( A  −I ).
| Label | Description | 
| Itn | is the iteration count. | 
| pp | is the partial-price indicator.  The variable selected by the last pricing operation came from the ppth partition of  and .  Note that pp is reset to zero whenever the basis is refactorized. | 
| dj | is the value of the reduced gradient (or reduced cost) for the variable selected by the pricing operation at the start of the current iteration. Algebraically, dj is , for , where  is the gradient of the current objective function,  is the vector of dual variables, and  is the th column of the constraint matrix . Note that dj is the norm of the reduced-gradient vector at the start of the iteration, just after the pricing operation. | 
| +SBS | is the variable jq selected by the pricing operation to be added to the superbasic set. | 
| -SBS | is the variable chosen to leave the superbasic set. It has become basic if the entry under -B is nonzero, otherwise it becomes nonbasic. | 
| -BS | is the variable removed from the basis to become nonbasic. | 
| Step | is the value of the step length  taken along the current search direction .  The variables  have just been changed to .  If a variable is made superbasic during the current iteration (i.e., +SBS is positive), Step will be the step to the nearest bound. During the optimality phase, the step can be greater than unity only if the reduced Hessian is not positive definite. | 
| Pivot | is the th element of a vector  satisfying  whenever  (the th column of the constraint matrix  replaces the th column of the basis matrix .  Wherever possible, Step is chosen so as to avoid extremely small values of Pivot (since they may cause the basis to be nearly singular).  In extreme cases, it may be necessary to increase the value of the optional parameter  to exclude very small elements of  from consideration during the computation of Step. | 
| nInf | is the number of violated constraints (infeasibilities) before the present iteration.  This number will not increase unless iterations are in elastic mode. | 
| sInf | is the sum of infeasibilities before the present iteration.  It will usually decrease at each nonzero step, but if nInf decreases by  or more, sInf may occasionally increase.  However, in elastic mode it will decrease monotonically. | 
| Objective | is the value of the current objective function after the present iteration.  Note, if  is , the heading is Composite Obj. | 
| L+U | L is the number of nonzeros in the basis factor .  Immediately after a basis factorization , L contains lenL (see Section 13).  Further nonzeros are added to L when various columns of  are later replaced.  (Thus, L increases monotonically.) U is the number of nonzeros in the basis factor .  Immediately after a basis factorization , U contains lenU (see Section 13).  As columns of  are replaced, the matrix  is maintained explicitly (in sparse form).  The value of U may fluctuate up or down; in general, it will tend to increase. | 
| ncp | is the number of compressions required to recover workspace in the data structure for .  This includes the number of compressions needed during the previous basis factorization.  Normally, ncp should increase very slowly. | 
The following will be output if the problem is QP or if the superbasic set is non-empty.
| Label | Description | 
| rgNorm | is the largest reduced-gradient among the superbasic variables after the current iteration. During the optimality phase, this will be approximately zero after a unit step. | 
| nS | is the current number of superbasic variables. | 
| condHz | is a lower bound on the condition number of the reduced Hessian (see Section 11.2).  The larger this number, the more difficult the problem. Attention should be given to the scaling of the variables and the constraints to guard against high values of condHz. | 
 
10
 Example
This example minimizes the quadratic function 
, where
subject to the bounds
and to the linear constraints
The initial point, which is infeasible, is
The optimal solution (to five figures) is
One bound constraint and four linear constraints are active at the solution.  Note that the Hessian matrix 
 is positive semidefinite.
 
10.1
 Program Text
Program Text (e04nqce.c)
 
10.2
 Program Data
Program Data (e04nqce.d)
 
10.3
 Program Results
Program Results (e04nqce.r)
Note: the remainder of this document is intended for more advanced users.  Section 11 contains a detailed description of the algorithm which may be needed in order to understand Sections 12 and 13.  Section 12 describes the optional parameters which may be set by calls to nag_opt_sparse_convex_qp_option_set_file (e04nrc), nag_opt_sparse_convex_qp_option_set_string (e04nsc), nag_opt_sparse_convex_qp_option_set_integer (e04ntc) and/or nag_opt_sparse_convex_qp_option_set_double (e04nuc).  Section 13 describes the quantities which can be requested to monitor the course of the computation.
 
 
11
 Algorithmic Details
This section contains a detailed description of the method used by nag_opt_sparse_convex_qp_solve (e04nqc).
 
11.1
 Overview
nag_opt_sparse_convex_qp_solve (e04nqc) is based on an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below).  The method is similar to that of 
Gill and Murray (1978), and is described in detail by 
Gill et al. (1991).  Here we briefly summarise the main features of the method.  Where possible, explicit reference is made to the names of variables that are arguments of the function or appear in the printed output.
 The method used has two distinct phases: finding an initial feasible point by minimizing the sum of infeasibilities (the 
feasibility phase), and minimizing the quadratic objective function within the feasible region (the 
optimality phase).  The computations in both phases are performed by the same functions.  The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities (the printed quantity 
sInf; see 
Section 9.1) to the quadratic objective function (the printed quantity 
Objective; see 
Section 9.1).
In general, an iterative process is required to solve a quadratic program.  Given an iterate 
 in both the original variables 
 and the slack variables 
, a new iterate 
 is defined by
where the 
step length
 is a non-negative scalar (the printed quantity 
Step; see 
Section 13), and 
 is called the 
search direction.  (For simplicity, we shall consider a typical iteration and avoid reference to the index of the iteration.)  Once an iterate is feasible (i.e., satisfies the constraints), all subsequent iterates remain feasible.
 
11.2
 Definition of the Working Set and Search Direction
At each iterate , a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the value of the optional parameter ).  The working set is the current prediction of the constraints that hold with equality at a solution of the LP or QP problem.  Let  denote the number of constraints in the working set (including bounds), and let  denote the associated  by  working set matrix consisting of the  gradients of the working set constraints.
The search direction is defined so that constraints in the working set remain 
unaltered for any value of the step length.  It follows that 
 must satisfy the identity
This characterisation allows 
 to be computed using any 
 by 
 full-rank matrix 
 that spans the null space of 
.  (Thus, 
 and 
.)  The null space matrix 
 is defined from a sparse 
 factorization of part of 
 (see 
(7) and 
(8)).  The direction 
 will satisfy 
(4) if
where 
 is any 
-vector.
The working set contains the constraints  and a subset of the upper and lower bounds on the variables .  Since the gradient of a bound constraint  or  is a vector of all zeros except for  in position , it follows that the working set matrix contains the rows of  and the unit rows associated with the upper and lower bounds in the working set.
The working set matrix 
 can be represented in terms of a certain column partition of the matrix 
 by (conceptually) partitioning the constraints 
 so that
where 
 is a square nonsingular basis and 
, 
 and 
 are the basic, superbasic and nonbasic variables respectively.  The nonbasic variables are equal to their upper or lower bounds at 
, and the superbasic variables are independent variables that are chosen to improve the value of the current objective function.  The number of superbasic variables is 
 (the printed quantity 
nS; see 
Section 9.1).  Given values of 
 and 
, the basic variables 
 are adjusted so that 
 satisfies 
(6).
If 
 is a permutation matrix such that 
, then 
 satisfies
where 
 is the identity matrix with the same number of columns as 
.
The null space matrix 
 is defined from a sparse 
 factorization of part of 
.  In particular, 
 is maintained in ‘reduced gradient’ form, using the LUSOL package (see 
Gill et al. (1991)) to maintain sparse 
 factors of the basis matrix 
 as the 
 partition changes.  Given the permutation 
, the null space basis is given by
This matrix is used only as an operator, i.e., it is never computed explicitly.  Products of the form 
 and 
 are obtained by solving with 
 or 
.  This choice of 
 implies that 
, the number of ‘degrees of freedom’ at 
, is the same as 
, the number of superbasic variables.
Let 
 and 
 denote the 
reduced gradient and 
reduced Hessian of the objective function:
where 
 is the objective gradient at 
.  Roughly speaking, 
 and 
 describe the first and second derivatives of an 
-dimensional 
unconstrained problem for the calculation of 
.  (The condition estimator of 
 is the quantity 
condHz in the monitoring file output; see 
Section 9.1.)
At each iteration, an upper triangular factor  is available such that .  Normally,  is computed from  at the start of the optimality phase and then updated as the QP working set changes.  For efficiency, the dimension of  should not be excessive (say, ).  This is guaranteed if the number of nonlinear variables is ‘moderate’.
If the QP problem contains linear variables, 
 is positive semidefinite and 
 may be singular with at least one zero diagonal element.  However, an inertia-controlling strategy is used to ensure that only the last diagonal element of 
 can be zero.  (See 
Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.)
If the initial  is singular, enough variables are fixed at their current value to give a nonsingular .  This is equivalent to including temporary bound constraints in the working set.  Thereafter,  can become singular only when a constraint is deleted from the working set (in which case no further constraints are deleted until  becomes nonsingular).
 
11.3
 Main Iteration
If the reduced gradient is zero, 
 is a constrained stationary point on the working set.  During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero elsewhere in the presence of constraint dependencies).  During the optimality phase, a zero reduced gradient implies that 
 minimizes the quadratic objective function when the constraints in the working set are treated as equalities.  At a constrained stationary point, Lagrange multipliers 
 are defined from the equations
A Lagrange multiplier, 
, corresponding to an inequality constraint in the working set is said to be 
optimal if 
 when the associated constraint is at its 
upper bound, or if 
 when the associated constraint is at its 
lower bound, where 
 depends on the value of the optional parameter 
.  If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by continuing the minimization with the corresponding constraint excluded from the working set.  (This step is sometimes referred to as ‘deleting’ a constraint from the working set.)  If optimal multipliers occur during the feasibility phase but the sum of infeasibilities is nonzero, there is no feasible point and the function terminates immediately with 
 NE_NOT_REQUIRED_ACC.
The special form 
(7) of the working set allows the multiplier vector 
, the solution of 
(10), to be written in terms of the vector
where 
 satisfies the equations 
, and 
 denotes the basic elements of 
.  The elements of 
 are the Lagrange multipliers 
 associated with the equality constraints 
.  The vector 
 of nonbasic elements of 
 consists of the Lagrange multipliers 
 associated with the upper and lower bound constraints in the working set.  The vector 
 of superbasic elements of 
 is the reduced gradient 
 in 
(9).  The vector 
 of basic elements of 
 is zero, by construction.  (The Euclidean norm of 
 and the final values of 
, 
 and 
 are the quantities 
rgNorm, 
Reduced Gradnt, 
Obj Gradient and 
Dual Activity in the monitoring file output; see 
Section 13.)
If the reduced gradient is not zero, Lagrange multipliers need not be computed and the search direction is given by 
 (see 
(8) and 
(12)).  The step length is chosen to maintain feasibility with respect to the satisfied constraints.
There are two possible choices for 
, depending on whether or not 
 is singular.  If 
 is nonsingular, 
 is nonsingular and 
 in 
(5) is computed from the equations
where 
 is the reduced gradient at 
.  In this case, 
 is the minimizer of the objective function subject to the working set constraints being treated as equalities.  If 
 is feasible, 
 is defined to be unity.  In this case, the reduced gradient at 
 will be zero, and Lagrange multipliers are computed at the next iteration.  Otherwise, 
 is set to 
, the step to the ‘nearest’ constraint along 
.  This constraint is then added to the working set at the next iteration.
If 
 is singular, then 
 must also be singular, and an inertia-controlling strategy is used to ensure that only the last diagonal element of 
 is zero.  (See 
Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.)  In this case, 
 satisfies
which allows the objective function to be reduced by any step of the form 
, where 
.  The vector 
 is a direction of unbounded descent for the QP problem in the sense that the QP objective is linear and decreases without bound along 
.  If no finite step of the form 
 (where 
) reaches a constraint not in the working set, the QP problem is unbounded and the function terminates immediately with 
 NE_UNBOUNDED.  Otherwise, 
 is defined as the maximum feasible step along 
 and a constraint active at 
 is added to the working set for the next iteration.
nag_opt_sparse_convex_qp_solve (e04nqc) makes explicit allowance for infeasible constraints.  Infeasible linear constraints are detected first by solving a problem of the form
where 
.  This is equivalent to minimizing the sum of the general linear constraint violations subject to the simple bounds.  (In the linear programming literature, the approach is often called 
elastic programming.)
 
 
11.4
 Miscellaneous
If the basis matrix is not chosen carefully, the condition of the null space matrix 
 in 
(8) could be arbitrarily high.  To guard against this, the function implements a ‘basis repair’ feature in which the LUSOL package (see 
Gill et al. (1991)) is used to compute the rectangular factorization
returning just the permutation 
 that makes 
 unit lower triangular.  The pivot tolerance is set to require 
, and the permutation is used to define 
 in 
(7).  It can be shown that 
 is likely to be little more than unity.  Hence, 
 should be well-conditioned 
regardless of the condition of
.  This feature is applied at the beginning of the optimality phase if a potential 
 ordering is known.
The EXPAND procedure (see 
Gill et al. (1989)) is used to reduce the possibility of cycling at a point where the active constraints are nearly linearly dependent.  Although there is no absolute guarantee that cycling will not occur, the probability of cycling is extremely small (see 
Hall and McKinnon (1996)).  The main feature of EXPAND is that the feasibility tolerance is increased at the start of every iteration.  This allows a positive step to be taken at every iteration, perhaps at the expense of violating the bounds on 
 by a small amount.
Suppose that the value of the optional parameter  is .  Over a period of  iterations (where  is the value of the optional parameter ), the feasibility tolerance actually used by the function (i.e., the working feasibility tolerance) increases from  to  (in steps of ).
At certain stages the following ‘resetting procedure’ is used to remove small constraint infeasibilities.  First, all nonbasic variables are moved exactly onto their bounds.  A count is kept of the number of nontrivial adjustments made.  If the count is nonzero, the basic variables are recomputed.  Finally, the working feasibility tolerance is reinitialized to .
If a problem requires more than  iterations, the resetting procedure is invoked and a new cycle of iterations is started.  (The decision to resume the feasibility phase or optimality phase is based on comparing any constraint infeasibilities with .)
The resetting procedure is also invoked when the function reaches an apparently optimal, infeasible or unbounded solution, unless this situation has already occurred twice.  If any nontrivial adjustments are made, iterations are continued.
The EXPAND procedure not only allows a positive step to be taken at every iteration, but also provides a potential choice of constraints to be added to the working set.  All constraints at a distance  (where ) along  from the current point are then viewed as acceptable candidates for inclusion in the working set.  The constraint whose normal makes the largest angle with the search direction is added to the working set.  This strategy helps keep the basis matrix  well-conditioned.
 
12
 Optional Parameters
Several optional parameters in nag_opt_sparse_convex_qp_solve (e04nqc) define choices in the problem specification or the algorithm logic.  In order to reduce the number of formal arguments of nag_opt_sparse_convex_qp_solve (e04nqc) these optional parameters have associated default values that are appropriate for most problems.  Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in 
Section 12.1.
nag_opt_sparse_convex_qp_option_set_file (e04nrc) reads options from an external options file, with 
Begin and 
End as the first and last lines respectively and each intermediate line defining a single optional parameter.  For example,
Begin
   Print Level = 5
End
The call
e04nrc (ioptns, &state, &fail);
can then be used to read the file on descriptor ioptns; on successful exit, fail.code will contain NE_NOERROR.  nag_opt_sparse_convex_qp_option_set_file (e04nrc) should be consulted for a full description of this method of supplying optional parameters.
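The following minimal C sketch illustrates this mechanism.  It assumes the usual calling sequence of nag_open_file (x04acc), with mode 0 meaning ‘open for reading’, and that nag_close_file (x04adc) releases the descriptor; these assumptions should be checked against the x04acc and x04adc documents before use.
/* Sketch only: initialize the solver state, read optional parameters
   from a file, then release the descriptor.  Error handling is abbreviated. */
#include <nag.h>
#include <nagx04.h>
#include <nage04.h>

int main(void)
{
  Nag_E04State state;
  NagError fail;
  Integer ioptns;

  INIT_FAIL(fail);
  nag_opt_sparse_convex_qp_init(&state, &fail);                     /* e04npc */

  /* Assumed convention: mode 0 opens the named file for reading. */
  nag_open_file("e04nqc.opt", 0, &ioptns, &fail);                   /* x04acc */
  nag_opt_sparse_convex_qp_option_set_file(ioptns, &state, &fail);  /* e04nrc */
  nag_close_file(ioptns, &fail);                                    /* x04adc */

  /* ... set up the problem data and call e04nqc here ... */
  return fail.code == NE_NOERROR ? 0 : 1;
}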
All optional parameters not specified by you are set to their default values.  Optional parameters specified by you are unaltered by nag_opt_sparse_convex_qp_solve (e04nqc) (unless they define invalid values) and so remain in effect for subsequent calls unless altered by you.
 
12.1
 Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
- the keywords;
- a parameter value, where the letters ,  and  denote options that take character, integer and real values respectively;
- the default value, which is used whenever the condition  is satisfied; the symbol  is a generic notation for machine precision (see nag_machine_precision (x02ajc));
- the variable  holds the value of .
Keywords and character values are case and white space insensitive.
Optional parameters used to specify files (e.g., optional parameters  and ) have type Nag_FileID (see Section 3.3.1.1 in How to Use the NAG Library and its Documentation).  This ID value must either be set to  (the default value), in which case there will be no output, or must be as returned by a call of nag_open_file (x04acc).
| Check Frequency |  | Default | 
Every th iteration after the most recent basis factorization, a numerical test is made to see if the current solution  satisfies the linear constraints .  If the largest element of the residual vector  is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the constraints more accurately.  If , the value  is used and effectively no checks are made.
 is useful for debugging purposes, but otherwise this option should not be needed.
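As an illustration of the quantity being monitored, the sketch below computes the largest element of the row residual using a compressed-column representation of the constraint matrix.  The storage convention assumed here (1-based column pointers in loca and 1-based row indices in inda) is an assumption and should be checked against the array descriptions in Section 5.
/* Sketch (not library code): infinity norm of the residual A*x - s,
   with A held column by column in acol/inda/loca. */
#include <math.h>
#include <stdlib.h>
#include <nag.h>

double max_row_residual(Integer m, Integer n, const double acol[],
                        const Integer inda[], const Integer loca[],
                        const double x[], const double s[])
{
  double *r = (double *) calloc((size_t) m, sizeof(double));
  double rmax = 0.0;
  Integer i, j, k;

  /* Accumulate r = A*x one column at a time (assumed 1-based indexing). */
  for (j = 0; j < n; ++j)
    for (k = loca[j] - 1; k <= loca[j+1] - 2; ++k)
      r[inda[k] - 1] += acol[k] * x[j];

  /* Largest element of A*x - s. */
  for (i = 0; i < m; ++i)
    if (fabs(r[i] - s[i]) > rmax) rmax = fabs(r[i] - s[i]);

  free(r);
  return rmax;
}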
| Crash Tolerance |  | Default | 
Note that these options do not apply when  (see Section 5).
If , an internal Crash procedure is used to select an initial basis from various rows and columns of the constraint matrix .  The value of  determines which rows and columns of  are initially eligible for the basis, and how many times the Crash procedure is called.  Columns of  are used to pad the basis where necessary.
|  | Meaning | 
|  | The initial basis contains only slack variables: . | 
|  | The Crash procedure is called once, looking for a triangular basis in all rows and columns of the matrix . | 
|  | The Crash procedure is called once, looking for a triangular basis in rows. | 
|  | The Crash procedure is called twice, treating linear equalities and linear inequalities separately. | 
 
If , certain slacks on inequality rows are selected for the basis first.  (If , numerical values are used to exclude slacks that are close to a bound.)  The Crash procedure then makes several passes through the columns of , searching for a basis matrix that is essentially triangular.  A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned.  (The pivot elements ultimately form the diagonals of the triangular basis.)  For remaining unassigned rows, slack variables are inserted to complete the basis.
The  allows the Crash procedure to ignore certain ‘small’ nonzero elements in each column of . If  is the largest element in column , other nonzeros  in the column are ignored if . (To be meaningful,  should be in the range .)
When , the basis obtained by the Crash procedure may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of  and fewer (arbitrary) slacks. A feasible solution may be reached sooner on some problems.
For example, suppose the first  columns of  form the matrix shown under ; i.e., a tridiagonal matrix with entries , , . To help the Crash procedure choose all  columns for the initial basis, we would specify a  of  for some value of .
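The following small C sketch (a hypothetical helper, not part of the Library) shows the kind of relative test described above: within a column, a nonzero is treated as negligible if it is smaller than the crash tolerance times the largest entry in that column.
/* Sketch: count the entries of one column that would be regarded as
   significant under a relative threshold tol. */
#include <math.h>
#include <nag.h>

Integer significant_entries(const double col[], Integer nnz, double tol)
{
  double amax = 0.0;
  Integer k, count = 0;

  for (k = 0; k < nnz; ++k)
    if (fabs(col[k]) > amax) amax = fabs(col[k]);

  for (k = 0; k < nnz; ++k)
    if (fabs(col[k]) > tol * amax) ++count;   /* assumed form of the test */

  return count;
}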
This special keyword may be used to reset all optional parameters to their default values.
(See Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
Optional parameters  and  are similar to optional parameters  and , but they record solution information in a manner that is more direct and more easily modified.  A full description of information recorded in optional parameters  and  is given in Gill et al. (2005a).
If , the last solution obtained will be output to the file .
If , the  containing basis information will be read.
The file will usually have been output previously as a .  The file will not be accessed if optional parameters  or  are specified.
This option determines if (and when) elastic mode is to be started. Three elastic modes are available as follows:
|  | Meaning | 
|  | Elastic mode is never invoked. nag_opt_sparse_convex_qp_solve (e04nqc) will terminate as soon as infeasibility is detected. There may be other points with significantly smaller sums of infeasibilities. | 
|  | Elastic mode is invoked only if the constraints are found to be infeasible (the default). If the constraints are infeasible, continue in elastic mode with the composite objective determined by the values of the optional parameters  and . | 
|  | The iterations start and remain in elastic mode. This option allows you to minimize the composite objective function directly without first performing Phase 1 iterations. The success of this option will depend critically on your choice of . If  is sufficiently large and the constraints are feasible, the minimizer of the composite objective and the solution of the original problem are identical. However, if the  is not sufficiently large, the minimizer of the composite function may be infeasible, even if a feasible point exists. | 
 
| Elastic Objective |  | Default | 
This determines the form of the composite objective  in Phase 2 ().  Three types of composite objectives are available.
|  | Meaning | 
|  | Include only the true objective  in the composite objective. This option sets  in the composite objective and allows nag_opt_sparse_convex_qp_solve (e04nqc) to ignore the elastic bounds and find a solution that minimizes  subject to the non-elastic constraints. This option is useful if there are some ‘soft’ constraints that you would like to ignore if the constraints are infeasible. | 
|  | Use a composite objective defined with  determined by the value of . This value is intended to be used in conjunction with . | 
|  | Include only the elastic variables in the composite objective. The elastics are weighted by . This choice minimizes the violations of the elastic variables at the expense of possibly increasing the true objective. This option can be used to find a point that minimizes the sum of the violations of a subset of constraints specified by the input array helast. | 
 
| Elastic Weight |  | Default | 
This defines the value of  in the composite objective in Phase 2 ().
At each iteration of elastic mode, the composite objective is defined to be , where  for ,  for , and  is the quadratic objective.
Note that the effect of  is not disabled once a feasible point is obtained.
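For orientation only (the exact form used by the function is given in Section 11, and the weighting shown here is an assumption), a composite objective of the kind described can be written schematically as
   f(x) \;+\; \gamma \sum_{j \in E} \left( v_j + w_j \right),
where f is the quadratic objective, \gamma is the Elastic Weight, E is the set of elastic variables, and v_j \ge 0 and w_j \ge 0 measure the amounts by which the jth elastic variable violates its lower and upper bounds respectively.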
| Expand Frequency |  | Default | 
This option is part of an anti-cycling procedure (see 
Section 11.4) designed to allow progress even on highly degenerate problems.
The strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount.  Suppose that the value of the optional parameter  is .  Over a period of  iterations, the feasibility tolerance actually used by nag_opt_sparse_convex_qp_solve (e04nqc) (i.e., the working feasibility tolerance) increases from  to  (in steps of ).
Increasing the value of  helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during the resetting procedure).  However, it also diminishes the freedom to choose a large pivot element (see the description of the optional parameter ).
If , the value  is used and effectively no anti-cycling procedure is invoked.
| Factorization Frequency |  | Default  or | 
If , at most  basis changes will occur between factorizations of the basis matrix.
For LP problems, the basis factors are usually updated at every iteration.  Higher values of  may be more efficient on problems that are extremely sparse and well scaled.
For QP problems, fewer basis updates will occur as the solution is approached.  The number of iterations between basis factorizations will therefore increase.  During these iterations a test is made regularly according to the value of optional parameter  to ensure that the linear constraints  are satisfied.  Occasionally, the basis will be refactorized before the limit of  updates is reached.  If , the default value is used.
| Feasibility Tolerance |  | Default | 
A feasible problem is one in which all variables satisfy their upper and lower bounds to within the absolute tolerance . (This includes slack variables. Hence, the general constraints are also satisfied to within .)
nag_opt_sparse_convex_qp_solve (e04nqc) attempts to find a feasible solution before optimizing the objective function.  If the sum of infeasibilities cannot be reduced to zero, the problem is assumed to be infeasible.  Let sInf be the corresponding sum of infeasibilities.  If sInf is quite small, it may be appropriate to raise  by a factor of  or .  Otherwise, some error in the data should be suspected.
Note that if sInf is not small and you have not asked nag_opt_sparse_convex_qp_solve (e04nqc) to minimize the violations of the elastic variables (i.e., you have not specified ), there may be other points that have a significantly smaller sum of infeasibilities. nag_opt_sparse_convex_qp_solve (e04nqc) will not attempt to find the solution that minimizes the sum unless .
If the constraints and variables have been scaled (see the description of the optional parameter ), then feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).
 
| Infinite Bound Size |  | Default | 
If ,  defines the ‘infinite’ bound  in the definition of the problem constraints.  Any upper bound greater than or equal to  will be regarded as  (and similarly any lower bound less than or equal to  will be regarded as ).  If , the default value is used.
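As a brief illustration, the sketch below marks variables and constraints as unbounded or fixed purely by choice of bl and bu.  It assumes the usual convention that bl and bu hold the n variable bounds followed by the m general constraint bounds (see Section 5), and uses 1.0e20 as an assumed value that is at least as large as the Infinite Bound Size in effect.
/* Sketch: using large values as infinite bounds. */
#include <nag.h>

void set_example_bounds(Integer n, Integer m, double bl[], double bu[])
{
  const double bigbnd = 1.0e20;   /* assumed >= the Infinite Bound Size */
  Integer j;

  for (j = 0; j < n + m; ++j) { bl[j] = -bigbnd; bu[j] = bigbnd; }  /* all free */

  bl[0] = 0.0;                    /* first variable: nonnegative, no upper bound */
  bl[n] = 2.0;  bu[n] = 2.0;      /* first general constraint: an equality       */
}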
| Iterations Limit |  | Default | 
The value of  specifies the maximum number of iterations allowed before termination.  Setting  and  means that the workspace needed to start solving the problem will be computed and printed, and that feasibility and optimality will be checked; no iterations will be performed.  If , the default value is used.
| LU Density Tolerance |  | Default | 
| LU Singularity Tolerance |  | Default | 
The density tolerance  is used during  factorization of the basis matrix. Columns of  and rows of  are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds , the Markowitz strategy for choosing pivots is terminated. The remaining matrix is factored by a dense  procedure. Raising the density tolerance towards  may give slightly sparser  factors, with a slight increase in factorization time.
If ,  defines the singularity tolerance used to guard against ill-conditioned basis matrices. After  is refactorized, the diagonal elements of  are tested as follows. If  or , the th column of the basis is replaced by the corresponding slack variable. If , the default value is used.
| LU Factor Tolerance |  | Default | 
| LU Update Tolerance |  | Default | 
The values of  and  affect the stability and sparsity of the basis factorization , during refactorization and updates respectively.  The lower triangular matrix  is a product of matrices of the form , where the multipliers  satisfy .  The default values of  and  usually strike a good compromise between stability and sparsity.  They must satisfy  and .
For large and relatively dense problems,  (say) may give a useful improvement in stability without impairing sparsity to a serious degree.
For certain very regular structures (e.g., band matrices) it may be necessary to reduce  in order to achieve stability.  For example, if the columns of  include a sub-matrix of the form , one should set both  and  to values in the range .
| LU Partial Pivoting |  | Default | 
The  factorization implements a Markowitz-type search for pivots that locally minimize the fill-in subject to a threshold pivoting stability criterion. The default option is to use threshold partial pivoting. The options  and  are more expensive but more stable and better at revealing rank, as long as the  is not too large (say ).
This option specifies the required direction of the optimization.  It applies to both linear and nonlinear terms (if any) in the objective function.  Note that if two problems are the same except that one minimizes  and the other maximizes , their solutions will be the same but the signs of the dual variables  and the reduced gradients  (see Section 11.3) will be reversed.
The option  means ‘ignore the objective function, while finding a feasible point for the linear constraints’.  It can be used to check that the constraints are feasible without altering the call to nag_opt_sparse_convex_qp_solve (e04nqc).
| New Basis File |  | Default | 
| Backup Basis File |  | Default | 
| Save Frequency |  | Default | 
(See 
Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
Optional parameters  and  are sometimes referred to as basis maps.  They contain the most compact representation of the state of each variable.  They are intended for restarting the solution of a problem at a point that was reached by an earlier run.  For nontrivial problems, it is advisable to save basis maps at the end of a run, in order to restart the run if necessary.
If , a basis map will be saved on file  every th iteration, where  is the .
The first record of the file will contain the word PROCEEDING if the run is still in progress.  A basis map will also be saved at the end of a run, with some other word indicating the final solution status.
If ,  is intended as a safeguard against losing the results of a long run.  Suppose that a  is being saved every  () iterations, and that nag_opt_sparse_convex_qp_solve (e04nqc) is about to save such a basis at iteration .  It is conceivable that the run may be interrupted during the next few milliseconds (in the middle of the save).  In this case the Basis file will be corrupted and the run will have been essentially wasted.
To eliminate this risk, both a  and a  may be specified.  The following would be suitable for the above example:
Backup Basis FileID1
New Basis FileID2
where FileID1 and FileID2 are returned by nag_open_file (x04acc).
The current basis will then be saved every  iterations, first on FileID2 and then immediately on FileID1.  If the run is interrupted at iteration  during the save on FileID2, there will still be a usable basis on FileID1 (corresponding to iteration ).
Note that a new basis will be saved in  at the end of a run if it terminates normally, but it will not be saved in .  In the above example, if an optimum solution is found at iteration  (or if the iteration limit is ), the final basis on FileID2 will correspond to iteration , but the last basis saved on FileID1 will be the one for iteration .
A full description of information recorded in  and  is given in Gill et al. (2005a).
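A minimal sketch of how the two file identifiers might be obtained is given below.  The mode value assumed here (1 for writing) and the use of nag_close_file (x04adc) to release the identifiers are assumptions to be verified against those documents; the identifiers would then be supplied as the values of the Backup Basis File and New Basis File optional parameters via the option-setting functions.
/* Sketch: obtain Nag_FileIDs for the example above. */
Integer fileid1, fileid2;
NagError fail;

INIT_FAIL(fail);
nag_open_file("backup.bas", 1, &fileid1, &fail);     /* x04acc, assumed mode 1 = write */
nag_open_file("newbasis.bas", 1, &fileid2, &fail);

/* ... select the options, call e04nqc, then release the descriptors ... */

nag_close_file(fileid1, &fail);                      /* x04adc */
nag_close_file(fileid2, &fail);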
Normally each optional parameter specification is printed to unit  as it is supplied. Optional parameter  may be used to suppress the printing and optional parameter  may be used to restore printing.
| Old Basis File |  | Default | 
(See Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
If , the basis maps information will be obtained from the file associated with ID .  The file will usually have been output previously as a  or .  A full description of information recorded in  and  is given in Gill et al. (2005a).  The file will not be acceptable if the number of rows or columns in the problem has been altered.
| Optimality Tolerance |  | Default | 
This is used to judge the size of the reduced gradients , where  is the th component of the gradient,  is the associated column of the constraint matrix , and  is the set of dual variables.
By construction, the reduced gradients for basic variables are always zero.  The problem will be declared optimal if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy  respectively, and if  for superbasic variables.
In the above tests,  is a measure of the size of the dual variables.  It is included to make the tests independent of a scale factor on the objective function.  The quantity  actually used is defined by , so that only large scale factors are allowed for.
If the objective is scaled down to be very small, the optimality test reduces to comparing  against .  
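In the notation of Section 11.3, and writing the tests schematically (the scale factor shown is only an assumption about their general shape), the reduced gradient of variable j and the optimality tests take the form
   d_j = g_j - \pi^{\mathrm{T}} a_j,
   d_j \ge -t\,\sigma \ \ (\text{nonbasic at a lower bound}), \qquad d_j \le t\,\sigma \ \ (\text{nonbasic at an upper bound}), \qquad |d_j| \le t\,\sigma \ \ (\text{superbasic}),
where t is the Optimality Tolerance and \sigma denotes the measure of the size of the dual variables described above.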
| Partial Price |  | Default  or | 
This option is recommended for large FP or LP problems that have significantly more variables than constraints (i.e., ).  It reduces the work required for each pricing operation (i.e., when a nonbasic variable is selected to enter the basis).  If , all columns of the constraint matrix  are searched.  If ,  and  are partitioned to give  roughly equal segments , for  (modulo ).  If the previous pricing search was successful on , the next search begins on the segments  and .  If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis.  If nothing is found, the search continues on the next segments , and so on.  If , the default value is used.
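A schematic C sketch of the round-robin segment search described above follows.  It is purely illustrative and is not the Library's implementation: the dynamic tolerance is passed in directly, the columns of A and I are treated as a single list of equal segments, and reduced_gradient is a hypothetical helper returning the reduced gradient of a column with the appropriate sign convention.
/* Sketch: partial pricing over p segments of ncols columns. */
#include <nag.h>

Integer price_segments(Integer ncols, Integer p, Integer last_seg,
                       double dyn_tol,
                       double (*reduced_gradient)(Integer j))
{
  Integer seg, s, j, best = -1;
  Integer seglen = (ncols + p - 1) / p;
  double dmax = dyn_tol;

  for (s = 0; s < p; ++s) {                 /* start after the last successful segment */
    seg = (last_seg + 1 + s) % p;
    for (j = seg * seglen; j < ncols && j < (seg + 1) * seglen; ++j) {
      double d = reduced_gradient(j);
      if (d > dmax) { dmax = d; best = j; } /* keep the largest candidate in this segment */
    }
    if (best >= 0) break;                   /* stop at the first segment with a candidate */
  }
  return best;                              /* -1 means no column was priced out */
}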
| Pivot Tolerance |  | Default | 
Broadly speaking, the pivot tolerance is used to prevent columns entering the basis if they would cause the basis to become almost singular.
When  changes to  for some search direction , a ‘ratio test’ determines which component of  reaches an upper or lower bound first. The corresponding element of  is called the pivot element. Elements of  are ignored (and therefore cannot be pivot elements) if they are smaller than the pivot tolerance .
It is common for two or more variables to reach a bound at essentially the same time. In such cases, the optional parameter  (say ) provides some freedom to maximize the pivot element and thereby improve numerical stability. Excessively small values of  should therefore not be specified. To a lesser extent, the optional parameter  (say ) also provides some freedom to maximize the pivot element. Excessively large values of  should therefore not be specified.
(See Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
If , the following information is output to  during the solution of each problem:
| – | a listing of the optional parameters; | 
| – | some statistics about the problem; | 
| – | the amount of storage available for the  factorization of the basis matrix; | 
| – | notes about the initial basis resulting from a Crash procedure or a Basis file; | 
| – | the iteration log; | 
| – | basis factorization statistics; | 
| – | the exit fail condition and some statistics about the solution obtained; | 
| – | the printed solution, if requested. | 
The last four items are described in Sections 9 and 13.  Further brief output may be directed to the .
| Print Frequency |  | Default | 
If , one line of the iteration log will be printed every th iteration.  A value such as  is suggested for those interested only in the final solution.  If , the value of  is used and effectively no intermediate printing is produced.
This controls the amount of printing produced by 
nag_opt_sparse_convex_qp_solve (e04nqc) as follows.
  
  
  
   
    |  | Meaning | 
    | 0 | No output except error messages. If you want to suppress all output, set . | 
    |  | The set of selected options, problem statistics, summary of the scaling procedure, information about the initial basis resulting from a Crash or a Basis file, a single line of output at each iteration (controlled by the optional parameter ), and the exit condition with a summary of the final solution. | 
    |  | Basis factorization statistics. | 
 
(See 
Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
These files provide compatibility with commercial mathematical programming systems.  The  from a previous run may be used as an  for a later run on the same problem.  A full description of information recorded in  and  is given in Gill et al. (2005a).
If , the final solution obtained will be output to file .
For linear programs, this format is compatible with various commercial systems.
If ,
the  containing basis information will be read.  The file will usually have been output previously as a .  The file will not be accessed if  is specified.
| QPSolver Cholesky |  | Default | 
Specifies the active-set algorithm used to solve the quadratic program in Phase 2 ().  holds the full Cholesky factor  of the reduced Hessian . As the QP iterations proceed, the dimension of  changes with the number of superbasic variables. If the number of superbasic variables needs to increase beyond the value of , the reduced Hessian cannot be stored and the solver switches to . The Cholesky solver is reactivated if the number of superbasics stabilizes at a value less than .
 solves the QP using a quasi-Newton method. In this case,  is the factor of a quasi-Newton approximate Hessian.
 uses an active-set method similar to , but uses the conjugate-gradient method to solve all systems involving the reduced Hessian.
The Cholesky QP solver is the most robust, but may require a significant amount of computation if there are many superbasics.
The quasi-Newton QP solver does not require computation of the exact  at the start of Phase 2 (). It may be appropriate when the number of superbasics is large but relatively few iterations are needed to reach a solution (e.g., if nag_opt_sparse_convex_qp_solve (e04nqc) is called with a Warm Start).
The conjugate-gradient QP solver is appropriate for problems with many degrees of freedom (say, more than  superbasics).
| Reduced Hessian Dimension |  | Default | 
This specifies that an  by  triangular matrix  (to define the reduced Hessian according to ) is to be available for use by the Cholesky QP solver.
| Scale Tolerance |  | Default | 
Three scale options are available as follows: 
|  | Meaning | 
| 0 | No scaling.  This is recommended if it is known that  and the constraint matrix never have very large elements (say, larger than ). | 
| 1 | The constraints and variables are scaled by an iterative procedure that attempts to make the matrix coefficients as close as possible to  (see Fourer (1982)).  This will sometimes improve the performance of the solution procedures. | 
| 2 | The constraints and variables are scaled by the iterative procedure.  Also, a certain additional scaling is performed that may be helpful if the right-hand side  or the solution  is large.  This takes into account columns of  that are fixed or have positive lower bounds or negative upper bounds. | 
 
Optional parameter  affects how many passes might be needed through the constraint matrix.  On each pass, the scaling procedure computes the ratio of the largest and smallest nonzero coefficients in each column: .  If  is less than  times its previous value, another scaling pass is performed to adjust the row and column scales.  Raising  from  to  (say) usually increases the number of scaling passes through .  At most  passes are made.  The value of  should lie in the range .
 causes the row scales  and column scales  to be printed to , if  has been specified.  The scaled matrix coefficients are , and the scaled bounds on the variables and slacks are , , where  if .
This option determines if the final obtained solution is to be output to the
. Note that the  option operates independently.
(See 
Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
If , the final solution will be output to file  (whether optimal or not).
To see more significant digits in the printed solution, it will sometimes be useful to make 
.
| Summary Frequency |  | Default | 
(See Section 3.3.1.1 in How to Use the NAG Library and its Documentation for further information on NAG data types.)
If , a brief log will be output to file , including one line of information every th iteration.
In an interactive environment, it is useful to direct this output to the terminal, to allow a run to be monitored online.  (If something looks wrong, the run can be manually terminated.)  Further details are given in Section 13.  If , the value of  is used and effectively no summary output is produced.
| Superbasics Limit |  | Default | 
This places a limit on the storage allocated for superbasic variables.  Ideally,  should be set slightly larger than the ‘number of degrees of freedom’ expected at an optimal solution.
For linear programs, an optimum is normally a basic solution with no degrees of freedom.  (The number of variables lying strictly between their bounds is no more than , the number of general constraints.) The default value of  is therefore .
For quadratic problems, the number of degrees of freedom is often called the ‘number of independent variables’. Normally,  need not be greater than , where  is the number of leading nonzero columns of . For many problems,  may be considerably smaller than .  This will save storage if  is very large.
Normally nag_opt_sparse_convex_qp_solve (e04nqc) prints the options file as it is being read, and then prints a complete list of the available keywords and their final values.  The optional parameter  tells nag_opt_sparse_convex_qp_solve (e04nqc) not to print the full list.
| System Information No |  | Default | 
This option prints additional information on the progress of major and minor iterations, and Crash statistics. See 
Section 13.
If , some timing information will be output to the Print file, if .
| Unbounded Step Size |  | Default | 
If ,  specifies the magnitude of the change in variables that will be considered a step to an unbounded solution.  (Note that an unbounded solution can occur only when the Hessian is not positive definite.)  If the change in  during an iteration would exceed the value of , the objective function is considered to be unbounded below in the feasible region.  If , the default value is used. See  for the definition of .
 
13
 Description of Monitoring Information
This section describes the intermediate printout and final printout which constitutes the monitoring information produced by nag_opt_sparse_convex_qp_solve (e04nqc).  (See also the description of the optional parameters  and .)  You can control the level of printed output.
 
13.1
 Crash Statistics
When ,  and  has been specified, the following lines of intermediate printout (less than  characters) are produced on the unit number specified by optional parameter  whenever  (see Section 5).  They refer to the number of columns selected by the Crash procedure during each of several passes through , whilst searching for a triangular basis matrix.
| Label | Description | 
| Slacks | is the number of slacks selected initially. | 
| Free cols | is the number of free columns in the basis, including those whose bounds are rather far apart. | 
| Preferred | is the number of ‘preferred’ columns in the basis (i.e.,  for some ).  It will be a subset of the columns for which  was specified. | 
| Unit | is the number of unit columns in the basis. | 
| Double | is the number of double columns in the basis. | 
| Triangle | is the number of triangular columns in the basis. | 
| Pad | is the number of slacks used to pad the basis (to make it a nonsingular triangle). | 
 
13.2
 Basis Factorization Statistics
When  and , the first seven items of intermediate printout in the list below are produced on the unit number specified by optional parameter  whenever the matrix  or  is factorized.  Gaussian elimination is used to compute an  factorization of  or , where  is a lower triangular matrix and  is an upper triangular matrix for some permutation matrices  and .  The factorization is stabilized in the manner described under the optional parameter .  In addition, if  has been specified, the entries from Elems onwards are also output.
| Label | Description | 
| Factor | the number of factorizations since the start of the run. | 
| Demand | a code giving the reason for the present factorization, as follows: 
  
  
  
   
    | Code | Meaning |  
    | 0 | First  factorization. |  
    | 1 | The number of updates reached the . |  
    | 2 | The nonzeros in the updated factors have increased significantly. |  
    | 7 | Not enough storage to update factors. |  
    | 10 | Row residuals are too large (see the description of the optional parameter ). |  
    | 11 | Ill-conditioning has caused inconsistent results. |  | 
| Itn | is the current minor iteration number. | 
| Nonlin | is the number of nonlinear variables in the current basis . | 
| Linear | is the number of linear variables in . | 
| Slacks | is the number of slack variables in . | 
| B, BR, BS or BT factorize | is the type of  factorization. 
  
  
  
   
    | B | periodic factorization of the basis . |  
    | BR | more careful rank-revealing factorization of  using threshold rook pivoting.  This occurs mainly at the start, if the first basis factors seem singular or ill-conditioned. Followed by a normal B factorize. |  
    | BS | is factorized to choose a well-conditioned  from the current .  Followed by a normal B factorize. |  
    | BT | same as BS except the current  is tried first and accepted if it appears to be not much more ill-conditioned than after the previous BS factorize. |  | 
| m | is the number of rows in  or . | 
| n | is the number of columns in  or . Preceded by ‘=’ or ‘>’ respectively. | 
| Elems | is the number of nonzero elements in  or . | 
| Amax | is the largest nonzero in  or . | 
| Density | is the percentage nonzero density of  or . | 
| Merit/MerRP/MerCP | Merit is the average Markowitz merit count for the elements chosen to be the diagonals of .  Each merit count is defined to be  where  and  are the number of nonzeros in the column and row containing the element at the time it is selected to be the next diagonal.  Merit is the average of n such quantities.  It gives an indication of how much work was required to preserve sparsity during the factorization. If  or  has been selected, this heading is changed to MerCP, respectively MerRP. | 
| lenL | is the number of nonzeros in . | 
| L+U | is the number of nonzeros representing the basis factors  and .  Immediately after a basis factorization , this is lenL+lenU, the number of subdiagonal elements in the columns of a lower triangular matrix and the number of diagonal and superdiagonal elements in the rows of an upper-triangular matrix.  Further nonzeros are added to L when various columns of  are later replaced.  As columns of  are replaced, the matrix  is maintained explicitly (in sparse form).  The value of L will steadily increase, whereas the value of U may fluctuate up or down.  Thus the value of L+U may fluctuate up or down (in general, it will tend to increase). | 
| Cmpressns | is the number of times the data structure holding the partially factored matrix needed to be compressed to recover unused storage.  Ideally this number should be zero.  If it is more than  or , the amount of workspace available to nag_opt_sparse_convex_qp_solve (e04nqc) should be increased for efficiency. | 
| Incres | is the percentage increase in the number of nonzeros in  and  relative to the number of nonzeros in  or . | 
| Utri | is the number of triangular rows of  or  at the top of . | 
| lenU | the number of nonzeros in , including its diagonals. | 
| Ltol | is the largest subdiagonal element allowed in .  This is the specified  or a smaller value that is currently being used for greater stability. | 
| Umax | the maximum nonzero element in . | 
| Ugrwth | is the ratio , which ideally should not be substantially larger than  or .  If it is orders of magnitude larger, it may be advisable to reduce the  to , ,  or , say (but bigger than ). As long as Lmax is not large (say  or less),  gives an estimate of the condition number .  If this is extremely large, the basis is nearly singular.  Slacks are used to replace suspect columns of  and the modified basis is refactored. | 
| Ltri | is the number of triangular columns of  or  at the left of . | 
| dense1 | is the number of columns remaining when the density of the basis matrix being factorized reached . | 
| Lmax | is the actual maximum subdiagonal element in  (bounded by Ltol). | 
| Akmax | is the largest nonzero generated at any stage of the  factorization.  (Values much larger than Amax indicate instability.) Akmax is not printed if  is selected. | 
| Agrwth | is the ratio .  Values much larger than  (say) indicate instability. Agrwth is not printed if  is selected. | 
| bump | is the size of the block to be factorized nontrivially after the triangular rows and columns of  or  have been removed. | 
| dense2 | is the number of columns remaining when the density of the basis matrix being factorized reached .  (The Markowitz pivot strategy searches fewer columns at that stage.) | 
| DUmax | is the largest diagonal of . | 
| DUmin | is the smallest diagonal of . | 
| condU | the ratio , which estimates the condition number of  (and of  if Ltol is less than , say). | 
 
13.3
 Basis Map
When  and , the following lines of intermediate printout (less than  characters) are produced on the unit number specified by optional parameter .  They refer to the elements of the names array (see Section 5).
| Label | Description | 
| Name | gives the name for the problem (blank if problem unnamed). | 
| Infeasibilities | gives the number of infeasibilities. Printed only if the final point is infeasible. | 
| Objective Value | gives the objective value at the final point (or the value of the sum of infeasibilities). Printed only if the final point is feasible. | 
| Status | gives the exit status for the problem (i.e., Optimal soln, Weak soln, Unbounded, Infeasible, Excess itns, Error condn or Feasble soln) followed by details of the direction of the optimization (i.e., (Min) or (Max)). | 
| Iteration | gives the iteration number when the file was created. | 
| Superbasics | gives the number of superbasic variables. | 
| Objective | gives the name of the free row for the problem (blank if objective unnamed). | 
| RHS | gives the name of the constraint right-hand side for the problem (blank if objective unnamed). | 
| Ranges | gives the name of the ranges for the problem (blank if objective unnamed). | 
| Bounds | gives the name of the bounds for the problem (blank if objective unnamed). | 
 
13.4
 Solution Output
At the end of a run, the final solution will be output to the Print file. Some header information appears first to identify the problem and the final state of the optimization procedure. A ROWS section and a COLUMNS section then follow, giving one line of information for each row and column.
 
13.4.1
 The ROWS section
General constraints take the form .  The th constraint is therefore of the form , where  is the th row of .
Internally, the constraints take the form , where  is the set of slack variables (which happen to satisfy the bounds ).  For the th constraint, the slack variable  is directly available, and it is sometimes convenient to refer to its state.  It should satisfy .  A fullstop (.) is printed for any numerical value that is exactly zero.
| Label | Description | 
| Number | is the value of .  (This is used internally to refer to  in the intermediate output.) | 
| Row | gives the name of . | 
| State | the state of  (the state of  relative to the bounds  and ). The various states possible are as follows: 
| LL | is nonbasic at its lower limit, . |  
| UL | is nonbasic at its upper limit, . |  
| EQ | is nonbasic and fixed at the value . |  
| FR | is nonbasic and currently zero, even though it is free to take any value between its bounds  and . |  
| BS | is basic. |  
| SBS | is superbasic. | 
 
A key is sometimes printed before State .
 Note that unless the optional parameter   is specified, the tests for assigning a key are applied to the variables of the scaled problem.
  
| A | Alternative optimum possible.  The variable is nonbasic, but its reduced gradient is essentially zero.  This means that if the variable were allowed to start moving away from its bound, there would be no change in the value of the objective function.  The values of the other free variables might change, giving a genuine alternative solution.  However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately.  In either case, the values of the Lagrange multipliers might also change. |  
| D | Degenerate.  The variable is basic or superbasic, but it is equal (or very close) to one of its bounds. |  
| I | Infeasible.  The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the . |  
| N | Not precisely optimal.  If the slack is superbasic, the dual variable  is not sufficiently small, as measured by the .  If the slack is nonbasic,  is not sufficiently positive or negative.  If a loose  has been used, or if iterations were terminated before optimality, this key might be helpful in deciding whether or not to restart the run. | 
 | 
| Activity | is the value of  at the final iterate. | 
| Slack Activity | is the value by which the row differs from its nearest bound.  (For the free row (if any), it is set to Activity.) | 
| Lower Limit | is , the lower bound specified for the variable .  None indicates that . | 
| Upper Limit | is , the upper bound specified for the variable .  None indicates that . | 
| Dual Activity | is the value of the dual variable  (the Lagrange multiplier for ; see Section 11.3).  For FP problems,  is set to zero. | 
| i | gives the index  of the th row. | 
 
13.4.2
 The COLUMNS Section
Let the th component of  be the variable  and assume that it satisfies the bounds .  A fullstop (.) is printed for any numerical value that is exactly zero.
| Label | Description | 
| Number | is the column number .  (This is used internally to refer to  in the intermediate output.) | 
| Column | gives the name of . | 
| State | the state of  relative to the bounds  and . The various states possible are as follows: 
| LL | is nonbasic at its lower limit, . |  
| UL | is nonbasic at its upper limit, . |  
| EQ | is nonbasic and fixed at the value . |  
| FR | is nonbasic and currently zero, even though it is free to take any value between its bounds  and . |  
| BS | is basic. |  
| SBS | is superbasic. | 
 
A key is sometimes printed before State .
 Note that unless the optional parameter   is specified, the tests for assigning a key are applied to the variables of the scaled problem.
  
| A | Alternative optimum possible.  The variable is nonbasic, but its reduced gradient is essentially zero.  This means that if the variable were allowed to start moving away from its bound, there would be no change in the value of the objective function.  The values of the other free variables might change, giving a genuine alternative solution.  However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately.  In either case, the values of the Lagrange multipliers might also change. |  
| D | Degenerate.  The variable is basic or superbasic, but it is equal (or very close) to one of its bounds. |  
| I | Infeasible.  The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the . |  
| N | Not precisely optimal.  If the slack is superbasic, the dual variable  is not sufficiently small, as measured by the .  If the slack is nonbasic,  is not sufficiently positive or negative.  If a loose  has been used, or if iterations were terminated before optimality, this key might be helpful in deciding whether or not to restart the run. | 
 | 
| Activity | is the value of  at the final iterate. | 
| Obj Gradient | is the value of  at the final iterate.  For FP problems,  is set to zero. | 
| Lower Limit | is the lower bound specified for the variable.  None indicates that . | 
| Upper Limit | is the upper bound specified for the variable.  None indicates that . | 
| Reduced Gradnt | is the value of  at the final iterate (see Section 11.3).  For FP problems,  is set to zero. | 
| m + j | is the value of . | 
Note:  if two problems are the same except that one minimizes  and the other maximizes , their solutions will be the same but the signs of the dual variables  and the reduced gradients  will be reversed.
 
13.5
 The Solution File
If ,
the information contained in a printed solution may also be output to the relevant file (which may be the Print file if so desired). Infinite Upper and Lower limits appear as  rather than None.
The maximum line length is  characters.
A Solution file is intended to be read from disk by a self-contained program that extracts and saves certain values as required for possible further computation.  Typically the first  lines would be ignored.
The end of the ROWS section is marked by a line that starts with a  and is otherwise blank.  If this and the next  lines are skipped, the COLUMNS section (see Section 13.4.2) can then be read under the same format.
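A skeleton of such a reader is sketched below in C.  The number of header lines to skip, the end-of-section test and the field layout are placeholders to be matched against an actual Solution file; the sketch only shows the overall structure of skipping a header, then reading the ROWS and COLUMNS sections.
/* Sketch: skim a Solution file produced by e04nqc. */
#include <stdio.h>
#include <string.h>

int read_solution_file(const char *path, int header_lines)
{
  char line[256];
  int i;
  FILE *fp = fopen(path, "r");
  if (!fp) return -1;

  for (i = 0; i < header_lines; ++i)            /* skip the header records */
    if (!fgets(line, sizeof line, fp)) { fclose(fp); return -1; }

  /* ROWS section: stop at a line that is blank after its first character
     (a crude stand-in for the marker line described above). */
  while (fgets(line, sizeof line, fp)) {
    if (strspn(line + 1, " \r\n") == strlen(line + 1)) break;
    /* ... extract the required Row values from 'line' here ... */
  }

  /* (Any further header records following the marker are not skipped here.) */
  while (fgets(line, sizeof line, fp)) {
    /* ... extract the required Column values from 'line' here ... */
  }

  fclose(fp);
  return 0;
}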
 
13.6
 The Summary File
If , certain brief information will be output to file. A disk file should be used to retain a concise log of each run if desired. (A  is more easily perused than the associated ).
The following information is included:
| 1. | The optional parameters supplied via the option setting functions, if any; | 
| 2. | The Basis file loaded, if any; | 
| 3. | The status of the solution after each basis factorization (whether feasible; the objective value; the number of function calls so far); | 
| 4. | The same information every th iteration, where  is the specified ; | 
| 5. | Warnings and error messages; | 
| 6. | The exit condition and a summary of the final solution. | 
Item 4 is preceded by a blank line, but item 5 is not.
The meaning of the printout for linear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’,  replaced by ,  replaced by ,  and  replaced by  and  respectively, and with the following change in the heading:
| Constrnt | gives the name of the linear constraint. | 
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Residual column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.