Math 516A
Numerical Optimization

James Burke

Spring 2003, Tuesday/Thursday 8:30-9:50


An optimization problem is one in which a given function is either minimized or maximized relative to a restricted range of choices available in a given situation. Optimization problems arise in engineering design, portfolio design, system management, parameter estimation, statistics, and in the modeling of physical and behavioral phenomena. AMath/Math 516 is an introductory course in numerical methods for continuous optimization in finite dimensions. The course is designed as a sequel to AMath/Math/IND E 515 (Fundamentals of Optimization). In 515 the basic theory for continuous optimization problems is developed: optimality conditions for both constrained and unconstrained problems, regularity conditions (or constraint qualifications), and the role of convexity and polyhedrality. Some numerical techniques are also examined in 515, but primarily to illustrate and motivate the theoretical development.

AMath/Math 516 begins with a detailed discussion of the basic convergence theory for numerical methods of optimization. Both global and local theories are developed. Special attention is given to descent methods based on exact first-order information and approximate second-order information. The basic line-search and trust-region methodologies are examined, as well as matrix secant approximations of curvature. Generalizations to nondifferentiable functions in composite format are also studied. The study of methods for constrained problems begins with a review of interior-point path-following methodology for linear and quadratic programs. We then proceed to successively more general classes of nonlinearly constrained problems and examine gradient projection, penalty, multiplier, and sequential quadratic programming algorithms. In the study of algorithms, emphasis is placed on practical issues of implementation. In this regard, some familiarity with the programming language MATLAB is helpful.
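
For students who have not used MATLAB, here is a small taste (base MATLAB only, no toolboxes): minimizing the Rosenbrock test function with the built-in derivative-free simplex search. This is only an illustration of the style of computation the course expects, not course code.

    f = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;   % Rosenbrock function
    [xstar, fstar] = fminsearch(f, [-1.2; 1])        % converges near [1; 1]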

TOPICS

Review of Math 515 Essentials:
Basic linear algebra, continuity and differentiability, convexity, optimality conditions
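
As a one-line reminder of how these pieces fit together: for the strictly convex quadratic f(x) = (1/2)x'Qx - b'x, the first-order optimality condition is the linear system Qx = b, which can be checked directly in MATLAB (Q and b below are illustrative; Q is assumed symmetric positive definite).

    Q = [4 1; 1 3]; b = [1; 2];
    xstar = Q \ b;              % unique stationary point solves Q*x = b
    gradient = Q*xstar - b      % numerically zero at the minimizer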
Basic Convergence Results:
line search methods, backtracking, trust-region methods, convergence rates (linear, super-linear, quadratic), Newton's method for equations and optimization, dogleg methods, methods for composite functions
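
To make the line-search machinery concrete, here is a minimal MATLAB sketch of a damped Newton iteration with Armijo backtracking. The function handles f, g, H (objective, gradient, Hessian), the starting point x, and the step-halving rule are illustrative assumptions, not course code.

    c = 1e-4; tol = 1e-8;                    % sufficient-decrease constant, stop tolerance
    while norm(g(x)) > tol
        d = -H(x) \ g(x);                    % Newton direction
        t = 1;                               % try the unit step first
        while f(x + t*d) > f(x) + c*t*(g(x)'*d)
            t = t/2;                         % Armijo backtracking
        end
        x = x + t*d;
    end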
Conjugate Direction Methods:
methods for quadratic objectives, iterative refinement, extensions to non-quadratic functions, Fletcher-Reeves algorithm, Polak-Ribière algorithm, restart criteria
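
As an illustration, a minimal MATLAB sketch of the linear conjugate gradient method on a strictly convex quadratic (1/2)x'Qx - b'x, with the Fletcher-Reeves formula for beta; the names Q, b, x, tol are assumptions for the sketch.

    r = b - Q*x; d = r;
    while norm(r) > tol
        alpha = (r'*r) / (d'*(Q*d));         % exact line search for a quadratic
        x = x + alpha*d;
        rnew = r - alpha*(Q*d);
        beta = (rnew'*rnew) / (r'*r);        % Fletcher-Reeves update
        d = rnew + beta*d;
        r = rnew;
    end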
Matrix Secant Methods:
equation solving and Broyden's method, minimization and rank-2 updating, BFGS updating
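
The flavor of rank-2 updating can be seen in one BFGS update of an inverse-Hessian approximation Hinv, given the step s between iterates and the gradient change y (a sketch; the curvature condition s'*y > 0 is assumed so that rho is well defined).

    rho = 1 / (y'*s);
    V = eye(length(s)) - rho*(y*s');
    Hinv = V' * Hinv * V + rho*(s*s');       % BFGS inverse update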
Constrained Optimization Review:
polyhedral constraints, box constraints, convex constraints, nonlinear constraints
LP, QP, and LCP:
linear programming, quadratic programming, linear complementarity, interior point methods, path-following methods
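
For a rough sense of one path-following iteration on the LP min c'x subject to Ax = b, x >= 0: take a Newton step on the perturbed KKT system A'y + s = c, Ax = b, x_i s_i = mu. All names below are illustrative, and the fraction-to-the-boundary step-length rule that keeps x and s positive is omitted.

    n = length(x); m = size(A,1); e = ones(n,1);
    mu = sigma*(x'*s)/n;                     % centering target, 0 < sigma < 1
    X = diag(x); S = diag(s);
    KKT = [zeros(n), A', eye(n);             % Jacobian of the KKT residual
           A, zeros(m), zeros(m,n);
           S, zeros(n,m), X];
    res = -[A'*y + s - c; A*x - b; x.*s - mu*e];
    dxyz = KKT \ res;                        % Newton direction (dx; dy; ds)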
Gradient Projection Algorithm:
convex constraints, quadratic constraints, polyhedral constraints, box constraints
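
A minimal sketch of projected gradient steps for box constraints l <= x <= u, where projection is available in closed form. The gradient handle g, the fixed step size t, and maxit are assumptions; step-size selection is treated carefully in the course.

    P = @(z) min(max(z, l), u);              % projection onto the box
    for k = 1:maxit
        x = P(x - t*g(x));                   % gradient step, then project
    end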
Penalty Methods:
exterior and interior penalty methods, barrier methods, exact penalization, Sl1QP methods
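
The exterior quadratic penalty idea in sketch form, for min f(x) subject to c(x) = 0: minimize f plus a quadratic measure of infeasibility for an increasing penalty parameter. Here fminsearch stands in for an inner solver purely for illustration, and the tenfold rho schedule is an assumption.

    rho = 1;
    for k = 1:10
        pen = @(z) f(z) + (rho/2)*norm(c(z))^2;   % penalized objective
        x = fminsearch(pen, x);                   % approximate inner solve
        rho = 10*rho;                             % tighten the penalty
    end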
Multiplier Methods:
Augmented Lagrangian, strong second-order sufficiency conditions
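
A sketch of the method of multipliers for c(x) = 0, using the augmented Lagrangian L_A(x, lambda) = f(x) + lambda'c(x) + (rho/2)||c(x)||^2 and the first-order multiplier update. Again fminsearch is only a stand-in inner solver; lambda, rho, and the handles f, c are illustrative.

    for k = 1:10
        LA = @(z) f(z) + lambda'*c(z) + (rho/2)*norm(c(z))^2;
        x = fminsearch(LA, x);               % inner unconstrained solve
        lambda = lambda + rho*c(x);          % first-order multiplier update
    end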
Sequential Quadratic Programming:
basic introduction to SQP methodology and theory
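
One equality-constrained SQP step, obtained by solving the KKT system of the local QP subproblem min g'd + (1/2)d'Bd subject to Jd + c = 0. Here B approximates the Hessian of the Lagrangian, and g, c, J are the gradient, constraint values, and constraint Jacobian at the current x; all are assumed given for the sketch.

    m = length(c); n = length(g);
    K = [B, J'; J, zeros(m)];                % KKT matrix of the QP subproblem
    sol = K \ (-[g; c]);
    d = sol(1:n);                            % step in x
    lambda = sol(n+1:end);                   % new multiplier estimate
    x = x + d;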