Math 516: Numerical Optimization

Jim Burke

Spring 2001-2002, TTH 8:30-10:50


  

An optimization problem is one in which a given function is either minimized or maximized relative to a restricted range of choices available in a given situation. Optimization problems arise in engineering design, portfolio design, system management, parameter estimation, statistics, and in the modeling of physical and behavioral phenomena. The 516 course furnishes an introduction to numerical methods for continuous optimization in finite dimensions. It is designed as a sequel to the 515 course. Although some numerical techniques are examined in 515, they appear there primarily to illustrate and motivate the theoretical developments, and to enable students to go all the way from modeling a problem to obtaining a solution. In 516 this subject is carried much further, with more techniques and deeper involvement in the rigors and pitfalls of computation.

 

The 516 course begins with a detailed discussion of the basic convergence theory for numerical methods of optimization.  Both global and local theories are developed.  Special attention is given to descent methods based on exact first-order information and approximate second-order information.  The basic line-search and trust region methodologies are examined.  Generalizations to nondifferentiable functions in composite format are also studied.  The study of constrained methods begins with a review of interior point path-following methodology for linear programs, quadratic programs, and the monotone linear complementarity problem. This is followed by a review of gradient projection methods, penalty methods, and multiplier methods for constrained optimization problems.

 

Review of Math 515 Essentials:

Basic linear algebra, continuity and differentiability, convexity, optimality conditions

 

Convergence:

line search methods, backtracking, trust-region methods, convergence rates (linear, superlinear, quadratic), Newton's method for equations and optimization, dogleg methods, methods for composite functions
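
For concreteness, here is a minimal NumPy sketch, with names and tolerances of my own choosing, of steepest descent with an Armijo backtracking line search; its slow progress on the Rosenbrock function is typical of a linear convergence rate.

    import numpy as np

    def backtracking(f, grad, x, d, alpha=1.0, c=1e-4, rho=0.5):
        """Armijo backtracking: shrink the step until sufficient decrease holds."""
        fx, gd = f(x), grad(x).dot(d)
        while f(x + alpha * d) > fx + c * alpha * gd:
            alpha *= rho
        return alpha

    def steepest_descent(f, grad, x, tol=1e-6, max_iter=20000):
        """Steepest descent with Armijo backtracking; linear convergence rate."""
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            d = -g                                   # steepest-descent direction
            x = x + backtracking(f, grad, x, d) * d
        return x

    # Example: the Rosenbrock function, minimized at (1, 1).
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                               200*(x[1] - x[0]**2)])
    print(steepest_descent(f, grad, np.array([-1.2, 1.0])))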

 

Conjugate Direction Methods:

methods for quadratic objectives, iterative refinement, extensions to non-quadratic functions, the Fletcher-Reeves and Polak-Ribière algorithms, restart criteria
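
For a quadratic objective (1/2)x'Ax - b'x with A symmetric positive definite, conjugate directions give finite termination in at most n steps. A minimal sketch (function name mine); the beta formula below is the one the Fletcher-Reeves algorithm carries over to non-quadratic functions.

    import numpy as np

    def conjugate_gradient(A, b, x, tol=1e-10):
        """Minimize (1/2) x'Ax - b'x by conjugate gradients.
        The residual r = b - Ax is the negative gradient; successive
        search directions are A-conjugate."""
        r = b - A @ x
        d = r.copy()
        rs = r.dot(r)
        for _ in range(len(b)):
            Ad = A @ d
            alpha = rs / d.dot(Ad)        # exact line search along d
            x = x + alpha * d
            r = r - alpha * Ad
            rs_new = r.dot(r)
            if np.sqrt(rs_new) < tol:
                break
            d = r + (rs_new / rs) * d     # Fletcher-Reeves beta = |r+|^2 / |r|^2
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b, np.zeros(2)))   # exact solution [1/11, 7/11]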

 

Matrix Secant Methods:

equation solving and Broyden's method, minimization and rank-two updating, BFGS updating
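
As an illustration of rank-two updating, a sketch of the BFGS update of an inverse-Hessian approximation (the function name is mine); the updated matrix satisfies the secant equation H y = s exactly.

    import numpy as np

    def bfgs_update(H, s, y):
        """BFGS rank-two update of an inverse-Hessian approximation H,
        where s = x_new - x_old and y = grad_new - grad_old; requires s'y > 0."""
        rho = 1.0 / y.dot(s)
        V = np.eye(len(s)) - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

    # For a quadratic with Hessian A, the exact curvature pair is y = A s.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    s = np.array([1.0, 0.5])
    y = A @ s
    H = bfgs_update(np.eye(2), s, y)
    print(np.allclose(H @ y, s))   # True: the secant equation holds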

 

Constrained Optimization Review:

polyhedral constraints, box constraints, convex constraints, nonlinear constraints

 

LP, QP, and LCP:

linear programming, quadratic programming, linear complementarity, interior point methods, path-following methods
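
For orientation, a bare-bones sketch of a path-following iteration for the monotone LCP (find z, w >= 0 with w = Mz + q and z'w = 0): each iteration takes a damped Newton step toward the relaxed centering condition z_i w_i = sigma * mu. The centering parameter, damping rule, and iteration count here are placeholders, not a statement of the methods treated in the course.

    import numpy as np

    def lcp_path_following(M, q, iters=30, sigma=0.2):
        """Damped Newton steps on [Mz + q - w; ZWe - sigma*mu*e] = 0,
        keeping (z, w) strictly positive along the way."""
        n = len(q)
        z, w = np.ones(n), np.ones(n)
        for _ in range(iters):
            mu = z.dot(w) / n
            J = np.block([[M, -np.eye(n)],
                          [np.diag(w), np.diag(z)]])
            rhs = -np.concatenate([M @ z + q - w, z * w - sigma * mu])
            step = np.linalg.solve(J, rhs)
            dz, dw = step[:n], step[n:]
            alpha = 1.0                    # damp to stay strictly positive
            while np.any(z + alpha * dz <= 0) or np.any(w + alpha * dw <= 0):
                alpha *= 0.5
            z, w = z + 0.9 * alpha * dz, w + 0.9 * alpha * dw
        return z, w

    M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite, hence monotone
    q = np.array([-5.0, -6.0])
    z, w = lcp_path_following(M, q)
    print(np.round(z, 4), np.round(w, 4))    # z ~ [4/3, 7/3], w ~ 0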

 

Gradient Projection Algorithm:

convex constraints, quadratic constraints, polyhedral constraints, box constraints
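
For box constraints the projection is simply componentwise clipping, so the gradient projection iteration x <- P(x - t grad f(x)) is easy to sketch (the fixed step length is chosen for this example only).

    import numpy as np

    def gradient_projection(grad, x, lo, hi, step=0.1, iters=500, tol=1e-8):
        """Projected gradient iteration for min f(x) over the box [lo, hi]:
        projection onto a box is componentwise clipping."""
        for _ in range(iters):
            x_new = np.clip(x - step * grad(x), lo, hi)
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # Example: project c onto the box by minimizing |x - c|^2 / 2 over [0,1]^2.
    c = np.array([1.5, -0.5])
    grad = lambda x: x - c
    print(gradient_projection(grad, np.zeros(2), 0.0, 1.0))   # -> [1.0, 0.0]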

 

Penalty Methods:

exterior and interior penalty methods, barrier methods, exact penalization, Sl1QP methods
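
A sketch of the exterior quadratic penalty approach for an equality-constrained problem: minimize f(x) + (rho/2)c(x)^2 for an increasing sequence of penalty parameters, warm-starting each subproblem at the previous solution. The inner solver and parameter schedule below are crude placeholders.

    import numpy as np

    def quadratic_penalty(f, gf, c, gc, x, rhos=(1, 10, 100, 1000, 10000)):
        """Exterior penalty for min f(x) s.t. c(x) = 0 (scalar constraint)."""
        for rho in rhos:
            pen = lambda x: f(x) + 0.5 * rho * c(x) ** 2
            grad = lambda x: gf(x) + rho * c(x) * gc(x)
            for _ in range(5000):            # crude Armijo descent inner solver
                g = grad(x)
                if np.linalg.norm(g) < 1e-8:
                    break
                t = 1.0
                while pen(x - t * g) > pen(x) - 1e-4 * t * g.dot(g):
                    t *= 0.5
                x = x - t * g
        return x

    # min x1 + x2  s.t.  x1^2 + x2^2 = 2; the solution is (-1, -1).
    f = lambda x: x[0] + x[1]
    gf = lambda x: np.ones(2)
    c = lambda x: x[0]**2 + x[1]**2 - 2
    gc = lambda x: 2 * x
    print(np.round(quadratic_penalty(f, gf, c, gc, np.zeros(2)), 4))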

 

Multiplier Methods:

augmented Lagrangian methods, strong second-order sufficiency conditions
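
In contrast to pure penalty methods, the method of multipliers can keep the penalty parameter bounded by updating a multiplier estimate between subproblems. A sketch on the same kind of equality-constrained model problem (inner solver and parameters are again placeholders):

    import numpy as np

    def augmented_lagrangian(f, gf, c, gc, x, rho=10.0, outer=20):
        """Method of multipliers for min f(x) s.t. c(x) = 0 (scalar constraint):
        minimize L(x) = f + lam*c + (rho/2)c^2 in x, then set lam += rho*c(x)."""
        lam = 0.0
        for _ in range(outer):
            L = lambda x: f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2
            grad = lambda x: gf(x) + (lam + rho * c(x)) * gc(x)
            for _ in range(2000):            # crude Armijo descent inner solver
                g = grad(x)
                if np.linalg.norm(g) < 1e-10:
                    break
                t = 1.0
                while L(x - t * g) > L(x) - 1e-4 * t * g.dot(g):
                    t *= 0.5
                x = x - t * g
            lam += rho * c(x)                # first-order multiplier update
        return x, lam

    # min x1 + x2  s.t.  x1^2 + x2^2 = 2; x -> (-1, -1) with multiplier 1/2.
    f = lambda x: x[0] + x[1]
    gf = lambda x: np.ones(2)
    c = lambda x: x[0]**2 + x[1]**2 - 2
    gc = lambda x: 2 * x
    x, lam = augmented_lagrangian(f, gf, c, gc, np.array([-0.5, -0.5]))
    print(np.round(x, 6), round(float(lam), 6))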

 

Sequential Quadratic Programming:

an introduction to SQP methodology and theory
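
In the equality-constrained case, the local SQP iteration is Newton's method applied to the KKT conditions; a minimal sketch without any globalization (the problem data and starting multiplier are illustrative).

    import numpy as np

    def sqp_equality(gf, HL, c, Jc, x, lam, iters=15):
        """Local SQP for min f(x) s.t. c(x) = 0: each step solves the KKT
        system of the quadratic model built from the Lagrangian Hessian
        HL(x, lam), updating both x and the multipliers lam."""
        n, m = len(x), len(lam)
        for _ in range(iters):
            J = Jc(x)
            KKT = np.block([[HL(x, lam), J.T],
                            [J, np.zeros((m, m))]])
            rhs = -np.concatenate([gf(x) + J.T @ lam, c(x)])
            step = np.linalg.solve(KKT, rhs)
            x, lam = x + step[:n], lam + step[n:]
        return x, lam

    # min x1 + x2  s.t.  x1^2 + x2^2 = 2; solution x = (-1, -1), lam = 1/2.
    gf = lambda x: np.ones(2)
    c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
    Jc = lambda x: np.array([[2 * x[0], 2 * x[1]]])
    HL = lambda x, lam: lam[0] * 2.0 * np.eye(2)   # objective Hessian is zero
    x, lam = sqp_equality(gf, HL, c, Jc, np.array([-2.0, -1.0]), np.array([1.0]))
    print(np.round(x, 6), np.round(lam, 6))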