UW OPTIMIZATION SEMINAR
The Optimization Seminar
provides an informal gathering for UW faculty,
students, and visitors to exchange research interests and ideas
on optimization. This term the seminar will focus on student
research projects.
Time and Place
Tuesdays from 10:30am to 11:20am in Padelford C-401.
Comments and questions about the seminar may be directed to Jim Burke
during the Winter quarter
(burke@math.washington.edu).
This seminar may be taken by UW students for credit as Math 530D.
Schedule
-
Date: Tuesday, January 23, 10:30am
Speaker: Jim Burke
Place: Padelford C-401
Title:
The polynomial numerical hull of degree k, the numerical range,
and convex analysis
-
Date: Tuesday, January 30, 10:30am
Speaker:
Julie Eaton
Place: Padelford C-401
Title: TBA
-
Date: Tuesday, February 6, 10:30am
Speaker:
Liang (Linda) Xu
Place: Padelford C-401
Title: Exploring the LM-BFGS Update
Abstract:
The LM-BFGS (Nocedal et al.) method is a quasi-Newton approach to Hessian
approximation that incorporates local curvature information while reducing
dimensionality in search direction computations. The method requires the storage of
iterate and gradient information from only the m most recent steps (e.g., m = 5).
The updating strategy guarantees that the approximated Hessian matrix is positive
definite. In this talk, I will discuss the formulation of LM-BFGS updating and how
to utilize its structure in solving QP subproblems.
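For reference, here is a minimal NumPy sketch (not the speaker's code) of the
standard two-loop recursion used in limited-memory BFGS to apply the implicit
inverse-Hessian approximation to a gradient; it assumes the m most recent pairs
s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i are stored, oldest first, with
s_i^T y_i > 0 so that the approximation stays positive definite.

import numpy as np

def lbfgs_direction(g, s_list, y_list):
    # Apply the implicit limited-memory inverse-Hessian approximation H to the
    # gradient g via the two-loop recursion and return the direction -H g.
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q = q - alpha * y
    # initial scaling H0 = gamma * I built from the most recent pair
    s, y = s_list[-1], y_list[-1]
    r = (s.dot(y) / y.dot(y)) * q
    # second loop: oldest pair to newest
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * y.dot(r)
        r = r + (alpha - beta) * s
    return -r

The two loops cost O(mn) operations, which is the source of the dimensionality
reduction mentioned in the abstract: no n-by-n matrix is ever formed.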
-
Date: Tuesday, February 13, 10:30am
Speaker:
Jonathan Cross
Place: Padelford C-401
Title: SOSOS - Survey of Sums of Squares
Abstract: Sums of squares methods provide several ways to solve hard
polynomial optimization problems. I will discuss how
sums of squares conditions may be written as semidefinite programming
problems, and then survey some current results and how they lead to
relaxations for solving polynomial optimization problems.
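As a concrete instance of the reduction to semidefinite programming, the sketch
below (assuming the cvxpy package is available; it is an illustration, not material
from the talk) certifies that p(x) = x^4 - 2x^2 + 1 is a sum of squares by searching
for a positive semidefinite Gram matrix Q with p(x) = z(x)^T Q z(x), z(x) = (1, x, x^2).

# Minimal sketch: SOS membership for p(x) = x^4 - 2x^2 + 1 as an SDP
# feasibility problem over the Gram matrix Q.
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1,                   # coefficient of 1
    2 * Q[0, 1] == 0,               # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -2,    # coefficient of x^2
    2 * Q[1, 2] == 0,               # coefficient of x^3
    Q[2, 2] == 1,                   # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal" means a PSD Gram matrix exists, so p is SOS
print(Q.value)

Here the feasible Q corresponding to p(x) = (x^2 - 1)^2 is one certificate.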
-
Date: Tuesday, February 20, 10:30am
Speaker:
Aleksandr Aravkin
Place: Padelford C-401
Title: Kalman Filter, Kalman Smoother, and Iterated Kalman Smoother
Abstract:
The Kalman filter is an estimation algorithm for hidden Markov models with
linear-Gaussian structure, and is very useful in real-time state estimation.
The Kalman smoother builds on the Kalman filter and, in the affine case, yields
the MLE of the state sequence. I will present a decomposition of the affine
Kalman smoother as a sequence of least squares problems, and discuss the
implications of this approach for the non-affine case.
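To illustrate the least-squares viewpoint, the NumPy sketch below (a generic
illustration, not the speaker's decomposition) assembles the affine smoothing
problem for x_{k+1} = G x_k + w_k, z_k = H x_k + v_k with Gaussian noise as one
stacked linear least-squares problem in the full state sequence and solves it directly.

import numpy as np

def affine_smoother_ls(z, G, H, Q, R, mu0, P0):
    # MLE/MAP state sequence for the affine Gaussian model
    #   x_{k+1} = G x_k + w_k,  w_k ~ N(0, Q)
    #   z_k     = H x_k + v_k,  v_k ~ N(0, R),   x_0 ~ N(mu0, P0),
    # posed as one linear least-squares problem in (x_0, ..., x_{N-1}).
    N, n, m = len(z), G.shape[0], H.shape[0]
    # choose W so that W.T @ W = inv(Cov); then ||W r||^2 is the Mahalanobis term
    Wp = np.linalg.cholesky(np.linalg.inv(P0)).T
    Wq = np.linalg.cholesky(np.linalg.inv(Q)).T
    Wr = np.linalg.cholesky(np.linalg.inv(R)).T
    A = np.zeros((n + N * m + (N - 1) * n, N * n))
    b = np.zeros(A.shape[0])
    r = 0
    A[r:r + n, 0:n] = Wp                       # prior on the initial state
    b[r:r + n] = Wp @ mu0
    r += n
    for k in range(N):                         # measurement residuals z_k - H x_k
        A[r:r + m, k * n:(k + 1) * n] = Wr @ H
        b[r:r + m] = Wr @ z[k]
        r += m
    for k in range(N - 1):                     # process residuals x_{k+1} - G x_k
        A[r:r + n, k * n:(k + 1) * n] = -Wq @ G
        A[r:r + n, (k + 1) * n:(k + 2) * n] = Wq
        r += n
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x.reshape(N, n)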
-
Date: Tuesday, February 27, 10:30am
Speaker:
Qiuying Lin
Place: Padelford C-401
Title: Survey of solution recovery by L1 minimization
Abstract: Solution recovery arises in compressed sensing and error
correction problems. In the compressed sensing problem, we wish to recover x*
from y = Ax*, where A is an m by n matrix with m much less than n. In the
error correction problem,
we wish to recover f* from z = Bf* + e, where B is an n by m matrix and e is an
unknown error vector. These two problems are dual. Since
y = Ax* is an under-determined system, it is generally impossible to recover
x* exactly. But under certain conditions on A and the "sufficient"
sparsity of x*, L1 minimization can be applied to recover
x*. In this talk I will describe the conditions on A and
x* under which x* can be exactly recovered, and give some examples.
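A small illustration of the compressed sensing setting (assuming cvxpy; these are
not the examples from the talk): draw a random m-by-n matrix A with m much less
than n and a sparse x*, then recover x* by solving the basis pursuit problem
min ||x||_1 subject to Ax = y.

# Minimal sketch: sparse recovery from y = A x* by L1 minimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                      # m << n, x* has k nonzeros
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))

With a Gaussian A of this size and x* this sparse, the conditions discussed in the
talk typically hold and the recovery error is essentially zero.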
-
Date: Tuesday, March 6, 10:30am
Speaker:
Sangwoon Yun
Place: Padelford C-401
Title:
A Coordinate Gradient Descent method for Linearly
Constrained Smooth Optimization and Support Vector Machines Training
Abstract: Support vector machine (SVM) training may be posed as a
large quadratic program (QP) with bound constraints and a single linear
equality constraint.
We propose a (block) coordinate gradient descent method for solving this
problem and, more generally, linearly constrained smooth optimization.
Our method is closely related to decomposition methods currently popular
for SVM training.
We establish global convergence and, under a local error bound
assumption (which is satisfied by the SVM QP), linear rate of convergence
for our method when the coordinate block is chosen by a
Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the
SVM QP with n variables, this rule can be implemented
in O(n) operations using Rockafellar's notion of conformal realization.
Thus, for SVM training, our method requires only O(n) operations per
iteration and, in contrast to existing decomposition methods,
achieves linear convergence without additional assumptions. We report our
numerical experience with the method on some large
SVM QPs arising from two-class data classification.
Our experience suggests that the method can be efficient for SVM training
with nonlinear kernels.
This work is a collaboration with Paul Tseng.
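To convey the flavor of coordinate updates on the SVM dual QP, min_a 1/2 a^T Q a - e^T a
subject to y^T a = 0 and 0 <= a <= C (with Q_kl = y_k y_l K(x_k, x_l)), here is a toy
Python sketch using a two-coordinate working set chosen by a simple most-violating-pair
heuristic. It is only an illustration; it is not the Gauss-Southwell-type rule or the
conformal-realization implementation described in the abstract.

import numpy as np

def toy_pair_descent(Q, y, C, tol=1e-6, max_iter=10000):
    # Two-coordinate descent for min 0.5 a^T Q a - sum(a)
    # s.t. y^T a = 0, 0 <= a_k <= C, with y_k in {-1, +1}.
    # Moving along d = y_i e_i - y_j e_j keeps y^T a fixed, so each step is a
    # clipped exact line search in one dimension.
    n = len(y)
    a = np.zeros(n)
    g = -np.ones(n)                           # gradient Q a - e at a = 0
    for _ in range(max_iter):
        s = y * g
        # coordinates that can still move by +t*y_k (resp. -t*y_k) for t > 0
        can_up = np.where((y > 0) & (a < C) | (y < 0) & (a > 0))[0]
        can_dn = np.where((y > 0) & (a > 0) | (y < 0) & (a < C))[0]
        if len(can_up) == 0 or len(can_dn) == 0:
            break
        i = can_up[np.argmin(s[can_up])]
        j = can_dn[np.argmax(s[can_dn])]
        if s[j] - s[i] < tol:                 # approximate KKT conditions hold
            break
        # exact minimizer along d = y_i e_i - y_j e_j, clipped to the box
        denom = Q[i, i] + Q[j, j] - 2 * y[i] * y[j] * Q[i, j]
        t_max_i = C - a[i] if y[i] > 0 else a[i]
        t_max_j = a[j] if y[j] > 0 else C - a[j]
        t = min((s[j] - s[i]) / max(denom, 1e-12), t_max_i, t_max_j)
        a[i] += t * y[i]
        a[j] -= t * y[j]
        g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a

Each iteration touches only two columns of Q, which is the O(n) per-iteration
cost that decomposition-style methods aim for.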
Mathematics Department
University of Washington