Abstracts
- Lieven Vandenberghe (UCLA)
Decomposition in semidefinite programming
Decomposition in non-polyhedral conic linear optimization is more
involved than for linear programming because the conic inequalities
introduce additional nonlinear coupling between the variables.
However, in many applications the convex cones in a conic linear program
have a partially separable structure, allowing us to describe the cones
in terms of simpler low-dimensional cones.
The best example is sparse semidefinite programming with a chordal
sparsity pattern. Here, the partially separable structure follows
from the clique decomposition results that characterize positive
semidefinite and semidefinite-completable sparse matrices.
In the talk we will discuss a decomposition method that exploits
partially separable structure in conic linear optimization.
The method is based on the Douglas-Rachford splitting method,
combined with a customized interior-point algorithm for solving
the subproblems. The results will be illustrated with sparse
semidefinite programming examples.
Joint work with Martin Andersen (DTU) and Yifan Sun (UCLA).
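
For readers who want the decomposition results in symbols, here is the standard statement of the clique decomposition for chordal patterns (our notation, following the classical results of Grone et al. and Agler et al.; the talk's notation may differ):

```latex
% Chordal sparsity pattern E with maximal cliques \beta_1, ..., \beta_l;
% P_k is the selection matrix picking out rows/columns indexed by \beta_k.
% PSD-completability (Grone et al.): a matrix X with sparsity pattern E
\[
  X \ \text{has a PSD completion} \iff
  X_{\beta_k \beta_k} \succeq 0, \quad k = 1, \dots, l.
\]
% PSD decomposition (Agler et al.): for S with sparsity pattern E,
\[
  S \succeq 0 \iff
  S = \sum_{k=1}^{l} P_k^T S_k P_k \ \text{ for some } S_k \succeq 0.
\]
% The two cones are dual to each other in S^n_E; each is thus described
% by small PSD cones indexed by the cliques, which is the partially
% separable structure the decomposition method exploits.
```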
- Nikhil Devanur (Microsoft Research)
Prior-robust algorithms for online matching
In this talk we consider optimization in the presence of uncertain inputs. In a broad class of algorithmic problems, uncertainty is modeled as the input being drawn from one among a large known universe of distributions. However, the specific distribution is unknown to the algorithm. The goal is to develop a single algorithm that, for every distribution in this universe, performs approximately as well as the optimal algorithm tailored to that specific distribution. Such algorithms are said to be prior-robust. Prior-robust optimization retains the robustness of worst-case analysis while yielding much better guarantees. Apart from this theoretical appeal, prior-robust algorithms have been simple enough to be implemented in real large-scale systems and have been observed to perform well in practice. We show how to design prior-robust algorithms for the problem of online matching and its generalizations. These results are motivated by, and have found applications in, online advertising.
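
One natural way to formalize the prior-robust guarantee described above (our notation; the talk's precise definition may differ):

```latex
% For a known universe \mathcal{D} of input distributions, an algorithm
% ALG is \alpha-prior-robust if
\[
  \mathbb{E}_{I \sim D}\big[\mathrm{ALG}(I)\big]
  \;\ge\; \alpha \cdot \mathbb{E}_{I \sim D}\big[\mathrm{OPT}_D(I)\big]
  \qquad \text{for every } D \in \mathcal{D},
\]
% where OPT_D is the optimal algorithm tailored to distribution D.
% A single ALG must achieve this simultaneously for all D, without
% knowing which distribution generates the input.
```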
- Richard Tapia (Rice University)
Inverse, Shifted Inverse, and Rayleigh Quotient Iteration as Newton's Method
Normalized inverse iteration, shifted inverse iteration, and Rayleigh quotient iteration (RQI) are well-known algorithms for computing an eigenvector of a symmetric matrix. In this study we demonstrate that each of these algorithms can be viewed as a standard form of Newton's method from the nonlinear programming literature, followed by a normalization. This provides an explanation for the good behavior of RQI despite the need to solve systems with nearly singular coefficient matrices; the singularity is removable. Our equivalence result also leads us naturally to a new proof that Rayleigh quotient iteration is cubically convergent with constant at worst 1. An interesting part of our study is the explanation of why, viewed as normalized Newton's method, inverse and shifted inverse iteration are only linearly rather than quadratically convergent, while RQI is cubically rather than merely quadratically convergent.
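
For reference, a minimal sketch of classical Rayleigh quotient iteration, the algorithm being reinterpreted here (a textbook form, not the speaker's Newton reformulation):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    """Textbook RQI for a symmetric matrix A: cubically convergent
    near a simple eigenpair. Returns (eigenvalue, eigenvector)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        mu = x @ A @ x                    # Rayleigh quotient
        try:
            # (A - mu*I) becomes nearly singular as mu converges;
            # per the talk, this apparent singularity is removable.
            y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                         # exactly singular: mu is an eigenvalue
        x_new = y / np.linalg.norm(y)     # the normalization step
        if np.linalg.norm(x_new - np.sign(x_new @ x) * x) < tol:
            x = x_new
            break
        x = x_new
    return x @ A @ x, x
```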
- Zelda Zabinsky (University of Washington)
Adaptive Random Search for Global Optimization with an Interacting Particle Algorithm
Many engineers designing a complex system would like to optimize its performance and perform trade-off studies to better understand the impact of decisions. These complex systems are often modeled with functions that are non-linear, non-convex, multi-modal, discontinuous, and available only through computer programs, and they may involve both continuous and integer variables. In this talk, I will summarize some theoretical results regarding the performance of random search algorithms, and discuss a new meta-control methodology that adaptively guides an interacting-particle algorithm with a filtering technique. Numerical results will be presented demonstrating how the meta-control methodology dynamically heats and cools a temperature parameter, based on observed behavior of the algorithm, to achieve desired performance characteristics (e.g., quality of the final outcome, algorithm running time). An application in engineering design of composite structures for aircraft fuselages, such as the new Boeing 787 composite aircraft, will be mentioned.
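
The abstract does not spell out the meta-control rule; as a rough illustration of how a temperature parameter can steer a random search, here is a generic annealing-style sketch (purely illustrative, not the speaker's algorithm; the function names and the fixed cooling schedule are our assumptions):

```python
import math
import random

def adaptive_random_search(f, sample, T0=1.0, n_iter=1000):
    """Generic annealing-flavored random search sketch.
    f: objective to minimize; sample(): draws a random candidate.
    The temperature T controls how readily worse points are accepted;
    a meta-controller could instead heat/cool T based on observed
    progress, which is the idea the talk develops rigorously."""
    x = sample()
    fx = f(x)
    best, fbest = x, fx
    T = T0
    for k in range(1, n_iter + 1):
        y = sample()
        fy = f(y)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(fy - fx) / T).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
        T = T0 / math.log(k + 1)   # simple fixed cooling schedule
    return best, fbest

# Usage: adaptive_random_search(lambda x: (x - 2) ** 2,
#                               lambda: random.uniform(-10, 10))
```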
- Aleksandr Aravkin (IBM T.J. Watson Research Center)
Piecewise linear quadratic and quadratic support functions in regularized regression, machine learning, system identification, and ESPECIALLY Kalman smoothing
We discuss the class of piecewise linear quadratic (PLQ)
penalties. Well-known examples include the L2, L1, Huber, Vapnik, hinge loss,
elastic net, and many others. These functions play a crucial role in a variety of
applications, including machine learning, robust statistical inference, sparsity
promotion, and inverse problems such as Kalman smoothing.
We build on a dual representation of this class of penalties, and use it to characterize conditions necessary to interpret
these functions as negative logs of true probability densities. We then present a more general class of so-called quadratic support (QS) functions that shares this statistical interpretation.
In the second part of the talk, we discuss a general solver for the PLQ class. The dual representation of these penalties allows a simple calculus, which we exploit to build an overall objective from simple parts. The resulting algorithm can handle a number of fairly general problems, while still efficiently exploiting structure. Moreover, simple constraints are easily incorporated. We present several numerical examples from different areas, and end with a discussion of an application of these ideas to Kalman smoothing.
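
To make the dual representation concrete, here is the sup form typically used for PLQ/QS penalties, with familiar examples recovered by simple parameter choices (our paraphrase of the literature; the talk's notation may differ):

```latex
% Dual (conjugate) representation of a QS penalty:
\[
  \rho(y) \;=\; \sup_{u \in U}
    \Big\{ u^T (b + B y) - \tfrac{1}{2}\, u^T M u \Big\},
\]
% with M \succeq 0 and U a polyhedral set (the PLQ case) or a more
% general convex set (the QS case). Scalar examples:
%   L2:    U = \mathbb{R},        M = 1, b = 0, B = 1  gives  y^2/2;
%   L1:    U = [-1, 1],           M = 0, b = 0, B = 1  gives  |y|;
%   Huber: U = [-\kappa, \kappa], M = 1, b = 0, B = 1  gives the
%          Huber loss: y^2/2 for |y| \le \kappa, else \kappa|y| - \kappa^2/2.
```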
- Terry Rockafellar (University of Washington)
Risk and Regret in Stochastic Optimization
In many potential applications of optimization, the quantities one would
like to minimize or keep below some threshold are random variables which
the decision parameters can only influence probabilistically, rather than
force to have specific values. The theory of risk, based heavily on
convex analysis, provides ways of converting such expressions into
numerical functions of the decision variables so that standard
optimization methodology may be applied. An important feature in the
conversion, however, is the introduction of auxiliary decision variables
which correspond to subproblems that quantify risk by minimizing
something called regret. Regret is a kind of mirror image of relative
utility in decision analysis, in that it captures an overall attitude toward
a mixture of future losses, and perhaps gains.
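
The auxiliary-variable construction can be stated compactly; the following is the standard risk-from-regret formula and its best-known instance (conditional value-at-risk), as in Rockafellar's published work on the subject:

```latex
% A measure of risk R is obtained from a measure of regret V by
% minimizing over an auxiliary scalar C:
\[
  \mathcal{R}(X) \;=\; \min_{C \in \mathbb{R}}
    \big\{ C + \mathcal{V}(X - C) \big\}.
\]
% Canonical example: with V(Y) = (1/(1-\alpha)) E[\max(Y, 0)], this
% yields conditional value-at-risk (Rockafellar-Uryasev):
\[
  \mathrm{CVaR}_\alpha(X) \;=\; \min_{C \in \mathbb{R}}
  \Big\{ C + \tfrac{1}{1-\alpha}\,
         \mathbb{E}\big[\max(X - C,\, 0)\big] \Big\},
\]
% where the minimizing C is the \alpha-quantile (value-at-risk) of X.
```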
- Ben Taskar (University of Washington)
Inference and Optimization in Determinantal Point Processes
Determinantal point processes (DPPs) arise in random matrix theory and
quantum physics as distributions of random variables with
negative correlations. Among their many remarkable properties, they
offer tractable algorithms for exact inference, including sampling
and computing marginal and conditional distributions. DPPs are a
natural model for subset selection problems where diversity is preferred.
For example, they can be used to select diverse sets of sentences to
form document summaries, or to return relevant but varied text and image
search results, or to detect non-overlapping multiple object
trajectories in video.
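
For concreteness, the standard L-ensemble form of a DPP and its marginal kernel (standard facts from the DPP literature, e.g. the Kulesza-Taskar survey):

```latex
% L-ensemble DPP over ground set \{1, ..., n\} with PSD kernel L:
\[
  \Pr(Y = S) \;=\; \frac{\det(L_S)}{\det(L + I)},
\]
% where L_S is the principal submatrix indexed by S. Marginals come
% from the companion kernel K = L (L + I)^{-1}:
\[
  \Pr(S \subseteq Y) \;=\; \det(K_S),
\]
% which is what makes exact sampling and marginal/conditional
% computations tractable.
```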
I'll present our recent work on a novel factorization and dual representation of
DPPs that enables efficient marginal inference for exponentially sized
structured sets. However, finding the most likely configuration (maximum
a posteriori, or MAP), which is often required in practice, is NP-hard, so we
must resort to approximate inference. This optimization problem, which also
arises in experimental design, involves finding the largest principal minor of a
positive semidefinite matrix. Because the objective is log-submodular,
greedy algorithms have been used in the past with some empirical success;
however, these methods only give approximation guarantees in the special
case of monotone objectives, which correspond to a restricted class of DPPs.
We propose a new algorithm for approximating the MAP problem based on continuous
techniques for submodular function maximization. Our method involves a
novel continuous relaxation of the log-probability function, which
has restricted convexity properties and can be evaluated and differentiated
exactly and efficiently. We obtain a practical algorithm with a
1/4-approximation
guarantee for a more general class of non-monotone DPPs; our algorithm also
extends to MAP inference under solvable polytope constraints, making it possible
to optimize this objective over matchings and other combinatorial objects.
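
As a baseline for the MAP problem just described, here is the simple greedy heuristic the abstract refers to (a generic sketch with assumed details; the talk's continuous-relaxation algorithm, which carries the 1/4 guarantee, is different):

```python
import numpy as np

def greedy_dpp_map(L):
    """Greedy heuristic for DPP MAP: approximately maximize
    log det(L_S) over subsets S. Repeatedly add the item with the
    largest gain in log det; stop when no item improves the
    objective. A generic baseline, not the algorithm from the talk."""
    n = L.shape[0]
    S = []
    current = 0.0                     # log det of the empty minor is 0
    while True:
        best_gain, best_i = 0.0, None
        for i in range(n):
            if i in S:
                continue
            idx = S + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign <= 0:             # submatrix not positive definite
                continue
            gain = logdet - current
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:            # no strictly improving item remains
            return S
        S.append(best_i)
        current += best_gain
```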