Monday, August 28 |
07:00 - 08:45 |
Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
08:45 - 09:00 |
Introduction and Welcome by BIRS Staff ↓ A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions. (TCPL 201) |
09:00 - 09:15 |
David Silvester: Opening Remarks from Organisers (TCPL 201) |
09:15 - 10:00 |
Petros Drineas: Randomized linear algebra for interior point methods ↓ Linear programming is a central problem in computer science and applied mathematics with numerous applications across a wide range of domains, including machine learning and data science. Interior point methods (IPMs) are a common approach to solving linear programs with strong theoretical guarantees and solid empirical performance. The time complexity of these methods is dominated by the cost of solving a linear system of equations at each iteration. In common applications of linear programming, particularly in data science and scientific computing, the size of this linear system can become prohibitively large, requiring the use of iterative solvers which provide an approximate solution to the linear system. Approximately solving the linear system at each iteration of an IPM invalidates common analyses of IPMs and the theoretical guarantees they provide. In this talk we will discuss how randomized linear algebra can be used to design and analyze theoretically and practically efficient IPMs when using approximate linear solvers. (TCPL 201) |
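To make the idea concrete, here is a minimal sketch (not the speaker's actual algorithm; all sizes and the sketching scheme are illustrative assumptions) of one way randomized linear algebra can precondition the normal equations A D Aᵀ x = b that arise at each IPM iteration: a Gaussian sketch of A D^{1/2} yields an approximate factorization whose inverse serves as a preconditioner for an inexact CG solve.

```python
# Minimal sketch: randomized preconditioning of IPM normal equations.
# Hypothetical sizes and a Gaussian sketch; illustrative only.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
m, n, s = 50, 400, 120             # constraints, variables, sketch size (assumed)
A = rng.standard_normal((m, n))
d = rng.uniform(0.1, 10.0, n)      # stand-in for the IPM diagonal scaling

AD = A * np.sqrt(d)                                   # A D^{1/2}
W = AD @ rng.standard_normal((n, s)) / np.sqrt(s)     # Gaussian sketch: W W^T ~ A D A^T
U, sig, _ = np.linalg.svd(W, full_matrices=False)

def apply_prec(r):
    # Approximate (A D A^T)^{-1} r via the sketched factorization U diag(sig^2) U^T.
    r = np.ravel(r)
    return U @ ((U.T @ r) / sig**2)

M = LinearOperator((m, m), matvec=apply_prec)
b = rng.standard_normal(m)
x, info = cg(AD @ AD.T, b, M=M)    # inexact solve of the normal equations
print("CG converged:", info == 0)
```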
10:00 - 10:30 |
Coffee Break (TCPL Foyer) |
10:30 - 11:00 |
Chen Greif: Numerical solution of double saddle-point systems ↓ Double saddle-point systems have drawn increasing attention in the past few years, due to the importance of associated applications and the challenge of developing efficient numerical solvers. In this talk we describe some of their numerical properties and draw some distinctions between this family and the better-established class of classical 2-by-2 block-structured saddle-point systems. We derive eigenvalue bounds, expressed in terms of extremal eigenvalues and singular values of block sub-matrices. The analysis includes bounds for preconditioned matrices based on block diagonal preconditioners using Schur complements, and it is shown that in this case the eigenvalues are clustered within a few intervals bounded away from zero. Analysis for approximations of Schur complements is included. (TCPL 201) |
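As a concrete illustration of the structure discussed above (an assumed standard 3-by-3 block form with random test blocks, not the specific systems from the talk), the following sketch assembles a double saddle-point matrix, builds a block diagonal preconditioner from the two nested Schur complements, and inspects the resulting eigenvalue clustering:

```python
# Minimal sketch: double saddle-point matrix and Schur-complement
# block diagonal preconditioner. Block sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 40, 20, 10
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # SPD leading block
B = rng.standard_normal((m, n))                               # full row rank (a.s.)
C = rng.standard_normal((p, m))                               # full row rank (a.s.)

Z1, Z2, Z3 = np.zeros((m, m)), np.zeros((p, p)), np.zeros((n, p))
K = np.block([[A, B.T, Z3], [B, Z1, C.T], [Z3.T, C, Z2]])

# Nested Schur complements define the block diagonal preconditioner.
S1 = B @ np.linalg.solve(A, B.T)
S2 = C @ np.linalg.solve(S1, C.T)
P = np.block([[A, np.zeros((n, m)), Z3],
              [np.zeros((m, n)), S1, np.zeros((m, p))],
              [Z3.T, np.zeros((p, m)), S2]])

evals = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
print(evals[:4], evals[-4:])  # eigenvalues cluster in a few intervals away from 0
```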
11:00 - 11:30 |
Jemima Tabeart: Block alpha-circulant preconditioners for all-at-once diffusion-based correlation operators ↓ I will present practical methods to apply block α-circulant preconditioners approximately to an all-at-once system from ocean data assimilation. We apply the preconditioner via a nested iterative method within an outer Chebyshev semi-iteration. We present theory on the rate of convergence and use this to design an adapted approach which ensures each sub-problem is solved to the same tolerance. A panel of numerical experiments reveals that, for an appropriate choice of α, our new approaches are competitive in terms of outer iterations and matrix-vector products with current choices of preconditioner. (TCPL 201) |
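For readers unfamiliar with α-circulant preconditioners, here is a minimal scalar (non-block) sketch of how such a matrix can be inverted in O(n log n) via an FFT-based diagonalization; the block structure and the data-assimilation system from the talk are not reproduced:

```python
# Minimal sketch: solving an alpha-circulant system via FFT.
# Scalar blocks only; illustrative, not the talk's block method.
import numpy as np

def alpha_circulant_solve(c, b, alpha):
    """Solve C_alpha x = b for the alpha-circulant matrix with first column c,
    using the similarity D C_alpha D^{-1} = circulant(d * c), d_k = alpha^(k/n)."""
    n = len(c)
    d = alpha ** (np.arange(n) / n)   # diagonal scaling that circulant-izes C_alpha
    lam = np.fft.fft(d * c)           # eigenvalues of the scaled circulant
    return (np.fft.ifft(np.fft.fft(d * b) / lam) / d).real

# Check against a directly assembled alpha-circulant matrix.
rng = np.random.default_rng(2)
n, alpha = 8, 0.1
c = rng.standard_normal(n)
C = np.array([[c[i - j] if i >= j else alpha * c[n + i - j] for j in range(n)]
              for i in range(n)])     # wrap-around entries scaled by alpha
x_true = np.arange(1.0, n + 1)
print(np.allclose(alpha_circulant_solve(c, C @ x_true, alpha), x_true))  # True
```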
11:30 - 13:00 |
Lunch ↓ Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
13:00 - 14:00 |
Guided Tour of The Banff Centre ↓ Meet at the PDC front desk for a guided tour of The Banff Centre campus. (PDC Front Desk) |
14:00 - 14:20 |
Group Photo ↓ Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL Foyer) |
14:20 - 15:05 |
Madeleine Udell: Low rank approximation for faster optimization ↓ Low rank structure is pervasive in real-world datasets. This talk shows how to accelerate the solution of fundamental computational problems, including eigenvalue decomposition, linear system solves, composite convex optimization, and stochastic optimization (including deep learning), by exploiting this low rank structure. We present a simple method based on randomized numerical linear algebra for efficiently computing approximate top eigendecompositions, which can be used to replace large matrices (such as Hessians and constraint matrices) with low rank surrogates that are faster to apply and invert. The resulting solvers for linear systems (NystromPCG), composite convex optimization (NysADMM), and deep learning (SketchySGD) enjoy strong theoretical and numerical support, outperforming state-of-the-art methods in speed and robustness to hyperparameters. (TCPL 201) |
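The low rank surrogates above are typically built with a randomized Nyström approximation. Here is a minimal, numerically stabilized sketch of that construction for a PSD matrix (rank, shift, and the test matrix are illustrative assumptions, not the tuned methods from the talk); such a factorization gives a cheap approximate inverse that can precondition CG, which is the idea behind Nyström-based solvers of this kind.

```python
# Minimal sketch: randomized Nystrom approximation of a PSD matrix.
# Rank and stability shift follow a standard recipe; illustrative only.
import numpy as np

def nystrom_approx(A, rank, rng):
    """Return U, lam with A ~ U diag(lam) U^T for PSD A."""
    n = A.shape[0]
    Omega = np.linalg.qr(rng.standard_normal((n, rank)))[0]     # orthonormal test matrix
    Y = A @ Omega
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)   # stability shift
    Y_nu = Y + nu * Omega
    L = np.linalg.cholesky(Omega.T @ Y_nu)                      # Omega^T A Omega + nu I
    B = np.linalg.solve(L, Y_nu.T).T                            # B = Y_nu L^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(s**2 - nu, 0.0)                        # remove the shift

rng = np.random.default_rng(3)
n = 300
G = rng.standard_normal((n, n))
A = G @ G.T / n + 1e-3 * np.eye(n)           # PSD test matrix
U, lam = nystrom_approx(A, rank=50, rng=rng)
print(np.linalg.norm(A - (U * lam) @ U.T))   # error comparable to the tail eigenvalues
```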
15:00 - 15:30 |
Coffee Break (TCPL Foyer) |
15:30 - 16:15 |
Hans De Sterck: Asymptotic convergence speed of windowed Anderson acceleration: an overview of results and open problems ↓ Anderson acceleration with window size m is a nonlinear convergence acceleration mechanism for fixed-point iterative methods that is widely used in scientific computing, optimization and machine learning. For many applications AA(m) dramatically improves the convergence speed, both when iteration counts are small and asymptotically for large iteration counts. Nevertheless, there are as yet no known results that bound or quantify the improvement in asymptotic convergence speed provided by windowed AA(m). In this talk I will give an overview of what is known about the asymptotic convergence speed of windowed AA(m) and highlight some recent results and open problems. (TCPL 201) |
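For reference, here is a minimal implementation of windowed AA(m) in its standard unconstrained least-squares form (the fixed-point map below is a toy contraction, not an application from the talk):

```python
# Minimal sketch: windowed Anderson acceleration AA(m) for x = g(x).
import numpy as np

def anderson(g, x0, m=3, iters=50):
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [x], [g(x)]                       # histories of iterates and g-values
    for _ in range(iters):
        gx = G[-1]
        if len(X) == 1:
            x = gx                           # plain Picard step to start
        else:
            F = np.column_stack([gi - xi for gi, xi in zip(G, X)])  # residuals
            dF = np.diff(F, axis=1)          # residual differences over the window
            dG = np.diff(np.column_stack(G), axis=1)
            gamma = np.linalg.lstsq(dF, F[:, -1], rcond=None)[0]
            x = gx - dG @ gamma              # Anderson mixing step
        X.append(x); G.append(g(x))
        X, G = X[-(m + 1):], G[-(m + 1):]    # keep a window of size m
    return x

g = lambda x: np.cos(x)                      # toy contraction, fixed point ~0.739
x = anderson(g, np.array([1.0]), m=2)
print(x, np.cos(x) - x)                      # residual near machine precision
```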
16:15 - 17:00 |
Howard Elman: A new method of model order reduction for parametrized constrained partial differential equations ↓ Model order reduction techniques are effective for solving parametrized models involving PDEs, including models of incompressible flow, where the constraint is the incompressibility constraint, and in optimal control, where the constraints themselves are PDEs. However, reduced models may fail to be inf-sup stable. We present a new approach for generating reduced bases in this scenario, using a so-called stacked reduced basis, which avoids some of the difficulties associated with inf-sup stability. We show that this approach is effective, although in some circumstances it also requires stabilization, which can be done using either classical penalization methods or Petrov-Galerkin methods. Computational tests are presented for models based on PDE-constrained optimization and incompressible flow. This is joint work with Kayla D. Davie (University of Maryland, College Park). (TCPL 201) |
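As background, the following sketch shows a generic offline/online reduced-basis (POD) projection for a parametrized linear system; it illustrates only the plain Galerkin reduction, not the stacked reduced basis or the stabilization techniques from the talk, and the model is an assumed parameter-affine toy problem.

```python
# Minimal sketch: generic POD reduced-basis projection. Illustrative only;
# not the stacked reduced basis or stabilization from the talk.
import numpy as np

rng = np.random.default_rng(4)
n, n_snap, r = 200, 30, 10
K0 = np.diag(np.arange(1.0, n + 1))                 # assumed parameter-affine model
K1 = rng.standard_normal((n, n)); K1 = 0.05 * (K1 + K1.T)
b = rng.standard_normal(n)

# Offline: collect full-order solutions at training parameters, compress by SVD.
mus = np.linspace(0.0, 1.0, n_snap)
snaps = np.column_stack([np.linalg.solve(K0 + mu * K1, b) for mu in mus])
V = np.linalg.svd(snaps, full_matrices=False)[0][:, :r]   # reduced basis

# Online: Galerkin-project and solve the small r x r system at a new parameter.
mu = 0.37
x_r = V @ np.linalg.solve(V.T @ (K0 + mu * K1) @ V, V.T @ b)
x = np.linalg.solve(K0 + mu * K1, b)
print(np.linalg.norm(x - x_r) / np.linalg.norm(x))        # small reduction error
```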
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building. (Vistas Dining Room) |