Schedule for: 19w5231 - Models and Algorithms for Sequential Decision Problems Under Uncertainty

Beginning on Sunday, January 13 and ending on Friday, January 18, 2019

All times in Banff, Alberta time, MST (UTC-7).

Sunday, January 13
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
Monday, January 14
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
08:45 - 09:00 Introduction and Welcome by BIRS Staff
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 09:40 David Shmoys: Crowdsourcing Operations: Incentivizing Bike Angels in America
Bike-sharing systems are changing the urban transportation landscape; we have been working with New York City Bike Share (aka Citi Bike), using analytics and optimization to improve the management of their system. Huge rush-hour usage throws the system out of balance, and in this talk we focus on methods used to mitigate the imbalances that develop. In particular, we will focus on the use of incentives; we have helped guide the development of Bike Angels, which enlists users to make “rebalancing rides”, and we will describe tradeoffs among a number of policies for determining when and where rides should be incentivized, all of which are based on a user dissatisfaction function model of the performance of the system. Motivate, the operator of Citi Bike, has thus far exported this incentive program to Ford GoBike in San Francisco, Bluebikes in Boston, and Capital Bikeshare in DC. The more recent incentive results are joint work with Hangil Chung and Daniel Freund, but the basis for much of the research presented is also joint with Shane Henderson and Eoin O’Mahony.
(TCPL 201)
09:40 - 10:20 Varun Gupta: Online Metric Matching with Recourse
An Online Metric Matching instance is given by a bipartite graph of n clients and servers, one or both sides of which are revealed online. Each client must be matched to a free server as soon as it is revealed, with the goal of minimizing the total cost of the final matching, where the costs of (client, server) pairs are assumed to obey the triangle inequality. We will review the rich history of this problem, and present our results on the recourse version of the problem, where the online algorithm is allowed to rematch some clients online. For general metrics, we present two algorithms: a deterministic log n-competitive online algorithm which uses amortized log n rematches per client when all servers are known offline, and a randomized log n-competitive algorithm which uses log n rematches per client when servers appear online as well. For line metrics, we present a constant-competitive online algorithm which uses O(1) rematches per client. Thus moderate recourse allows us to bypass the best known upper bound for metric matching. Joint work with Ravishankar Krishnaswamy, Janardhan Kulkarni, and Sandeep Sai.
(TCPL 201)
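For intuition about the recourse measure in the talk above, the following toy sketch (our own illustration in Python, not the talk's algorithms) compares greedy online matching on a line metric against the optimal offline matching, which on a line pairs sorted clients with sorted servers; the recourse spent is the number of clients whose assignment changes.

```python
# Toy illustration only: greedy online matching on a line metric vs. the
# optimal offline matching, counting how many clients would be rematched.

def greedy_online(clients, servers):
    """Match each arriving client to the nearest currently free server."""
    free = sorted(servers)
    matching = {}
    for c in clients:
        s = min(free, key=lambda t: abs(t - c))
        free.remove(s)
        matching[c] = s
    return matching

def offline_optimal(clients, servers):
    """On a line, matching sorted clients to sorted servers is optimal."""
    return dict(zip(sorted(clients), sorted(servers)))

def cost(matching):
    return sum(abs(c - s) for c, s in matching.items())

clients = [0.5, 0.0]  # arrival order matters for the online algorithm
servers = [0.4, 0.6]
online = greedy_online(clients, servers)
offline = offline_optimal(clients, servers)
rematches = sum(online[c] != offline[c] for c in clients)
print(cost(online), cost(offline), rematches)  # ~0.7 vs 0.5, with 2 rematches
```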
10:20 - 11:00 Coffee Break (TCPL Foyer)
11:00 - 11:40 Peyman Mohajerin Esfahani: Data-driven Inverse Optimization with Imperfect Information
In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent’s objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent’s true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent’s actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used.
(TCPL 201)
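Schematically, and in our own notation (the talk's exact setup may differ), the distributionally robust inverse optimization problem above reads
\[
\min_{\theta \in \Theta} \; \sup_{\mathbb{Q} \in \mathbb{B}_\varepsilon(\widehat{\mathbb{P}}_N)} \; \mathbb{E}_{\mathbb{Q}}\big[\ell_\theta(s, x)\big],
\qquad
\ell_\theta(s, x) = F(s, x, \theta) - \min_{y \in \mathbb{X}(s)} F(s, y, \theta),
\]
where $\widehat{\mathbb{P}}_N$ is the empirical distribution of the observed signal-response pairs $(s, x)$, $\mathbb{B}_\varepsilon(\cdot)$ an ambiguity ball around it, $F(s, \cdot, \theta)$ a candidate objective with feasible set $\mathbb{X}(s)$, and $\ell_\theta$ the suboptimality loss mentioned at the end of the abstract.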
11:40 - 12:20 Fatma Kilinc-Karzan: Online First-Order Framework for Robust Convex Optimization
Robust optimization (RO) has emerged as one of the leading paradigms to efficiently model parameter uncertainty. The recent connections between RO and problems in the statistics and machine learning domains demand solving RO problems at an ever larger scale. However, the traditional approaches for solving RO formulations, based on building and solving robust counterparts or on iterative approaches utilizing nominal feasibility oracles, can be prohibitively expensive and thus significantly hinder the scalability of the RO paradigm. We present a general and flexible iterative framework to solve robust convex optimization problems that is built on a fully online first-order paradigm. In contrast to the existing literature, a key distinguishing feature of our approach is that it requires access only to cheap first-order oracles for the original problem, which are considerably cheaper than pessimization or nominal feasibility oracles, while maintaining the same convergence rates. This, in particular, makes our approach much more scalable and hence preferable in large-scale applications. In the case of strongly convex functions, we also establish a new improved iteration complexity, addressing an open question in the literature. Motivated by a robust portfolio optimization problem, we demonstrate the performance of our approach on robust quadratic programs with ellipsoidal uncertainty.
(TCPL 201)
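As a minimal sketch of the online first-order idea (our own toy problem and step sizes, not the talk's framework), one can attack $\min_x \max_{u \in \mathcal{U}} f(x,u)$ by interleaving cheap projected gradient steps for the decision and for the adversary, rather than calling a pessimization or feasibility oracle each iteration:

```python
import numpy as np

# Toy saddle-point loop: min over x in [0,1]^n of max over |u_i| <= rho
# of (c + u)^T x, solved with simultaneous projected gradient steps.
rng = np.random.default_rng(0)
n, rho, eta, T = 5, 0.5, 0.1, 2000
c = rng.normal(size=n)
x, u = np.full(n, 0.5), np.zeros(n)
x_avg = np.zeros(n)
for _ in range(T):
    x = np.clip(x - eta * (c + u), 0.0, 1.0)  # descent step on the decision
    u = np.clip(u + eta * x, -rho, rho)       # ascent step for the adversary
    x_avg += x / T
# robust objective of the averaged iterate (the inner max is analytic here)
print(x_avg.round(2), c @ x_avg + rho * np.abs(x_avg).sum())
```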
12:20 - 14:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:00 - 13:40 Guided Tour of The Banff Centre
Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
13:40 - 14:00 Group Photo
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
(TCPL Foyer)
14:00 - 14:40 Yehua Wei: Analyzing Policies in Dynamic Robust Optimization
Determining optimal policies in dynamic robust optimization is often challenging due to the curse of dimensionality. As a result, many researchers have proposed restricting attention to classes of simpler decision rules (for instance, static or affine), which are computationally tractable and often deliver excellent performance. In this work, we propose a general theory for analyzing the effectiveness of policies in a large class of dynamic robust optimization problems. Through a transformation of the objective function, we provide a geometric characterization for the effectiveness of a policy class, which is helpful in either establishing the policy’s optimality or bounding its sub-optimality gap. Based on this geometric characterization, we recover and generalize several different results on static and affine policies from the literature. Furthermore, our theory establishes interesting connections with the theory of discrete convexity, global concave envelopes, and the integrality gap of mixed integer linear programs.
(TCPL 201)
14:40 - 15:20 Omid Nohadani: Robust Periodic-Affine Policies for Multiperiod Dynamic Problems
We introduce a new class of adaptive policies, called periodic-affine policies, that allow a decision maker to optimally manage and control large-scale newsvendor networks in the presence of uncertain demand without distributional assumptions. These policies are data-driven, model many features of the demand such as correlation, and remain robust to parameter misspecification. We present a model that can be generalized to multi-product settings and extended to multi-period problems. This is accomplished by modeling the uncertain demand via sets. In this way, it offers a natural framework to study competing policies such as base-stock, affine, and approximative approaches with respect to their profit, sensitivity to parameters and assumptions, and computational scalability. We show that the periodic-affine policies are sustainable, i.e., time consistent, because they guarantee optimality both within subperiods and over the entire planning horizon. This approach is tractable and free of distributional assumptions, and hence suited for real-world applications. We provide efficient algorithms to obtain the optimal periodic-affine policies and demonstrate their advantages on sales data from one of India's largest pharmacy retailers.
(TCPL 201)
15:20 - 16:10 Coffee Break (TCPL Foyer)
16:10 - 16:50 Wolfram Wiesemann: Data-Driven Chance Constrained Programs over Wasserstein Balls
We provide an exact deterministic reformulation for data-driven chance constrained programs over Wasserstein balls. For individual chance constraints as well as joint chance constraints with right-hand side uncertainty, our reformulation amounts to a mixed-integer conic program. In the special case of a Wasserstein ball with the $1$-norm or the $\infty$-norm, the cone is the nonnegative orthant, and the chance constrained program can be reformulated as a mixed-integer linear program. Using our reformulation, we show that two popular approximation schemes based on the conditional-value-at-risk and the Bonferroni inequality can perform poorly in practice and that these two schemes are generally incomparable with each other.
(TCPL 201)
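In schematic notation of our own, the object of study is the distributionally robust chance constraint
\[
\inf_{\mathbb{Q} \in \mathbb{B}_\rho(\widehat{\mathbb{P}}_N)} \mathbb{Q}\big[\tilde{\xi} \in \mathcal{S}(x)\big] \;\ge\; 1 - \epsilon,
\qquad
\mathbb{B}_\rho(\widehat{\mathbb{P}}_N) = \big\{\mathbb{Q} : d_W(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \rho\big\},
\]
where $\widehat{\mathbb{P}}_N$ is the empirical distribution of the data, $d_W$ the Wasserstein distance (whose ground norm drives the cone appearing in the reformulation), and $\mathcal{S}(x)$ the safety set of decision $x$.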
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
Tuesday, January 15
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:40 Daniel Kuhn: Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator
We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a p-dimensional Gaussian random vector from n independent samples. The proposed model minimizes the worst case (maximum) of Stein's loss across all normal reference distributions within a prescribed Wasserstein distance from the normal distribution characterized by the sample mean and the sample covariance matrix. We prove that this estimation problem is equivalent to a semidefinite program that is tractable in theory but beyond the reach of general purpose solvers for practically relevant problem dimensions p. In the absence of any prior structural information, the estimation problem has an analytical solution that is naturally interpreted as a nonlinear shrinkage estimator. Besides being invertible and well-conditioned even for p>n, the new shrinkage estimator is rotation-equivariant and preserves the order of the eigenvalues of the sample covariance matrix. These desirable properties are not imposed ad hoc but emerge naturally from the underlying distributionally robust optimization model. Finally, we develop a sequential quadratic approximation algorithm for efficiently solving the general estimation problem subject to conditional independence constraints typically encountered in Gaussian graphical models.
(TCPL 201)
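For orientation: one standard form of Stein's loss for a precision-matrix estimate $X$ against a covariance matrix $\Sigma$ is $\operatorname{tr}(\Sigma X) - \log\det(\Sigma X) - p$, so the estimator described above can be sketched (in our notation, which may differ from the talk's) as
\[
\widehat{X} \in \arg\min_{X \succ 0} \; \sup_{\Sigma \,:\, W\left(\mathcal{N}(\widehat{\mu}, \Sigma),\, \mathcal{N}(\widehat{\mu}, \widehat{\Sigma})\right) \le \rho} \; \operatorname{tr}(\Sigma X) - \log\det(\Sigma X) - p,
\]
where $\widehat{\mu}$ and $\widehat{\Sigma}$ are the sample mean and covariance and $W$ the Wasserstein distance; as $\rho \to 0$ the model collapses to classical maximum likelihood, $\widehat{X} = \widehat{\Sigma}^{-1}$ (when $n > p$).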
09:40 - 10:20 Karthik Natarajan: Exploiting Partial Correlations in Distributionally Robust Optimization
In this work, we identify partial correlation information structures that allow for simpler reformulations in evaluating the maximum expected value of mixed integer linear programs with random objective coefficients. To this end, assuming only the knowledge of the mean and the covariance matrix entries restricted to block-diagonal patterns, we develop a reduced semidefinite programming formulation. In some cases, this lends itself to efficient representations that result in polynomial-time solvable instances, most notably for the distributionally robust appointment scheduling problem with random job durations as well as for computing tight bounds in PERT networks and linear assignment problems. To the best of our knowledge, this is the first example of a distributionally robust optimization formulation for appointment scheduling that permits a tight polynomial-time solvable semidefinite programming reformulation which explicitly captures partially known correlation information between uncertain processing times of the jobs to be scheduled. This is joint work with Divya Padmanabhan and Karthyek Murthy at SUTD.
(TCPL 201)
10:20 - 11:00 Coffee Break (TCPL Foyer)
11:00 - 11:40 Omar El Housni: On the Optimality of Affine Policies for Budgeted Uncertainty Sets
We study the performance of affine policies for the two-stage adjustable robust optimization problem under budget of uncertainty sets. This important class of uncertainty sets provides the flexibility to adjust the level of conservatism in terms of probabilistic bounds on constraint violations. The two-stage adjustable robust optimization problem is hard to approximate within a factor better than $\Omega( \frac{\log n}{\log \log n})$ for budget of uncertainty sets, where $n$ is the number of decision variables. We show that, surprisingly, affine policies provide the optimal approximation for this class of uncertainty sets, matching the hardness of approximation and thereby further confirming the power of affine policies. We also present strong theoretical guarantees for affine policies when the uncertainty set is given by an intersection of budget constraints. Furthermore, our analysis gives a significantly faster algorithm to compute near-optimal affine policies. This is joint work with Vineet Goyal.
(TCPL 201)
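Schematically (our notation), the two-stage adjustable robust problem and its affine restriction read
\[
\min_{x \ge 0} \; c^\top x + \max_{h \in \mathcal{U}} \min_{y(h) \ge 0} d^\top y(h)
\quad \text{s.t.} \quad Ax + By(h) \ge h \;\; \forall h \in \mathcal{U},
\qquad
\mathcal{U} = \Big\{ h \in [0,1]^n : \sum_i h_i \le \Gamma \Big\},
\]
where $\mathcal{U}$ is a (normalized) budget of uncertainty set and the affine policy class restricts the recourse to $y(h) = Ph + q$; the talk's result is that this restriction achieves an approximation factor matching the $\Omega(\log n / \log \log n)$ hardness bound, so nothing better is possible in general.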
11:40 - 12:20 Marco Campi: Preliminary results on two-stage scenario problems
We consider sequential decision problems where a first action is made prior to seeing any uncertainty (the “here-and-now” optimization variable) and then, after uncertainty arrives, one makes a second corrective action that can depend on the observed uncertainty value (the “wait-and-see” optimization variable), which is followed by the arrival of yet another uncertain input that closes the process. Given a sample of observations of the uncertainties, we assume one makes a cautious selection of the “here-and-now” and “wait-and-see” actions that guards against the worst-case scenarios, and we ask for the risk that these actions will meet a shortfall in a new out-of-sample case, so that the performance worsens compared to the expectation constructed from the observations. We present sample complexity results for this setup based on new compression schemes and discuss various extensions and the difficulties therein.
(TCPL 201)
12:20 - 14:00 Lunch (Vistas Dining Room)
14:00 - 14:40 Mohit Singh: Resource Allocation Problems in Cloud Computing
Cloud computing services are growing at an exponential rate, and with them the cost of providing these services. For cost effectiveness, providers need to rely on multi-tenancy and resource sharing among tenants, since statically reserving resources for a tenant is prohibitively expensive. A major consequence of resource sharing is that the performance of one tenant can be adversely affected by the resource demands of other co-located tenants. I will talk about the problem of effectively sharing resources such as memory and CPU in multi-tenant settings. A service level agreement (SLA) gives a framework that defines and enforces accountability of the service provider to the tenant even when the resource is not statically reserved on behalf of the tenant. We model the resource allocation problem as an online optimization problem that incorporates a rich variety of SLAs representing the diversity of clients’ requirements for resources as well as quality of service. We then design easy-to-implement algorithms that build on classical caching algorithms as well as multiplicative weight update methods. The algorithms work under multi-tenant scenarios involving SLAs and overbooking. We use the framework of competitive analysis to analyze the performance of the algorithms. We will also describe results based on experiments that demonstrate the effectiveness of our solution in practice.
(TCPL 201)
14:40 - 15:20 Vahideh Manshadi: Online Resource Allocation under Partially Predictable Demand
For online resource allocation problems, we propose a new demand arrival model where the sequence of arrivals contains both an adversarial component and a stochastic one. Our model requires no demand forecasting; however, due to the presence of the stochastic component, we can partially predict future demand as the sequence of arrivals unfolds. Under the proposed model, we study the problem of the online allocation of a single resource to two types of customers, and design online algorithms that outperform existing ones. Our algorithms are adjustable to the relative size of the stochastic component, and our analysis reveals that as the portion of the stochastic component grows, the loss due to making online decisions decreases. This highlights the value of (even partial) predictability in online resource allocation. We impose no conditions on how the resource capacity scales with the maximum number of customers. However, we show that using an adaptive algorithm—which makes online decisions based on observed data—is particularly beneficial when capacity scales linearly with the number of customers. Our work serves as a first step in bridging the long-standing gap between the two well-studied approaches to the design and analysis of online algorithms based on (1) adversarial models and (2) stochastic ones. Using novel algorithm design, we demonstrate that even if the arrival sequence contains an adversarial component, we can take advantage of the limited information that the data reveals to improve allocation decisions. We also study the classical secretary problem under our proposed arrival model, and we show that randomizing over multiple stopping rules may increase the probability of success. (Based on joint work with Dawsen Hwang (Google) and Patrick Jaillet (MIT))
(TCPL 201)
15:20 - 16:00 Coffee Break (TCPL Foyer)
16:10 - 16:50 Guzin Bayraksan: Effective Scenarios in Multistage Distributionally Robust Optimization
Traditional multistage stochastic optimization assumes the underlying probability distribution is known. However, in practice, the probability distribution is often not known or cannot be accurately approximated. One way to address such distributional ambiguity is to use distributionally robust optimization (DRO), which minimizes the worst-case expected cost with respect to a set of probability distributions. In this talk, we study multistage convex DRO with a finite set of scenarios. We illustrate that not all but only some scenarios might have an effect on the optimal value, and we formally define this notion for multistage DRO. In particular, we investigate problems where the distributional ambiguity is modeled stagewise by the total variation distance on conditional probability distributions. We show the resulting problem is a multistage risk-averse optimization with nested coherent risk measures formed by a convex combination of the worst-case and conditional value-at-risk. We conduct perturbation analysis with respect to a collection of scenarios being excluded and propose easy-to-check sufficient conditions for effectiveness. We explore effectiveness of scenario paths as well as scenarios conditional on the history of the stochastic process. Computational experiments illustrate the results on finance and energy problems.
(TCPL 201)
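Schematically, in our notation, the stagewise total variation ambiguity yields nested conditional risk mappings of the form
\[
\rho_t[Z] \;=\; \lambda \, \operatorname{ess\,sup}\left[Z \mid \mathcal{F}_t\right] \;+\; (1 - \lambda)\, \mathrm{CVaR}_{\alpha}\left[Z \mid \mathcal{F}_t\right],
\]
composed across stages as $\rho_1[\rho_2[\cdots]]$, with the weight $\lambda$ and the CVaR level $\alpha$ determined by the total variation radius; a scenario is then “effective” if excluding it changes the optimal value.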
16:50 - 17:30 Huan Xu: Secondary Market to Mitigate Demand Uncertainty
In this talk we consider a centralized resource manager whose aim is to regulate multiple participants, each of whom purchases resources so as to satisfy an uncertain demand. A secondary market mechanism is introduced to mitigate the conservativeness toward demand uncertainty. We analyze the clearing price of the secondary market and show that a unique Nash equilibrium exists in the many-player limit and that it takes the form of a threshold strategy. Moreover, we show that both the social welfare loss and the deviation of the total ordering from the optimum are O(\sqrt{n}), where n is the number of players. This is significantly better than the case with no secondary market, where the social welfare loss and the deviation of the total order are both O(n).
(TCPL 201)
18:00 - 19:30 Dinner (Vistas Dining Room)
Wednesday, January 16
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:40 Nathan Kallus: Learning to Decide from Observational Data Under Unobserved Confounding
Since observational data, no matter how rich, will have some unobserved confounding, methods for individualized decision making that assume unconfoundedness can lead to harm and are unusable in practice. We instead propose a new minimax-regret formulation, prove that the resulting policy is guaranteed to improve on existing standards of care, and develop optimization algorithms to find this policy. A personalized medicine application demonstrates the power of our approach and the danger of previous ones.
(TCPL 201)
09:40 - 10:20 Brad Sturt: A Data-Driven Approach for Multi-Stage Linear Optimization
Multi-stage linear optimization is an integral modeling paradigm in supply chain, energy planning, and finance. However, these problems are computationally demanding, and identifying the correlation structure of the uncertainty across stages presents significant challenges. In this talk, we propose a novel data-driven framework for addressing multi-stage linear optimization based on a simple robustification of the data. For this framework, we report several results: 1) We present a general approximation algorithm for finding near-optimal solutions to the proposed framework via techniques from robust optimization. 2) We establish nonparametric convergence guarantees for the proposed framework which are, to the best of our knowledge, the first of their kind for data-driven multi-stage linear optimization with uncertainty that is arbitrarily correlated across stages. 3) We discuss differences and limitations of alternative multi-stage distributionally robust optimization approaches using Wasserstein ambiguity sets. Finally, we demonstrate the practical tractability and near-optimality of the proposed approach on several data-driven multi-stage inventory management problems. This is joint work with Dimitris Bertsimas and Shimrit Shtern.
(TCPL 201)
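One way to read the “simple robustification of the data” (a schematic in our own notation): replace each observed trajectory by a small ball around it and average the worst cases,
\[
\min_{x \in \mathcal{X}} \; \frac{1}{N} \sum_{i=1}^{N} \sup_{\zeta \in \mathcal{U}_N^i} c(x, \zeta),
\qquad
\mathcal{U}_N^i = \big\{ \zeta : \|\zeta - \widehat{\zeta}^i\| \le \epsilon_N \big\},
\]
where $\widehat{\zeta}^1, \dots, \widehat{\zeta}^N$ are the observed uncertainty trajectories and $\epsilon_N$ a robustness radius that shrinks as the sample grows.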
10:20 - 11:00 Shimrit Shtern: Data-Driven Two-Stage Linear Optimization: Feasibility and Approximation Guarantees
An important and challenging class of two-stage linear optimization problems comprises those without relatively complete recourse, wherein there exist first-stage decisions and realizations of the uncertainty for which there are no feasible second-stage decisions. Previous data-driven methods for these problems, such as the sample average approximation (SAA), are asymptotically optimal but have prohibitively poor performance with respect to out-of-sample feasibility. In this talk, we present a data-driven method for two-stage linear optimization problems without relatively complete recourse which combines (i) strong out-of-sample feasibility guarantees and (ii) general asymptotic optimality. Our method employs a simple robustification of the data combined with a multi-policy approximation. A key contribution of this work is the development of novel geometric insights, which we use to show that the proposed approximation is asymptotically optimal. We demonstrate the benefit of using this method in practice through numerical experiments. This is joint work with Dimitris Bertsimas and Brad Sturt.
(TCPL 201)
11:30 - 13:30 Lunch (Vistas Dining Room)
13:30 - 17:30 Free Afternoon (Banff National Park)
17:30 - 19:30 Dinner (Vistas Dining Room)
Thursday, January 17
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:40 Andrew Lim: Approximating the Gittins index in a Bayesian bandit
While the optimal policy for the Gittins-Whittle formulation of the multi-armed bandit is fully characterized in terms of the Gittins index, the Gittins index is notoriously difficult to compute for high dimensional Bayesian bandits. We develop a method for approximating the dynamics of the posterior using the Bayesian Central Limit Theorem and show how it can be used to approximate the Gittins index for high-dimensional Bayesian bandits. Comparisons of the Gittins-Whittle framework to Thompson sampling and the Upper Confidence Bound approach will be discussed, and applications of our approximation to Bayesian bandits where the rewards are mixtures will also be presented.
(TCPL 201)
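For reference, the Gittins index of an arm in posterior state $x$ admits the classical stopping-time characterization
\[
G(x) \;=\; \sup_{\tau \ge 1} \; \frac{\mathbb{E}\left[\sum_{t=0}^{\tau-1} \beta^{t} r(x_t) \,\middle|\, x_0 = x\right]}{\mathbb{E}\left[\sum_{t=0}^{\tau-1} \beta^{t} \,\middle|\, x_0 = x\right]},
\]
where $\tau$ ranges over stopping times, $\beta \in (0,1)$ is the discount factor, and $x_t$ is the posterior state after $t$ pulls; the difficulty the talk addresses is that the posterior dynamics $x_t$ are intractable in high dimensions, which is where the Bayesian central limit approximation enters.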
09:40 - 10:20 Shipra Agrawal: Bandits with delayed, anonymous, aggregated feedback
In a typical interaction between an online recommendation system and users, there are delays in users' response to the recommendations. More importantly, when sale(s) or click(s) are observed, it is often difficult to pinpoint which previous recommendation(s) produced them. To capture this difficulty, we study a variant of the stochastic bandit problem, which we call "bandits with delayed, anonymous, aggregated feedback". In this problem, in each round, when the player pulls an arm, a reward is generated but not immediately observed. Instead, at the end of the round the player observes the aggregate of all previously generated rewards that happen to arrive in the given round due to stochastic delays. We provide an algorithm that matches the worst case regret of the non-anonymous problem exactly when the delays are bounded, and up to logarithmic factors with an additive variance term when delays are unbounded.
(TCPL 201)
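The feedback model is easy to mimic in simulation. Below is a minimal sketch of the environment only (our own toy; the arm means, delay distribution, and uniform arm choice are assumptions, and a real learner would choose arms adaptively):

```python
import random

# Rewards generated at pull time arrive after a random delay; the learner
# observes only the SUM of rewards landing in the current round, unlabeled.
random.seed(1)
means = [0.3, 0.7]                 # Bernoulli arms
T, max_delay = 20, 5
inbox = [0.0] * (T + max_delay)    # inbox[t] = aggregate reward arriving at t

for t in range(T):
    arm = random.randrange(len(means))
    reward = float(random.random() < means[arm])
    delay = random.randrange(max_delay + 1)
    inbox[t + delay] += reward     # reward surfaces 'delay' rounds later
    observed = inbox[t]            # delayed, anonymous, aggregated feedback
    print(f"round {t}: pulled arm {arm}, observed aggregate {observed}")
```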
10:20 - 11:00 Coffee Break (TCPL Foyer)
11:00 - 11:40 Simone Garatti: Dataset size tuning in scenario optimization
Joint work with Marco C. Campi. Scenario optimization is a data-driven methodology where one makes a decision that is consistent with the available observations while also minimizing a given cost function. The “risk” is the probability that the scenario solution is not consistent with a new, out-of-sample situation. Recent studies have unveiled a profound link between the risk and the complexity of the solution, defined as the cardinality of the smallest subset of data from which the solution can be recovered. In this talk, we leverage these results to introduce a new decision scheme where the size of the dataset is sequentially tuned during the optimization procedure. This scheme attains a given level of risk with significantly fewer data points, thus providing large savings in the resources required for reliable data-driven optimization.
(TCPL 201)
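A deliberately tiny scenario program (our own toy, for intuition about the risk-complexity link only): with scalar scenarios $\delta_i$ and the constraint $x \ge \delta_i$ for all $i$, the scenario solution is the largest sample, its complexity is 1, and for uniform data the risk concentrates around $1/(N+1)$.

```python
import random

# 1-D scenario optimization: minimize x subject to x >= delta_i for all
# observed scenarios; the solution is pinned down by a single scenario.
random.seed(0)
N = 200
data = [random.random() for _ in range(N)]
x_star = max(data)                        # scenario solution
support = [d for d in data if d == x_star]
risk = 1.0 - x_star                       # P(new uniform sample violates x >= delta)
print(len(support), round(x_star, 4), round(risk, 4))
```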
11:40 - 12:20 Vishal Gupta: Data-Pooling in Stochastic Optimization
Managing large-scale systems often involves simultaneously solving thousands of potentially unrelated stochastic optimization problems, each with limited data. Optimization intuition suggests decoupling these unrelated problems and solving them separately. Statistical intuition, however, suggests that combining the problems via some form of shrinkage may outperform decoupling, but does not offer a concrete non-parametric approach for doing so when solving complex, constrained optimization problems such as vehicle-routing, economic lot-sizing or facility location. We propose a novel data-pooling algorithm that combines both perspectives. Our approach does not require strong distributional assumptions and applies to constrained, possibly non-convex optimization problems. We show that, unlike the classical statistical setting, the potential benefits of data-pooling, in general, depend strongly on the problem structure, and, in some cases, data-pooling offers no benefit. We prove that as the number of problems grows large, our method learns if pooling is necessary and the optimal amount to pool, even if the expected amount of data per problem is fixed and bounded. We further show that pooling offers significant benefits over decoupling when there are many problems, each of which has a small amount of relevant data. We demonstrate the practical benefits of data-pooling using real data from a chain of retail drug stores in the context of inventory management.
(TCPL 201)
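A toy numerical illustration of the pooling intuition (our own sketch: the shrinkage weight, the plug-in rule, and the Poisson demand model are assumptions, not the talk's algorithm):

```python
import numpy as np

# Many small problems, each with only n=3 demand samples: shrink each
# problem's sample mean toward the grand mean before plugging it in.
rng = np.random.default_rng(7)
K, n = 1000, 3
true_means = rng.uniform(5, 15, size=K)
data = rng.poisson(true_means[:, None], size=(K, n))

sample_means = data.mean(axis=1)
grand_mean = sample_means.mean()
alpha = 0.5                                # amount to pool (tunable)
pooled = (1 - alpha) * sample_means + alpha * grand_mean

mse_decoupled = np.mean((sample_means - true_means) ** 2)
mse_pooled = np.mean((pooled - true_means) ** 2)
print(mse_decoupled, mse_pooled)           # pooling often wins with scarce data
```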
12:20 - 14:00 Lunch (Vistas Dining Room)
14:00 - 14:40 Erick Delage: Worst-case regret minimization in a two-stage linear program
In this talk, we explain how two-stage worst-case regret minimization problems can be reformulated as two-stage robust optimization models. This allows us to employ both approximate and exact solution methods available in the recent literature to efficiently identify good solutions for these hard problems. In particular, our numerical experiments indicate that affine decision rules are particularly effective at identifying good conservative solutions for three different types of decision problems: a multi-item newsvendor problem, a lot-sizing problem, and a production-transportation problem.
(TCPL 201)
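Schematically (our notation), the worst-case regret problem is
\[
\min_{x \in \mathcal{X}} \; \max_{\xi \in \Xi} \; \Big\{ c(x, \xi) - \min_{x' \in \mathcal{X}} c(x', \xi) \Big\},
\]
where $c(x, \xi)$ denotes the optimal two-stage cost of first-stage decision $x$ under realization $\xi$; subtracting the clairvoyant benchmark $\min_{x'} c(x', \xi)$ is what distinguishes regret from plain worst-case minimization, and the reformulation discussed in the talk folds this benchmark into a standard two-stage robust model.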
14:40 - 15:20 Phebe Vayanos: Two-Stage Robust Optimization with Decision-Dependent Information Discovery
Robust optimization is a popular paradigm for modeling and solving two-stage decision-making problems affected by uncertainty. Most approaches assume that the uncertain parameters can be observed for free and that the sequence in which they are revealed is independent of the decision-maker's actions. Yet, these assumptions fail to hold in many real-world applications where the time of information discovery is decision-dependent and the uncertain parameters only become observable after an often costly investment. This is the case for example in clinical trial planning, in offshore oilfield exploration, in active preference elicitation, and in R&D project portfolio optimization. To fill this gap in the literature, we consider two-stage robust optimization problems in which (part of) the first stage variables decide on the uncertain parameters that will be observed between the first and the second decision stages. The information available in the second stage is thus decision-dependent and can be discovered (at least in part) by making strategic exploratory investments in the first stage. We propose a novel min-max-min-max formulation of the problem. We prove correctness of this formulation and leverage this new model to provide a solution method inspired from the K-adaptability approximation approach, whereby K candidate strategies are chosen here-and-now and the best of these strategies is selected after the portion of the uncertain parameters that was chosen to be observed is revealed. We reformulate the problem as an MILP solvable with off-the-shelf solvers and demonstrate its effectiveness on several stylized problems. This is joint work with Angelos Georghiou and Han Yu.
(TCPL 201)
15:20 - 16:10 Coffee Break (TCPL Foyer)
16:10 - 16:50 Rajan Udwani: Multi-objective Maximization of Monotone Submodular Functions with Cardinality Constraint
We consider the problem of multi-objective maximization of monotone submodular functions subject to a cardinality constraint, often formulated as $\max_{|A|=k}\min_{i\in\{1,\dots,m\}}f_i(A)$. While it is widely known that greedy methods work well for a single objective, the problem becomes much harder with multiple objectives. In fact, Krause et al.\ (2008) showed that when the number of objectives $m$ grows with the cardinality $k$, i.e., $m=\Omega(k)$, the problem is inapproximable (unless $P=NP$). We focus on the case where the number of objectives is super-constant yet much smaller than the cardinality of the chosen set. We propose the first constant factor algorithms for this regime, including one with the best achievable asymptotic guarantee and also a more practical nearly linear time algorithm. Experiments on synthetic data show that a heuristic based on our more practical and fast algorithm outperforms existing heuristics.
(TCPL 201)
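A naive baseline for $\max_{|A|\le k}\min_i f_i(A)$ (our own sketch for intuition, not the constant-factor algorithms from the talk) greedily adds the element that maximizes the current minimum objective:

```python
# Greedy-on-the-minimum heuristic for max_{|A| <= k} min_i f_i(A),
# with monotone submodular objectives f_i.
def greedy_max_min(objectives, ground_set, k):
    A = set()
    for _ in range(k):
        best, best_val = None, None
        for e in ground_set - A:
            val = min(f(A | {e}) for f in objectives)  # min objective if we add e
            if best_val is None or val > best_val:
                best, best_val = e, val
        A.add(best)
    return A

# Example: coverage functions (monotone submodular) over two target sets.
targets = [{1, 2, 3}, {3, 4, 5}]
cover = [lambda A, T=T: len(set().union(*A) & T) for T in targets]
items = {frozenset(s) for s in [{1, 2}, {4, 5}, {3}]}
print(greedy_max_min(cover, items, 2))  # picks {3} first, then breaks the tie
```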
16:50 - 17:30 Daniel Zhuoyu Long: Preservation of Supermodularity in Parametric Optimization: Necessary and Sufficient Conditions on Constraint Structures
This work presents a systematic study of the preservation of supermodularity under parametric optimization, which allows us to derive complementarity among parameters and monotone structural properties of optimal policies in many operations models. We introduce the new concepts of mostly-lattice and additive mostly-lattice, which significantly generalize the commonly imposed lattice condition, and use them to establish necessary and sufficient conditions on the feasible set so that supermodularity is preserved under various assumptions on the objective functions. We further identify some classes of polyhedral sets which satisfy these concepts. Finally, we illustrate how our results can be used on a two-stage optimization problem.
(TCPL 201)
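For reference, $f$ is supermodular when $f(x \vee y) + f(x \wedge y) \ge f(x) + f(y)$ for all $x, y$, with $\vee$ and $\wedge$ the componentwise maximum and minimum; the preservation question asks (schematically, in our notation) for which constraint structures the value function
\[
g(\theta) \;=\; \max_{x \in C(\theta)} f(x, \theta)
\]
remains supermodular in the parameter $\theta$ whenever $f$ is.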
17:30 - 19:30 Dinner (Vistas Dining Room)
Friday, January 18
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:40 Julien Grand Clement: Robust Markov Decision Processes: Beyond Rectangularity
Markov decision processes (MDPs) are a common approach to model dynamic optimization problems in many applications. However, in most real-world problems, the model parameters estimated from noisy observations are uncertain, and the optimal policy for the nominal parameter values might be highly sensitive to even small perturbations in the parameters, leading to significantly suboptimal outcomes. We consider a robust approach where the uncertainty in probability transitions is modeled as an adversarial selection from an uncertainty set. Most prior work considers the case where the uncertainty in parameters related to different states is unrelated, and the adversary is allowed to select the worst possible realization for each state independently of the others, potentially leading to highly conservative solutions. On the other hand, the case of general uncertainty sets is known to be intractable. We consider a factor model for probability transitions where the transition probability is a linear function of a factor matrix that is uncertain and belongs to a factor matrix uncertainty set. This is a significantly less conservative approach to modeling uncertainty in probability transitions, while still allowing us to model dependence between probability transitions across different states. We show that under a certain rectangularity assumption, we can efficiently compute the optimal robust policy under the factor matrix uncertainty model. We also present a computational study to demonstrate the usefulness of our approach.
(TCPL 201)
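Schematically (our notation), the factor model couples transitions across states by writing
\[
P(s' \mid s, a) \;=\; \sum_{i=1}^{r} u^{i}_{s,a} \, w^{i}_{s'},
\]
where the loadings $u_{s,a}$ are fixed and the factor matrix $W = (w^1, \dots, w^r)$ ranges over an uncertainty set $\mathcal{W}$, so the adversary can no longer pick each row $p_{s,a}$ independently; the rectangularity assumption mentioned in the abstract is imposed on the factors rather than state by state.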
11:15 - 12:00 Checkout by Noon
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
11:30 - 13:30 Lunch (Vistas Dining Room)