Invited Speakers
Plenary Speakers
-
Michèle Breton, HEC Montréal and GERAD, Canada
- Recursive Methods for Pricing Financial Derivatives
-
Bernhard von Stengel, London School of Economics and Political Science, UK
- Game Theory Explorer - Software for the Applied Game Theorist
-
Yinyu Ye, Stanford University, USA
- Recent Progress on Linear Programming and the Simplex Method
Evaluating financial derivatives routinely requires numerical procedures. The talk will present the basic concepts of recursive modeling, show how it underlies most numerical methods for pricing derivatives, and discuss the relative efficiency of existing procedures. Various applications will be presented to illustrate the flexibility and applicability of recursive approaches.
We describe work in progress on a software tool for creating and analyzing noncooperative games, part of the GAMBIT project. The Game Theory Explorer (GTE) is a browser-based graphical user interface. It allows the user to create game trees with imperfect information, or games in strategic form. The Nash equilibria of the game are computed and displayed "at a mouseclick". The talk will give a demonstration of the software, outline its underlying algorithms, and discuss the managerial challenges of developing GTE as open-source software with the help of student volunteers supported by the Google Summer of Code.
Linear programming (LP), together with the simplex method, has remained a core topic in Operations Research, Computer Science, and Mathematics since 1947. Thanks to relentless research effort, a linear program can today be solved one million times faster than it could be thirty years ago. Businesses, large and small, now use LP models to control manufacturing inventories, price commodities, design civil and communication networks, and plan investments. LP has even become a popular subject in undergraduate, graduate, and MBA curricula, advancing human knowledge and promoting science education. The aim of the talk is to describe several recent advances in LP and the simplex method.
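The geometric fact underlying the simplex method is that an optimum of an LP, when one exists, is attained at a vertex of the feasible polytope; the simplex method walks from vertex to vertex along improving edges rather than enumerating them all. The following toy sketch (a classic two-variable example, not from the talk) makes the vertex property concrete by brute-force enumeration:

```python
# Maximize 3x + 5y  subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# Every vertex of the feasible region is the intersection of two constraint
# lines, so enumerating constraint pairs visits every candidate optimum.
from itertools import combinations

# Each constraint as (a, b, c) meaning a*x + b*y <= c (nonnegativity included).
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]
cx, cy = 3, 5  # objective coefficients

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:              # parallel lines: no intersection point
        continue
    x = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        val = cx * x + cy * y
        if best is None or val > best[0]:
            best = (val, x, y)

print(best)  # → (36.0, 2.0, 6.0)
```

Enumeration is exponential in general, which is exactly why the simplex method's guided vertex walk (and its modern refinements) matters.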
Tutorial Speakers
-
Constantine Caramanis, University of Texas, USA
- Outlier-Robust Estimation for High Dimensional Statistics
-
David Fuller, University of Waterloo, Canada
- Complementarity Modeling in Energy Markets
-
Patrice Marcotte, Université de Montréal, Canada
- Pricing on Networks: A Bit of Theory and Some Applications
-
Mu Zhu, University of Waterloo, Canada
- Statistical Learning: An Introductory Overview
This tutorial is about a classical problem in a new regime: dealing with missing, noisy, and corrupted data in the high-dimensional setting, where we may have more parameters to estimate than samples available. The high-dimensional regime has been driven largely by application areas such as behavior prediction, collaborative filtering, e-commerce, bioinformatics, and genomics, to name just a few. Common to all of these is that the available data may be noisy, partially erased or missing, or even maliciously corrupted.
The goal of this tutorial is twofold. First, we aim to convey the statistical and algorithmic challenges of robust estimation with high-dimensional data. Next, focusing on the two fundamental problems of regression and covariance estimation, we describe several new advances, outlining the development of convex-optimization-based algorithms as well as more efficient greedy algorithms, thus essentially describing the state of the art in high-dimensional robust statistics for this class of problems.
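The one-dimensional intuition behind outlier-robust estimation can be seen in the contrast between the mean and the median (a toy sketch with made-up data, not material from the tutorial): a single gross outlier drags the mean arbitrarily far, while the median barely moves. The tutorial's subject is the much harder high-dimensional analogue of this phenomenon.

```python
# Mean vs. median under a single gross corruption -- hypothetical toy data.
import statistics

clean = [1.0, 1.2, 0.9, 1.1, 1.05]
corrupted = clean + [100.0]          # one adversarially corrupted sample

mean_est = statistics.mean(corrupted)      # pulled far from the truth
median_est = statistics.median(corrupted)  # barely moves

print(mean_est, median_est)
```

The median tolerates corruption of up to half the sample (breakdown point 1/2); designing estimators with comparable robustness and computational efficiency in high dimensions is the open territory the tutorial surveys.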
Complementarity models are widely used to represent competitive behavior by agents in markets, such as perfectly competitive, Cournot, or Stackelberg leader-follower behavior. The principles that define such behaviors have been known for a long time, but it has recently become practical to formulate and solve large-scale models with considerable realistic detail, including constraints on variables controlled by the market agents. The tutorial will present an overview of the formulation of several types of models, with applications to energy markets, including Mixed Complementarity Problems, Mathematical Programs with Equilibrium Constraints, and Equilibrium Programs with Equilibrium Constraints.
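The equilibrium condition in such models is that each agent's output is a best response to the others', which is exactly what the complementarity (first-order) conditions encode. A minimal sketch, using a textbook Cournot duopoly with illustrative numbers rather than an energy-market model:

```python
# Toy Cournot duopoly: inverse demand P(Q) = a - b*Q, identical marginal cost c.
# Illustrative parameters; the symmetric equilibrium is (a - c) / (3b) = 30.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other: float) -> float:
    """Profit-maximizing output given the rival's output (interior solution)."""
    return max((a - c - b * q_other) / (2 * b), 0.0)

q1 = q2 = 0.0
for _ in range(100):   # best-response iteration; a contraction in this example
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 6), round(q2, 6))  # → 30.0 30.0
```

At the fixed point each first-order condition holds with equality, the complementarity condition for an interior solution; large-scale energy-market models couple many such conditions, plus constraints, into one Mixed Complementarity Problem solved by specialized software rather than naive iteration.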
Pricing, whether to make a profit or to achieve a certain social objective, has a long history in economics, but has only fairly recently caught the attention of operations researchers. In this tutorial, I will focus on the modeling and computational issues related to pricing over transportation networks. These will be considered within the framework of hierarchical game theory, where the authority that sets prices over the arcs or paths of the network anticipates the selfish reaction of users who maximize their own utility functions. Besides the obvious application of optimizing the performance of a system through "optimal" tolls, I will stress the connection with a closely related topic: revenue management.
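The hierarchical structure can be sketched on the smallest possible network (a hypothetical two-arc example, not from the tutorial): the leader sets a toll anticipating that users will simply take the cheaper arc, and maximizes revenue subject to that reaction.

```python
# Toy bilevel toll-setting on a two-arc network -- hypothetical numbers.
# Arc A has base cost 2 plus a toll set by the leader; arc B costs 5, untolled.
demand = 100.0           # fixed number of users
base_a, cost_b = 2.0, 5.0

def revenue(toll: float) -> float:
    # Followers react selfishly: all take arc A iff it is no more expensive.
    # Ties are broken in the leader's favor ("optimistic" Stackelberg convention).
    return demand * toll if base_a + toll <= cost_b else 0.0

tolls = [i * 0.25 for i in range(21)]   # grid search over tolls in [0, 5]
best_toll = max(tolls, key=revenue)

print(best_toll, revenue(best_toll))  # → 3.0 300.0
```

Even this tiny example shows the characteristic discontinuity of the leader's revenue (it drops to zero the moment users defect), which is what makes network pricing problems computationally hard at scale.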
Statistical learning is primarily concerned with the problem of using data to uncover the relationship between a number of inputs and a certain outcome so that we can make predictions. First, I will go over some basic ideas: why are flexible models necessary, and why must we be careful with flexibility? Then, I will discuss some recent ideas: what are ensemble models, and how can we understand them? Finally, I will present some of our own ideas: how can we use ensembles for a different purpose, for instance, to identify important inputs rather than to make predictions?
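The core intuition behind ensemble models mentioned above can be shown with a constructed toy example (not material from the talk): several weak "stump" classifiers that each err on a different point, combined by majority vote, can outperform their typical member because the vote cancels individual errors.

```python
# Majority-vote ensemble of three 1-D threshold ("stump") classifiers.
# Constructed toy data: the true rule is label = 1 iff x > 3.
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]  # (x, true label)

def stump(threshold):
    return lambda x: 1 if x > threshold else 0

# Two of the three stumps are imperfect (thresholds 2 and 4 each misclassify
# one point), yet their errors occur on different points.
stumps = [stump(2), stump(3), stump(4)]

def majority(x):
    votes = sum(s(x) for s in stumps)
    return 1 if votes >= 2 else 0

def accuracy(clf):
    return sum(clf(x) == y for x, y in data) / len(data)

print([accuracy(s) for s in stumps], accuracy(majority))
```

The vote is perfect here because the individual errors are uncorrelated across points; understanding when and why real ensembles (bagging, boosting, random forests) achieve this decorrelation is one of the questions the talk addresses.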