June 16, 2013, 8:30am-12:00pm

Marquis 103, 104, 105

Abstract

Numerous problems in machine learning are inherently discrete. More often than not, these lead to challenging optimization problems. While convexity is an important property when solving continuous optimization problems, submodularity, often viewed as a discrete analog of convexity, is key to solving many discrete problems. Its characterizing property, diminishing marginal returns, appears naturally in a multitude of settings.
While submodularity has long been recognized in combinatorial optimization and game theory, it has seen a recent surge of interest in theoretical computer science and machine learning. This tutorial will introduce the concept of submodularity and its basic properties, and outline recent research directions, such as new approaches to large-scale optimization, learning submodular functions, and sequential decision making. We will discuss recent applications to challenging machine learning problems such as high-order graphical model inference, sparsity, document summarization, active learning and recommendation.
The tutorial will not assume any specific prior knowledge.
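
For concreteness, the diminishing-returns property can be stated precisely (a standard definition, included here for reference rather than taken from the tutorial slides): a set function f over a finite ground set V is submodular if

    f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B)
    \qquad \text{for all } A \subseteq B \subseteq V \text{ and } e \in V \setminus B,

or, equivalently, if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ V.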

Topics

  1. What is Submodularity?
    A short introduction: basic facts, definitions, and an intuition for what we are talking about.
  2. Optimization
    Many machine learning problems are submodular optimization problems in disguise. We summarize the most important state-of-the-art results for optimization with submodular functions, including constrained and unconstrained minimization and maximization; a short greedy sketch follows this list.
  3. Learning
    Can we learn submodular functions? What about related settings, such as online or adaptive optimization?
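
To make the maximization discussion concrete, here is a minimal Python sketch of the classic greedy algorithm of Nemhauser, Wolsey and Fisher (1978), which, for monotone submodular functions under a cardinality constraint, returns a set within a factor 1 - 1/e of optimal. The coverage function and the regions data below are illustrative assumptions, not material from the tutorial.

    def greedy_max(f, ground_set, k):
        """Pick k elements, each time adding the one with the largest marginal gain."""
        selected = set()
        for _ in range(k):
            # The marginal gain f(S + e) - f(S) is the greedy selection criterion.
            best = max(
                (e for e in ground_set if e not in selected),
                key=lambda e: f(selected | {e}) - f(selected),
            )
            selected.add(best)
        return selected

    # Example: coverage, a canonical monotone submodular function.
    regions = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 7}}

    def coverage(S):
        """Size of the union of the regions indexed by S."""
        return len(set().union(*(regions[e] for e in S)))

    print(greedy_max(coverage, regions, k=2))  # picks s3 first, then s1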

Content in detail

  1. What is Submodularity?
    * Definitions of submodularity and examples
    * Closedness properties
    * Submodularity, convexity and concavity (see the Lovász extension note after this outline)
  2. Minimizing submodular functions
    * Techniques for unconstrained minimization, special cases
    * Applications: sparse reconstruction and MAP inference (see the brute-force cut check after this outline)
    * Constrained minimization: recent results, examples and tools
  3. Maximizing submodular functions
    * Techniques for unconstrained maximization, recent results
    * Constrained maximization: tools and examples (network inference, summarization, sensing)
  4. Learning
    * General results for learning submodular functions; example
  5. Learning to optimize
    * Online submodular optimization, application to recommendation
    * Adaptive submodular maximization, application to experimental design and active learning
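
A note on the convexity connection mentioned above (a standard result due to Lovász, stated here for context rather than taken from the slides): any set function f with f(∅) = 0 extends to its Lovász extension on [0,1]^n,

    \hat{f}(x) = \sum_{i=1}^{n} x_{\sigma(i)} \, \bigl( f(S_i) - f(S_{i-1}) \bigr),
    \qquad S_i = \{\sigma(1), \dots, \sigma(i)\}, \quad S_0 = \emptyset,

where the permutation σ sorts the coordinates so that x_{σ(1)} ≥ … ≥ x_{σ(n)}. The function f is submodular if and only if \hat{f} is convex, and the minimum of \hat{f} over [0,1]^n equals the minimum of f over all subsets, which is why unconstrained submodular minimization is solvable in polynomial time.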
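
To accompany the minimization and MAP inference bullet, here is a minimal Python sketch (the toy graph is an illustrative assumption) that brute-force checks the diminishing-returns inequality for a graph cut function, the set function underlying graph-cut-based inference:

    from itertools import chain, combinations

    # Undirected edges of a toy 4-node graph (illustrative choice).
    edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
    nodes = range(4)

    def cut(S):
        """Count edges with exactly one endpoint in S."""
        return sum((u in S) != (v in S) for u, v in edges)

    def subsets(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    # Verify f(A + e) - f(A) >= f(B + e) - f(B) for all A ⊆ B and e ∉ B.
    ok = all(
        cut(set(A) | {e}) - cut(set(A)) >= cut(set(B) | {e}) - cut(set(B))
        for B in subsets(nodes)
        for A in subsets(B)
        for e in nodes
        if e not in B
    )
    print("cut function is submodular:", ok)  # prints True

The same brute-force pattern is a handy sanity check for any candidate submodular function on a small ground set.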

Presenters

Andreas Krause is an assistant professor at ETH Zurich.
Stefanie Jegelka is a postdoctoral researcher at UC Berkeley.

Location and Time

June 16, 2013
Rooms Marquis 103, 104, 105