16th June, 2013 --- 8.30am-12.00pm
Marquis 103, 104, 105
Numerous problems in machine learning are inherently discrete. More often than not, these lead to challenging optimization problems. While convexity is an important property when solving continuous optimization problems, submodularity, often viewed as a discrete analog of convexity, is key to solving many discrete problems. Its characterizing property, diminishing marginal returns, appears naturally in a multitude of settings.
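The diminishing-marginal-returns property can be made concrete with a small sketch (not part of the tutorial itself): a coverage function, a standard textbook example of a submodular function, where the value of a collection of sets is the size of their union. Adding a new set to a larger collection never helps more than adding it to a smaller one.

```python
# Illustrative sketch of diminishing marginal returns with a coverage
# function f(S) = |union of the sets in S|, which is submodular.

def coverage(sets_chosen):
    """f(S): size of the union of the chosen sets."""
    covered = set()
    for s in sets_chosen:
        covered |= s
    return len(covered)

def marginal_gain(new_set, base):
    """f(base + {new_set}) - f(base): the marginal value of new_set."""
    return coverage(base + [new_set]) - coverage(base)

# Toy data (hypothetical, chosen only for illustration).
A = {1, 2, 3}
B = {3, 4}
C = {4, 5, 6}

# Diminishing returns: the gain from adding C to the smaller
# collection [A] is at least the gain from adding it to [A, B].
print(marginal_gain(C, [A]))     # 3 (C contributes 4, 5, 6)
print(marginal_gain(C, [A, B]))  # 2 (4 is already covered by B)
```

This monotone shrinkage of marginal gains is exactly what greedy algorithms exploit to obtain approximation guarantees for submodular maximization.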
While submodularity has long been studied in combinatorial optimization and game theory, it has recently seen a surge of interest in theoretical computer science and machine learning. This tutorial will introduce the concept of submodularity and its basic properties, and outline recent research directions -- such as new approaches to large-scale optimization, learning submodular functions, and sequential decision-making tasks. We will discuss recent applications to challenging machine learning problems such as high-order graphical model inference, sparsity, document summarization, active learning and recommendation.
The tutorial will not assume any specific prior knowledge.
Andreas Krause is an assistant professor at ETH Zurich.
Stefanie Jegelka is a postdoctoral researcher at UC Berkeley.