Research

...

Below is an incomplete summary of a selection of research projects that my group has been working on. A full list of publications is here.


Determinantal Point Processes, Strongly Rayleigh Measures, Diversity and Sampling

Determinantal Point Processes (DPPs) are elegant probabilistic models of diversity: these probability distributions over subsets prefer sets that are diverse, and, conveniently, many inference computations reduce to linear algebra. DPPs belong to a larger class of distributions characterized by strong negative correlations, the Strongly Rayleigh Measures. These measures arise in fields ranging from combinatorics to random matrices, and recently gained attention for breakthroughs in graph algorithms and the Kadison-Singer problem.
In machine learning, they are, for instance, key to modeling repulsion and diversity (from vision to recommender systems), and to compactly approximating data and/or models for faster learning by capturing a large part of the information or variance. We show new results for fast sampling from DPPs and related measures, and applications to matrix approximation and kernel methods. Our results include practical algorithms, theoretical bounds on mixing time, fast lazy evaluation schemes that exploit quadrature, and empirical results that reflect the theory.
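As background, here is a toy sketch (not the algorithms from the papers below): an L-ensemble DPP over n items is defined by a positive semidefinite kernel matrix L and assigns each subset S a probability proportional to det(L_S), the determinant of the submatrix indexed by S. Similar items yield nearly singular submatrices, so diverse subsets receive more probability mass. The snippet also checks the normalization constant det(L + I) by brute force on a tiny example.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))                        # 6 items with 2-d features
L = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))   # RBF kernel

def dpp_prob(S, L):
    """Unnormalized probability of subset S under the L-ensemble: det(L_S)."""
    idx = np.asarray(S, dtype=int)
    return np.linalg.det(L[np.ix_(idx, idx)])

# The normalizer of an L-ensemble is det(L + I); verify by summing over all subsets.
Z = np.linalg.det(L + np.eye(len(L)))
total = sum(dpp_prob(S, L) for k in range(len(L) + 1)
            for S in combinations(range(len(L)), k))
print(np.isclose(Z, total))                        # True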

C. Li, S. Jegelka, S. Sra. Column Subset Selection via Polynomial Time Dual Volume Sampling. 2017
C. Li, S. Sra, S. Jegelka. Fast Mixing Markov Chains for Strongly Rayleigh Measures, DPPs, and Constrained Sampling. NIPS, 2016
C. Li, S. Sra, S. Jegelka. Gaussian quadrature for matrix inverse forms with applications. ICML, 2016
C. Li, S. Jegelka, S. Sra. Fast DPP Sampling for Nyström with Application to Kernel Methods. ICML, 2016
C. Li, S. Jegelka, S. Sra. Efficient Sampling for k-Determinantal Point Processes. AISTATS, 2016.

Broad Institute MIA seminar talks: A primer on DPPs by Chengtao, and Applications of DPPs and Sampling by Stefanie



Efficiently modeling uncertainty, and Bayesian Black-Box Optimization

When important decisions are based on a machine learning method, it can be extremely beneficial to obtain uncertainty estimates along with the prediction. Uncertainties can also help judge where more observations are needed, and are exploited in Bayesian black-box optimization, the optimization of an unknown function via queries. Bayesian Optimization has applications from robotics to parameter tuning to experiment design. Yet, common methods based on Gaussian Processes are computationally very expensive in high dimensions and when a lot of data is needed. We substantially improve the computational and sample complexity especially in high dimensions, without losing performance, by using a different estimate of information, and by learning adaptive distributions over partitions along multiple axes. Theoretically, we provide the first regret bounds for an instance of the popular “entropy search” criterion.
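For orientation, the sketch below shows the generic Bayesian optimization loop that this line of work builds on: fit a Gaussian Process posterior to the queries made so far, then choose the next query by maximizing an acquisition function. It uses a plain GP and a simple UCB acquisition on a grid purely for illustration; it is not the max-value entropy search or ensemble methods of the papers below.

import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(xq, xobs, yobs, noise=1e-6):
    """GP posterior mean and variance at query points xq (prior variance 1)."""
    K = rbf(xobs, xobs) + noise * np.eye(len(xobs))
    Ks = rbf(xq, xobs)
    mu = Ks @ np.linalg.solve(K, yobs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x    # "unknown" black-box function
grid = np.linspace(-1.0, 2.0, 400)
X = np.array([0.0, 1.5])                           # initial queries
y = f(X)

for t in range(10):
    mu, var = gp_posterior(grid, X, y)
    ucb = mu + 2.0 * np.sqrt(var)                  # optimism in the face of uncertainty
    x_next = grid[np.argmax(ucb)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print("best query:", X[np.argmax(y)], "best value:", y.max())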

Z. Wang, C. Gehring, P. Kohli, S. Jegelka. Ensemble Bayesian Optimization. 2017.
Z. Wang, S. Jegelka. Max-value Entropy Search for Efficient Bayesian Optimization. ICML, 2017.
Z. Wang, C. Li, S. Jegelka, P. Kohli. Batched High-dimensional Bayesian Optimization via Structural Kernel Learning. ICML, 2017.
Z. Wang, S. Jegelka, L. P. Kaelbling, T. Lozano-Perez. Focused Model-Learning and Planning for Non-Gaussian Continuous State-Action Systems. ICRA, 2017.
Z. Wang, B. Zhou, S. Jegelka. Optimization as Estimation with Gaussian Processes in Bandit Settings. AISTATS 2016. (code)



Non-convex robust optimization

Nonconvex optimization is gaining importance in ML; here, we study it in connection to robustness. For example, in applications like investment or network optimization, we solve an optimization problem that relies on (network) parameters learned from observations. Since these are known at best within a confidence range, the resulting decision should be robustly good for any parameter in that range. Robust optimization achieves this goal but can lead to hard optimization problems. We show that robust formulations of bidding, budget allocation and bipartite influence problems lead to nonconvex saddle-point problems for which, despite the nonconvexity, we derive an optimization algorithm with a theoretical optimality analysis. Our algorithm finds an optimal solution under certain conditions, and in practice always finds a solution at least very close to optimal. It relies on the theory of continuous submodularity, extended to constraints, and on connections with optimal transport theory.
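In generic form (notation mine, not exactly the papers'), the robust counterpart of such a problem optimizes the worst case over the confidence (uncertainty) set $\mathcal{U}$ of parameters:
\[
\max_{x \in \mathcal{X}} \; \min_{\theta \in \mathcal{U}} \; f(x; \theta),
\]
a saddle-point problem that, for the applications above, is nonconvex but has continuous submodular structure that the analysis can exploit.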

M. Staib, S. Jegelka. Robust Budget Allocation via Continuous Submodular Functions. ICML, 2017.



Geometry and comparing probability distributions

Metrics between probability distributions are of central importance in machine learning, arising in problems from estimation to distributed learning and clustering. In our work, we have studied two promising routes: (1) Hilbert space embeddings of probability distributions, which offer measures for testing independence as well as metrics between distributions; and (2) optimal transport, which takes into account the geometry of the underlying space. In both cases, we have developed new applications and faster algorithms. Hilbert space embeddings allow us to find independent sources within a signal, and to cluster data where we treat each cluster as a distribution and take higher-order moments (e.g., variance and beyond) into account (as opposed to, e.g., k-means). Optimal-transport-based Wasserstein barycenters are a base routine for Bayesian inference: they make it possible to merge posteriors estimated on different data subsets while preserving the shapes of the distributions. Our algorithm offers a much faster, streaming method for merging that can even adapt to slowly shifting distributions (e.g., in sensor fusion). We also show a principled initialization for Wasserstein k-means, with an application to climate data analysis.
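As a small illustration of route (1) (toy code, not our algorithms): distributions embedded in a Hilbert space can be compared via the Maximum Mean Discrepancy (MMD), which distinguishes two samples even when their means agree because it is sensitive to higher-order moments.

import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared MMD with an RBF kernel exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))    # mean 0, std 1
Y = rng.normal(0.0, 3.0, size=(500, 1))    # mean 0, std 3: same mean, different spread
Z = rng.normal(0.0, 1.0, size=(500, 1))    # same distribution as X
print(mmd2(X, Y), ">", mmd2(X, Z))         # the embedding separates X and Y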

M. Staib and S. Jegelka. Wasserstein k-means++ for Cloud Regime Histogram Clustering. Climate Informatics, 2017.
M. Staib, S. Claici, J. Solomon, S. Jegelka. Parallel Streaming Wasserstein Barycenters, 2017.
H. Shen, S. Jegelka and A. Gretton. Fast Kernel-based Independent Component Analysis. IEEE Transactions on Signal Processing 57(9), pp. 3498-3511, 2009.
S. Jegelka, A. Gretton, B. Schoelkopf, B.K. Sriperumbudur and U. von Luxburg. Generalized Clustering via Kernel Embeddings. KI 2009: Advances in Artificial Intelligence, 2009.



Submodularity and submodular edge weights in graphs

Submodular set functions are characterized by an intuitive diminishing marginal costs property. They have long been important in combinatorics and many other areas, and it turns out that they can help model interesting complex interactions in machine learning problems too. Luckily, their structure admits efficient (approximate) optimization algorithms in many settings.
Unfortunately, however, not all set functions are submodular, and I am collecting a growing list of functions that may appear submodular at first glance but are actually not.
I am interested in how we can efficiently use submodularity in machine learning, and what new interesting (combinatorial) models are possible.
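Formally, a set function $f$ on a ground set $V$ is submodular if it has diminishing marginal gains:
\[
f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B)
\qquad \text{for all } A \subseteq B \subseteq V,\; e \in V \setminus B.
\]
Coverage is a canonical example: adding a new sensor (or document, or feature) helps less the more is already covered.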

On faster (approximate/parallel) submodular minimization

It is known that submodular set functions can be minimized in polynomial time, but many algorithms are not very practical for large real data. We address this problem with two very different approaches: (1) We write the minimization of a decomposable submodular function as a "Best Approximation" problem and apply operator splitting methods. The result is an easy, parallelizable and efficient algorithm for decomposable submodular functions that needs no parameter tuning. (2) Graph cuts have been popular tools for representing set functions (and thereby offer an efficient tool for their optimization). Unfortunately, this is not possible for all submodular functions. However, every submodular function can be represented as a cooperative graph cut, and this insight leads to practical approximate algorithms.
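As background for approach (1) (standard facts, not the paper's exact formulation): by the minimum-norm-point theorem, minimizing a normalized submodular function $F$ reduces to a projection onto its base polytope $B(F)$,
\[
y^* = \arg\min_{y \in B(F)} \|y\|_2^2, \qquad S^* = \{e \in V : y^*_e < 0\} \in \arg\min_{S \subseteq V} F(S),
\]
and for a decomposable $F = \sum_i F_i$ the base polytope is the Minkowski sum of the $B(F_i)$, which is what makes best-approximation formulations with cheap per-component projections (and operator splitting such as Douglas-Rachford) attractive.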

S. Jegelka, F. Bach and S. Sra. Reflection methods for user-friendly submodular optimization. NIPS 2013.
R. Nishihara, S. Jegelka and M.I. Jordan. On the linear convergence rate of decomposable submodular function minimization. NIPS 2014.
S. Jegelka, H. Lin and J. Bilmes. On fast approximate submodular minimization. NIPS 2011.

Cooperative Graph cuts in Computer Vision

Graph cuts have been widely used as a generic tool in combinatorial optimization. Replacing the common sum of edge weights by a submodular function enhances the representative capabilities of graph cuts. For example, graph cuts have been popular for MAP inference in pairwise Markov Random Fields, used for image segmentation. These have some well-known shortcomings: the optimal segmentations tend to short-cut fine object boundaries, in particular when the contrast is low. Cooperative cuts enable us to introduce complex long-range dependencies between variables (high-order potentials), such as incorporating global information about the object boundary, and thereby lead to much better segmentations.
Cooperative cuts indeed unify and generalize a number of higher-order energy functions that have been used in Computer Vision.
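In symbols (generic form): a standard graph cut minimizes a sum of edge weights over the cut edges $\delta(S)$, while a cooperative cut replaces that sum by a submodular function of the entire cut edge set,
\[
\min_{S \subseteq V} \sum_{e \in \delta(S)} w_e
\quad\longrightarrow\quad
\min_{S \subseteq V} f\big(\delta(S)\big), \qquad f \text{ submodular over edge sets},
\]
so that, for instance, a group of similar low-contrast boundary edges can be discounted jointly rather than penalized additively.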

P. Kohli, A. Osokin and S. Jegelka. A principled deep random field for image segmentation. CVPR 2013
S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. CVPR 2011. (see also the description and demo by Evan Shelhamer here)
S. Jegelka and J. Bilmes. Multi-label Cooperative Cuts. CVPR 2011 Workshop on Inference in Graphical Models with Structured Potentials.

Probabilistic inference and submodular edge weights

Submodular functions over pairs of variables (edges) provide structure not only for optimization, but also for full approximate probabilistic inference. We provide a framework for efficient inference in such models that exploits both the structure of an underlying graph and the polyhedral structure of submodularity to compute lower and upper bounds on the partition function of high-treewidth probabilistic models, as well as approximate marginal probabilities.

J. Djolonga, S. Jegelka, S. Tschiatschek, A. Krause. Cooperative Graphical Models. NIPS, 2016.

Submodular edge weights in combinatorial problems - theory and algorithms

Starting with (a generalized) minimum cut, we study combinatorial problems where instead of a sum of edge weights, we have a submodular function on the edges. In their most general form, such problems are usually very hard, with polynomial lower bounds on the approximation factor (as several recent works show). But with some assumptions, efficient algorithms can give very decent results.
Motivated by good empirical results, we continued to study properties of functions that affect the complexity of submodular problems: the curvature of the function is a general complexity measure for minimization, maximization, approximation and learning.
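For a monotone submodular function $f$, the curvature is (standard definition)
\[
\kappa_f \;=\; 1 - \min_{e \in V} \frac{f(V) - f(V \setminus \{e\})}{f(\{e\})} \;\in\; [0, 1],
\]
with $\kappa_f = 0$ for modular (additive) functions; many of the guarantees in the papers below are stated as functions of $\kappa_f$.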

S. Jegelka and J. Bilmes. Graph Cuts with Interacting Edge Costs - Examples, Approximations, and Algorithms. Mathematical Programming Ser. A, pp. 1-42, 2016. (arXiv version)
R. Iyer, S. Jegelka and J. Bilmes. Monotone Closure of Relaxed Constraints in Submodular Optimization: Connections Between Minimization and Maximization. UAI 2014.
R. Iyer, S. Jegelka and J. Bilmes. Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions. NIPS 2013.
R. Iyer, S. Jegelka and J. Bilmes. Fast Semidifferential-based Submodular Function Optimization. ICML 2013
S. Jegelka and J. Bilmes. Approximation bounds for inference using cooperative cut. ICML 2011.

Sequential combinatorial problems with submodular costs

Sequential decision problems ask us to repeatedly solve an optimization problem with an unknown, changing cost function. A decision for the current step must be made based on the costs observed at previous steps, but not the current one. Such problems become challenging when the optimization problem is combinatorial and the decision space therefore exponentially large. We address sequential combinatorial problems and derive the first algorithms that handle nonlinear, submodular cost functions.
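The standard performance measure in this setting is regret against the best fixed decision in hindsight,
\[
R_T \;=\; \sum_{t=1}^{T} f_t(S_t) \;-\; \min_{S} \sum_{t=1}^{T} f_t(S),
\]
where $S_t$ is the combinatorial decision chosen before the cost $f_t$ is revealed; the goal is regret that grows sublinearly in $T$.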

S. Jegelka and J. Bilmes. Online submodular minimization for combinatorial structures. ICML 2011.

Finding Diverse Subsets in Exponentially-Large Structured Item Sets

To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language Processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space is exponentially large - e.g., all possible labelings of an image, or all possible parse trees. We study greedy algorithms for finding a diverse subset of solutions in such combinatorial spaces by drawing new connections between submodular functions over combinatorial item sets and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show via examples that when the marginal gains of submodular diversity functions allow structured representations, efficient (sub-linear time) approximate maximization becomes possible by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs.
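The sketch below shows the generic greedy scheme this builds on, applied to a facility-location style objective chosen purely for illustration; the papers' contribution is making the argmax step tractable over exponentially large structured spaces via factor-graph inference with HOPs, whereas this toy version simply enumerates an explicit candidate list.

import numpy as np

def greedy_submodular_max(sim, k):
    """Greedy maximization of F(S) = sum_i max_{j in S} sim[i, j] (monotone submodular)."""
    n = sim.shape[0]
    covered = np.zeros(n)              # best similarity of each item to the chosen set so far
    S = []
    for _ in range(k):
        # marginal gain of candidate j = coverage it adds beyond what is already covered
        gains = np.maximum(sim - covered[:, None], 0.0).sum(axis=0)
        gains[S] = -np.inf             # do not re-pick chosen items
        j = int(np.argmax(gains))
        S.append(j)
        covered = np.maximum(covered, sim[:, j])
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
sim = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(greedy_submodular_max(sim, 4))   # indices of 4 mutually representative items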

A. Prasad, S. Jegelka and D. Batra. Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets. NIPS 2014.



Concurrency control for machine learning


Many machine learning algorithms iteratively transform some global state (e.g., model parameters or variable assignments), giving the illusion of serial dependencies between operations. However, due to sparsity, exchangeability, and other symmetries, it is often the case that many, but not all, of the state-transforming operations can be computed concurrently while still preserving serializability: equivalence to some serial execution in which individual operations have been reordered.
This opportunity for serializable concurrency forms the foundation of distributed database systems. In this project, we implement the updates in machine learning algorithms as concurrent transactions in a distributed database. As a result, we achieve high scalability while maintaining the semantics and theoretical properties of the original serial algorithm.

X. Pan, S. Jegelka, J. Gonzalez, J. Bradley and M.I. Jordan. Parallel Double Greedy Submodular Maximization. NIPS 2014.
X. Pan, J. Gonzalez, S. Jegelka, T. Broderick and M.I. Jordan. Optimistic Concurrency Control for Distributed Unsupervised Learning. NIPS 2013



Weakly supervised object detection


Learning to localize objects in images is a fundamental problem in computer vision. For this problem (as for many others), we are increasingly faced with the issue that accurately labeled training data is expensive and hence scarce. We therefore desire algorithms that are robust to weak labelings, i.e., image-level labels such as "the object is present" (instead of object locations). We address this problem via a combination of combinatorial and convex optimization: a discriminative submodular cover problem and a smoothed SVM formulation.

H. Song, Y.J. Lee, S. Jegelka and T. Darrell. Weakly-supervised Discovery of Visual Pattern Configurations. NIPS 2014.
H. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui and T. Darrell. On learning to localize objects with minimal supervision. ICML 2014



Clustering and graph partitioning


LP Stability for graph partitioning problems

Data in machine learning often arises from noisy measurements. When such data is used in an optimization problem, it is beneficial to know the stability of the optimal solution to perturbations in the data. We show a method for analyzing this stability for LP relaxations of graph partitioning problems. The method can handle the exponential number of constraints and applies to problems such as correlation clustering, clustering aggregation or modularity clustering.

S. Nowozin and S. Jegelka. Solution stability in linear programming relaxations: graph partitioning and unsupervised learning. ICML 2009.

Separating distributions by higher-order moments

Many clustering criteria aim for clusters that are spatially separated. For example, the popular k-means criterion seeks clusters whose means are maximally far apart. If the data is assumed to be a sample from a mixture of distributions and we want to recover the underlying distributions, then spatial separation may not be the ideal criterion - e.g., if we have two Gaussians with the same mean but different variances. Using a kernel criterion, however, we can separate distributions by higher-order moments. This observation also explains the ability of the kernel k-means algorithm, for example, to separate distributions by moments other than the mean.
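A toy illustration (not the paper's experiments): two overlapping 1-d samples with the same mean but very different variances. Plain k-means, which compares only cluster means, cannot tell them apart, while kernel k-means - Lloyd's algorithm with distances computed in an RBF feature space - is sensitive to the difference in spread.

import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Lloyd's algorithm in feature space, using only the kernel matrix K."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=K.shape[0])
    for _ in range(n_iter):
        dist = np.full((K.shape[0], k), np.inf)
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                continue
            # ||phi(x) - mu_c||^2 = K(x,x) - 2 * mean_j K(x,j) + mean_{j,l} K(j,l)
            dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.3, 200), rng.normal(0, 3.0, 200)])[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2)              # RBF kernel, bandwidth 1
labels = kernel_kmeans(K, k=2)
# Typically, the two recovered clusters differ strongly in spread around the shared mean:
for c in range(2):
    pts = x[labels == c, 0]
    print(c, len(pts), np.mean(np.abs(pts)) if len(pts) else float("nan"))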

S. Jegelka, A. Gretton, B. Schoelkopf, B.K. Sriperumbudur and U. von Luxburg. Generalized clustering via kernel embeddings. KI 2009.

Tensor Clustering

The tensor clustering problem generalizes co-clustering (also called biclustering) from matrices to tensors. Informally, we aim to partition a given tensor into homogeneous "cubes". Formally, we want to find the closest low-rank factorization of a particular form. We show the first approximation bounds for tensor clustering with metrics and Bregman divergences. This work also illustrates the limits of ignoring the "co" in co-clustering.

S. Jegelka, S. Sra and A. Banerjee. Approximation algorithms for tensor clustering. ALT 2009.

Statistically consistent clustering

Most clustering problems correspond to NP-hard optimization problems. Furthermore, even if we could find the optimal solution, the procedure may fail to be statistically consistent. Therefore, we relax computationally hard clustering problems (such as k-means or normalized cut) to formulations that can be solved exactly in polynomial time, are statistically consistent, and converge to the solution of the given objective as the number of sample points grows.

U. von Luxburg, S. Bubeck, S. Jegelka and M. Kaufmann. Consistent minimization of clustering objective functions. NIPS 2007



Kernel independent component analysis

In Independent Component Analysis (ICA), we observe a linear mixture of signals from independent source distributions and aim to recover the unknown sources. Kernel dependence measures have proved particularly useful for ICA. However, optimizing such a kernel criterion over the special orthogonal group is a difficult optimization problem, and can quickly become inefficient as the kernel matrices become large. We therefore derive an approximate Newton method that handles these problems more efficiently. Empirically, the method compares favorably to state-of-the-art ICA methods.
In earlier work, we explored the effectiveness of different factorizations for approximating large kernel matrices.

H. Shen, S. Jegelka and A. Gretton. Fast Kernel-based Independent Component Analysis. IEEE Transactions on Signal Processing 57(9), pp. 3498-3511, 2009.
H. Shen, S. Jegelka and A. Gretton. Fast Kernel ICA using an Approximate Newton Method. AISTATS, 2007.
S. Jegelka and A. Gretton. Brisk Kernel Independent Component Analysis. In L. Bottou, O. Chapelle, D. DeCoste, J. Weston, editors. Large Scale Kernel Machines, pp. 225-250. MIT Press, 2007.