Thursday, February 22, 2018 - 8:00am to 10:00am


1507 Newell-Simon Hall


Jesse Dodge

For More Information, Contact:

Stacey Young

The LTI is proud to announce the following PhD Thesis Proposal:

Modeling Diversity in the Machine Learning Pipeline


Thesis Committee:

Noah Smith (Chair)
Pradeep Ravikumar
Barnabás Póczos
Kevin Jamieson (University of Washington)


Randomness is a foundation on which many aspects of the machine learning pipeline are built. From training models with stochastic gradient descent to tuning hyperparameters with random search, independent random sampling is ubiquitous. While independent sampling can be fast, it can also lead to undesirable properties, such as sets of samples that are very similar to one another. Relaxing the independence assumption, we propose to examine the role of diversity in these samples, especially in cases where we have limited computation, limited space, or limited data. We address three applications: tuning hyperparameters, subsampling data, and ensemble generation.

Hyperparameter optimization requires training and evaluating numerous models, which can be time-consuming. Random search allows these models to be trained and evaluated fully in parallel; we model diversity explicitly in this regime, and find that sets of hyperparameter assignments that are more diverse lead to better optima than random search, a low-discrepancy sequence, and a sequential Bayesian optimization approach.
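To illustrate the idea (this is a minimal sketch, not the method from the proposal), one simple way to make a set of hyperparameter assignments more spread out than an independent random draw is a greedy farthest-point (maximin) selection over a larger random candidate pool. The helper names and the unit-square search space below are assumptions for illustration:

```python
import math
import random

def min_pairwise(points):
    """Smallest distance between any two points in the set."""
    return min(math.dist(a, b)
               for i, a in enumerate(points) for b in points[i + 1:])

def maximin_select(pool, k):
    """Greedily pick k points from the pool, each time taking the candidate
    farthest from everything chosen so far (farthest-point heuristic)."""
    chosen = [pool[0]]
    while len(chosen) < k:
        best = max(pool, key=lambda p: min(math.dist(p, c) for c in chosen))
        chosen.append(best)
    return chosen

random.seed(0)
# 200 candidate hyperparameter assignments, scaled to the unit square
pool = [(random.random(), random.random()) for _ in range(200)]
diverse = maximin_select(pool, 10)
```

The selected points keep the full parallelism of random search (they are fixed before any model is trained) while avoiding near-duplicate configurations.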

Drawing a subset of a dataset is useful in many applications, such as when the full training set is too large to fit into memory. How these subsets are drawn matters, as their distribution may differ significantly from that of the full dataset, especially when some labels are much more common than others. We propose to sample smaller datasets that are diverse in terms of both labels and features.
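As a hedged sketch of this kind of sampling (again, not the proposal's own algorithm), one can split the sample budget evenly across labels and then, within each label, greedily pick points that are spread out in feature space. The function name and the imbalanced toy data are invented for illustration:

```python
import math
import random
from collections import defaultdict

def stratified_diverse_sample(X, y, k):
    """Draw ~k examples: split the budget evenly across labels (label
    diversity), then within each label greedily pick points spread out in
    feature space (farthest-point heuristic)."""
    by_label = defaultdict(list)
    for x, lab in zip(X, y):
        by_label[lab].append(x)
    labels = sorted(by_label)
    per_label = k // len(labels)
    sample = []
    for lab in labels:
        pts = by_label[lab]
        chosen = [pts[0]]
        while len(chosen) < min(per_label, len(pts)):
            far = max(pts, key=lambda p: min(math.dist(p, c) for c in chosen))
            chosen.append(far)
        sample.extend((p, lab) for p in chosen)
    return sample

random.seed(0)
# Imbalanced toy data: 90 examples of label 0, only 10 of label 1
X = [(random.random(), random.random()) for _ in range(100)]
y = [0] * 90 + [1] * 10
sample = stratified_diverse_sample(X, y, 10)
```

Independent sampling of 10 points from this dataset would yield roughly one example of the rare label on average; the stratified-diverse draw guarantees equal label representation.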

Ensembling models is a popular technique when minimizing task-specific errors is more important than computational efficiency or interpretability. Diversity among models in an ensemble is tied to the ensemble's error, with more diversity often leading to lower error. We propose to apply our diversity promoting techniques to ensemble generation, first when selecting data to train the base models (as in bagging), and second when choosing which already-trained models will comprise an ensemble.
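Both stages mentioned above can be sketched in a few lines; the following is an illustrative toy (decision stumps on 1-D data, with the training population reused as validation for brevity), not the proposal's method. Bootstrap resampling stands in for data selection, and a simple greedy forward selection chooses which trained models join the ensemble:

```python
import random

def train_stump(data):
    """Fit a 1-D decision stump: predict 1 when sign * (x - t) > 0,
    choosing threshold t and sign with the fewest training mistakes."""
    best, best_err = (data[0][0], 1), float("inf")
    for t, _ in data:
        for sign in (1, -1):
            err = sum(int(sign * (x - t) > 0) != y for x, y in data)
            if err < best_err:
                best, best_err = (t, sign), err
    return best

def predict(model, x):
    t, sign = model
    return int(sign * (x - t) > 0)

def greedy_ensemble(models, val, size):
    """Forward selection: repeatedly add whichever model (repeats allowed)
    most lowers majority-vote error on a validation set."""
    chosen = []
    for _ in range(size):
        def vote_err(cand):
            ens = chosen + [cand]
            return sum(int(2 * sum(predict(m, x) for m in ens) > len(ens)) != y
                       for x, y in val)
        chosen.append(min(models, key=vote_err))
    return chosen

random.seed(0)
# Noiseless 1-D task: label is 1 exactly when x > 0.5
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(100))]
# Bagging-style stage 1: each stump trains on its own bootstrap resample
models = [train_stump([random.choice(data) for _ in range(60)]) for _ in range(7)]
# Stage 2: pick a small ensemble from the trained models
ensemble = greedy_ensemble(models, data, 3)
```

Each bootstrap resample gives the base models different views of the data, and the forward-selection stage keeps only combinations that actually reduce validation error.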

For a copy of the thesis proposal, please use the following link:


Thesis Proposal