I’m a CS PhD student at MIT, advised by David Sontag. My research lies at the intersection of causality and machine learning, and is generally motivated by technical challenges that arise when applying machine learning to healthcare data. During my PhD, I’ve focused on a few complementary themes:

  • Robustness to distribution shift: The performance of a model learned in one environment (e.g., a particular hospital) often degrades when it is used in other contexts. However, we often have some sense of the nature and degree of plausible shifts (e.g., differences in certain unobserved factors, like social determinants of health). This has motivated my work on finding interpretable worst-case shifts for a fixed prediction model [NeurIPS 2022], and on learning linear models that maintain performance across changes in unobserved factors when noisy proxies are available [ICML 2021].
  • “Debugging” causal models: Retrospective healthcare data is often used to learn better policies for treating disease when experimentation is infeasible. However, this requires strong causal assumptions, and not all policies can be reliably evaluated. This has motivated my work on developing methods that help domain experts assess the plausibility of causal models [ICML 2019, MS Thesis] and obtain interpretable characterizations of the subpopulations where a given policy can be evaluated [AISTATS 2020].

These methodological problems are informed by my applied work with clinical collaborators, such as learning antibiotic treatment policies [Science Trans. Med. 2020] and debugging reinforcement-learning algorithms for sepsis management [AMIA 2021].

Selected publications (Full List)

Evaluating Robustness to Dataset Shift via Parametric Robustness Sets
Nikolaj Thams*, Michael Oberst*, David Sontag
Neural Information Processing Systems (NeurIPS), 2022
[paper], [code]
*Equal Contribution, order determined by coin flip

Falsification before Extrapolation in Causal Effect Estimation
Zeshan Hussain*, Michael Oberst*, Ming-Chieh Shih*, David Sontag
Neural Information Processing Systems (NeurIPS), 2022
*Equal Contribution, alphabetical order

Regularizing towards Causal Invariance: Linear Models with Proxies
Michael Oberst, Nikolaj Thams, Jonas Peters, David Sontag
International Conference on Machine Learning (ICML), 2021
[paper], [video], [slides], [poster], [code]

A decision algorithm to promote outpatient antimicrobial stewardship for uncomplicated urinary tract infection
Sanjat Kanjilal, Michael Oberst, Sooraj Boominathan, Helen Zhou, David C. Hooper, David Sontag
Science Translational Medicine, 2020
[article], [code], [dataset]

Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models
Michael Oberst, David Sontag
International Conference on Machine Learning (ICML), 2019
[paper], [slides], [poster], [video]


Bias-robust Integration of Observational and Experimental Estimators
Michael Oberst, Alexander D’Amour, Minmin Chen, Yuyan Wang, David Sontag, Steve Yadlowsky
Oral presentation at the American Causal Inference Conference (ACIC), 2022

Invited Talks

Regularizing towards Causal Invariance: Linear Models with Proxies
Online Causal Inference Seminar
Stanford, March 29th, 2022
[video], [slides]

Primer: Learning Treatment Policies from Observational Data
Models, Inference, and Algorithms Seminar
Broad Institute, September 23rd, 2020
[video], [slides]


Teaching

Head TA for 6.867 (Machine Learning), Fall 2021
Received the Frederick C. Hennie III Award for teaching excellence.


Reviewing

Conferences: ICML 2022 (Top 10% of reviewers), NeurIPS 2022/2021, UAI 2021 (Top 5% of reviewers), AISTATS 2019
Journals: Journal of Causal Inference, Statistics & Computing, Bayesian Analysis