I’m an Assistant Professor of Computer Science at Wellesley College, where I lead the Model-Guided Uncertainty (MOGU) Lab. My research focuses on developing new machine learning methods to advance the understanding, prediction, and prevention of suicide and related behaviors.
Before joining Wellesley, I was a postdoctoral fellow at the Nock Lab in the Department of Psychology at Harvard University and Mass General Hospital. I completed my Ph.D. in Machine Learning at the Data to Actionable Knowledge Lab (DtAK) at Harvard, working with Professor Finale Doshi-Velez. I had the pleasure of interning with the Biomedical-ML team at Microsoft Research New England (Summer 2021). Lastly, I received a Master of Music in Contemporary Improvisation from the New England Conservatory (2016) and a Bachelor of Arts in Computer Science from Harvard University (2015). I am also a performing musician.
Selected Publications
For a complete list, see my publications page.
- Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders
  Accepted @ Workshop at AABI 2024
Inference for Variational Autoencoders (VAEs) consists of learning two models: (1) a generative model, which transforms a simple distribution over a latent space into the distribution over observed data, and (2) an inference model, which approximates the posterior of the latent codes given data. The two components are learned jointly via a lower bound to the generative model’s log marginal likelihood. In early phases of joint training, the inference model poorly approximates the latent code posteriors. Recent work showed that this causes optimization to get stuck in local optima, degrading the learned generative model, and therefore suggests ensuring a high-quality inference model via iterative training: maximizing the objective relative to the inference model before every update to the generative model. Unfortunately, iterative training is inefficient, requiring heuristic criteria for reverting from iterative to joint training for speed. Here, we suggest an inference method that trains the generative and inference models independently. It approximates the posterior of the true model a priori; fixing this posterior approximation, we then maximize the lower bound relative to only the generative model. Conventional wisdom suggests that approximating the true model’s posterior requires its prior and likelihood, which are unknown. However, we show that we can compute a deterministic, model-agnostic posterior approximation (MAPA) of the true model’s posterior. We then use MAPA to develop a proof-of-concept inference method. We present preliminary results on low-dimensional synthetic data showing that (1) MAPA captures the trend of the true posterior, and (2) our MAPA-based inference achieves better density estimation with less computation than baselines. Lastly, we present a roadmap for scaling the MAPA-based inference method to high-dimensional data.
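To make the setup above concrete, the sketch below shows the standard joint VAE training objective: a lower bound maximized with respect to the inference and generative models at every step. This is a minimal illustration assuming PyTorch; the network names, sizes, and Gaussian likelihood are illustrative choices, and it is not the paper's MAPA method, which instead fixes a posterior approximation in advance and updates only the generative model.

```python
# Minimal joint-training VAE sketch (assumes PyTorch; architecture and data are toy choices).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Inference model q(z | x): outputs the mean and log-variance of a Gaussian."""
    def __init__(self, x_dim=2, z_dim=1, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

class Decoder(nn.Module):
    """Generative model p(x | z): maps latent codes to the mean of a Gaussian likelihood."""
    def __init__(self, x_dim=2, z_dim=1, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(), nn.Linear(hidden, x_dim))

    def forward(self, z):
        return self.body(z)

def elbo(x, encoder, decoder):
    """Single-sample Monte Carlo estimate of the lower bound on log p(x)."""
    mu, log_var = encoder(x)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)        # reparameterization trick
    recon = decoder(z)
    log_lik = -0.5 * ((x - recon) ** 2).sum(dim=1)                  # unit-variance Gaussian likelihood (up to a constant)
    kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1.0).sum(dim=1) # KL( q(z|x) || N(0, I) )
    return (log_lik - kl).mean()

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    # Joint training: every gradient step updates the inference and generative models together.
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    x = torch.randn(128, 2)  # stand-in for low-dimensional synthetic data
    for _ in range(200):
        opt.zero_grad()
        (-elbo(x, enc, dec)).backward()
        opt.step()
```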
- Empowering First-Year Computer Science Ph.D. Students to Create a Culture that Values Community and Mental Health
  Accepted @ SIGCSE 2023 (Oral Presentation)
Doctoral programs often have high rates of depression, anxiety, isolation, and imposter phenomenon. Consequently, graduating students may feel inadequately prepared for research-focused careers, contributing to an attrition of talent. Prior work identifies an important contributing factor to maladjustment: even with prior exposure to research, entering Ph.D. students often have problematically idealized views of science. These preconceptions can become obstacles to students’ own professional growth. Unfortunately, existing curricular and extracurricular programming in many doctoral programs does not include mechanisms to systematically address students’ misconceptions about their profession. In this work, we describe a new initiative at our institution that aims to address Ph.D. mental health via a mandatory seminar for entering doctoral students. The seminar is designed to build professional resilience in students by (1) increasing self-regulatory competence, and (2) teaching students to proactively examine academic cultural values and to participate in shaping them. Our evaluation indicates that students improved in both areas after completing the seminar.
- Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
  Accepted @ JMLR 2022; previous version accepted @ ICML UDL 2019 (Spotlight Talk)
Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input noise variable). In this work, we first show that BNN+LVs suffer from a serious form of non-identifiability: explanatory power can be transferred between the model parameters and latent variables while fitting the data equally well. We demonstrate that, as a result, in the limit of infinite data, the posterior mode over the network weights and latent variables is asymptotically biased away from the ground truth. Due to this asymptotic bias, traditional inference methods may in practice yield parameters that generalize poorly and misestimate uncertainty. Next, we develop a novel inference procedure that explicitly mitigates the effects of likelihood non-identifiability during training and yields high-quality predictions as well as uncertainty estimates. We demonstrate that our inference method improves upon benchmark methods across a range of synthetic and real datasets.
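As a rough illustration of this kind of non-identifiability (a toy sketch under simplifying assumptions, not the BNN+LV model or the inference procedure from the paper), consider a linear model with a latent input noise variable: a model parameter (the bias) and the latent variable's mean trade off exactly, and so do the latent and observation noise variances, so different parameter settings achieve the same marginal likelihood.

```python
# Toy non-identifiability sketch (assumes NumPy/SciPy; a linear model, not a BNN+LV).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=1.0, size=200)  # ground truth: slope 2, unit total noise variance

def marginal_log_lik(w, b, mu_z, var_z, var_eps):
    """Log-likelihood of y | x for y = w*x + b + z + eps, with the latent
    z ~ N(mu_z, var_z) marginalized out and observation noise eps ~ N(0, var_eps)."""
    return norm.logpdf(y, loc=w * x + b + mu_z, scale=np.sqrt(var_z + var_eps)).sum()

# Explanatory power transfers between a model parameter (the bias b) and the latent
# variable's mean mu_z, and between var_z and var_eps: both settings fit identically.
print(marginal_log_lik(w=2.0, b=0.5, mu_z=-0.5, var_z=0.9, var_eps=0.1))
print(marginal_log_lik(w=2.0, b=0.0, mu_z=0.0, var_z=0.1, var_eps=0.9))
```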