I’m an Assistant Professor of Computer Science at Wellesley College, where I lead the Model-Guided Uncertainty (MOGU) Lab. My research focuses on developing new machine learning methods to advance the understanding, prediction, and prevention of suicide and related behaviors.
Before joining Wellesley, I was a postdoctoral fellow in the Nock Lab in the Department of Psychology at Harvard University and Mass General Hospital. I completed my Ph.D. in Machine Learning in the Data to Actionable Knowledge Lab (DtAK) at Harvard, working with Professor Finale Doshi-Velez. I had the pleasure of interning with the Biomedical-ML team at Microsoft Research New England (Summer 2021). I also hold a Master of Music in Contemporary Improvisation from the New England Conservatory (2016) and a Bachelor of Arts in Computer Science from Harvard University (2015), and I continue to perform as a musician.
Selected Publications
For a complete list, see my publications page.
-
Teaching Probabilistic Machine Learning in the Liberal Arts: Empowering Socially and Mathematically Informed AI Discourse
Y Yacoby
Accepted @ SIGCSE 2026
We present a new undergraduate ML course at our institution, a small liberal arts college serving students minoritized in STEM, designed to empower students to critically connect the mathematical foundations of ML with its sociotechnical implications. We propose a "framework-focused" approach, teaching students the language and formalism of probabilistic modeling while leveraging probabilistic programming to lower mathematical barriers. We introduce methodological concepts through a whimsical yet realistic theme, the "Intergalactic Hypothetical Hospital," to make the content both relevant and accessible. Finally, we pair each technical innovation with counter-narratives that challenge its value using real, open-ended case studies to cultivate dialectical thinking. By encouraging creativity in modeling and highlighting unresolved ethical challenges, we help students recognize the value of, and need for, their unique perspectives, empowering them to participate confidently in AI discourse as technologists and as critical citizens.
-
Neural Stochastic Differential Equations on Compact State-Spaces
Accepted @ ICML MOSS 2025
Many modern probabilistic models rely on SDEs, but their adoption is hampered by instability, poor inductive bias outside bounded domains, and reliance on restrictive dynamics or training tricks. While recent work constrains SDEs to compact spaces using reflected dynamics, these approaches lack continuous dynamics and efficient high-order solvers, limiting interpretability and applicability. We propose a novel class of neural SDEs on compact polyhedral spaces with continuous dynamics, amenable to higher-order solvers, and with favorable inductive bias.
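For orientation, a neural SDE in its generic form (illustrative notation, not the specific construction proposed in this paper) models a state $X_t$ via learned drift and diffusion networks:

$$ dX_t = f_\theta(X_t, t)\,dt + g_\theta(X_t, t)\,dW_t, \qquad X_0 \sim p_0, $$

where $f_\theta$ and $g_\theta$ are neural networks and $W_t$ is Brownian motion. The contribution summarized above is a parameterization under which $X_t$ remains in a compact polyhedral domain while the dynamics stay continuous and compatible with higher-order solvers.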
-
Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders
Accepted @ AABI 2024 (Workshop Track)
Inference for Variational Autoencoders (VAEs) consists of learning two models: (1) a generative model, which transforms a simple distribution over a latent space into the distribution over observed data, and (2) an inference model, which approximates the posterior of the latent codes given the data. The two components are learned jointly via a lower bound on the generative model's log marginal likelihood. In early phases of joint training, the inference model poorly approximates the latent code posteriors. Recent work showed that this causes optimization to get stuck in local optima, negatively impacting the learned generative model. As such, recent work suggests ensuring a high-quality inference model via iterative training: maximizing the objective with respect to the inference model before every update to the generative model. Unfortunately, iterative training is inefficient, requiring heuristic criteria for when to revert from iterative to joint training for speed. Here, we suggest an inference method that trains the generative and inference models independently. It approximates the posterior of the true model a priori; fixing this posterior approximation, we then maximize the lower bound with respect to the generative model alone. Conventional wisdom says that such an approach requires the true model's prior and likelihood, which are unknown. However, we show that we can compute a deterministic, model-agnostic posterior approximation (MAPA) of the true model's posterior. We then use MAPA to develop a proof-of-concept inference method. We present preliminary results on low-dimensional synthetic data showing that (1) MAPA captures the trend of the true posterior, and (2) MAPA-based inference achieves better density estimation with less computation than baselines. Lastly, we present a roadmap for scaling MAPA-based inference to high-dimensional data.
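As a reference point (standard VAE notation, not specific to this paper), the jointly optimized lower bound on the log marginal likelihood is the familiar ELBO:

$$ \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right), $$

where $p_\theta$ is the generative model and $q_\phi$ the inference model. In the approach described above, a fixed, model-agnostic approximation of the true posterior plays the role of $q_\phi$, and the bound is then maximized over $\theta$ alone.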
-
Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
Accepted @ JMLR 2022
Previous version accepted @ ICML UDL 2019 (Spotlight Talk)
Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on the network weights) and environmental stochasticity (via a latent input noise variable). In this work, we first show that BNN+LVs suffer from a serious form of non-identifiability: explanatory power can be transferred between the model parameters and the latent variables while fitting the data equally well. We demonstrate that, as a result, in the limit of infinite data the posterior mode over the network weights and latent variables is asymptotically biased away from the ground truth. Due to this asymptotic bias, traditional inference methods may in practice yield parameters that generalize poorly and misestimate uncertainty. Next, we develop a novel inference procedure that explicitly mitigates the effects of likelihood non-identifiability during training and yields high-quality predictions as well as uncertainty estimates. We demonstrate that our inference method improves upon benchmark methods across a range of synthetic and real datasets.
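As a sketch of the model class (the standard BNN+LV form; the notation is illustrative rather than taken from the paper), each output is generated from the input, a per-datum latent noise variable, and weights drawn from a prior:

$$ y_n = f_W(x_n, z_n) + \epsilon_n, \qquad W \sim p(W), \quad z_n \sim \mathcal{N}(0, \sigma_z^2 I), \quad \epsilon_n \sim \mathcal{N}(0, \sigma_\epsilon^2), $$

so the same data can be explained by shifting variability between the weight posterior and the latent inputs, which is the non-identifiability analyzed above.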