I’m an Assistant Professor of Computer Science at Wellesley College, where I lead the Model-Guided Uncertainty (MOGU) Lab. My research focuses on developing new machine learning methods to advance the understanding, prediction, and prevention of suicide and related behaviors.
Before joining Wellesley, I was a postdoctoral fellow at the Nock Lab in the Department of Psychology at Harvard University and Mass General Hospital. I completed my Ph.D. in Machine Learning at the Data to Actionable Knowledge Lab (DtAK) at Harvard, working with Professor Finale Doshi-Velez. I had the pleasure of interning with the Biomedical-ML team at Microsoft Research New England (Summer 2021). Lastly, I received a Master of Music in Contemporary Improvisation from the New England Conservatory (2016) and a Bachelor of Arts in Computer Science from Harvard University (2015), and I continue to perform as a musician.
Selected Publications
For a complete list, see my publications page.
- Teaching Probabilistic Machine Learning in the Liberal Arts: Empowering Socially and Mathematically Informed AI Discourse
Y Yacoby
Accepted @ SIGCSE 2026 (Oral Presentation)
We present a new undergraduate ML course at our institution, a small liberal arts college serving students minoritized in STEM, designed to empower students to critically connect the mathematical foundations of ML with its sociotechnical implications. We propose a "framework-focused" approach, teaching students the language and formalism of probabilistic modeling while leveraging probabilistic programming to lower mathematical barriers. We introduce methodological concepts through a whimsical yet realistic theme, the "Intergalactic Hypothetical Hospital," to make the content both relevant and accessible. Finally, we pair each technical innovation with counter-narratives that challenge its value, drawing on real, open-ended case studies to cultivate dialectical thinking. By encouraging creativity in modeling and highlighting unresolved ethical challenges, we help students recognize the value of, and need for, their unique perspectives, empowering them to participate confidently in AI discourse as technologists and critical citizens.
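As a rough illustration of how probabilistic programming can lower mathematical barriers, here is a minimal sketch in PyMC; the model, variable names, and data are invented for illustration and are not actual course material:

```python
# Hypothetical example: estimating a treatment success rate with a Beta-Binomial model.
# Students declare the generative story; a general-purpose sampler handles the inference math.
import pymc as pm

successes, trials = 7, 10  # invented data

with pm.Model() as model:
    rate = pm.Beta("rate", alpha=1.0, beta=1.0)               # prior belief about the success rate
    pm.Binomial("obs", n=trials, p=rate, observed=successes)  # likelihood of the observed outcomes
    idata = pm.sample(draws=1000, tune=1000, progressbar=False)  # posterior via MCMC, no hand-derived updates

print(float(idata.posterior["rate"].mean()))  # posterior mean of the success rate
```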
- Improving Forecasts of Suicide Attempts for Patients with Little Data
Accepted @ NeurIPS TS4H 2025
Ecological Momentary Assessment provides real-time data on suicidal thoughts and behaviors, but predicting suicide attempts remains challenging due to their rarity and patient heterogeneity. We show that a single model fit to all patients performs poorly, while individualized models overfit when data are limited. To address this, we introduce a Latent Similarity Gaussian Process (LSGP) that models patient heterogeneity, enabling patients with little data to leverage similar patients’ trends. Preliminary results show improved sensitivity over baselines and offer new insight into patient similarity.
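One generic way to formalize this kind of sharing is a multi-task Gaussian process with latent patient embeddings; this is only a sketch of the general idea, not necessarily the LSGP construction used in the paper, and the kernel names and notation are mine:

$$
y_i(t) = f(i, t) + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2), \qquad
f \sim \mathcal{GP}(0, k), \quad
k\big((i,t),(j,t')\big) = k_{\text{pat}}(z_i, z_j)\, k_{\text{time}}(t, t'),
$$

where $z_i$ is a latent embedding of patient $i$. Patients with similar embeddings share statistical strength, so a patient with few observations can borrow trends from similar patients.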
- Neural Stochastic Differential Equations on Compact State-Spaces
Accepted @ ICML MOSS 2025
Many modern probabilistic models rely on stochastic differential equations (SDEs), but their adoption is hampered by instability, poor inductive bias outside bounded domains, and reliance on restrictive dynamics or training tricks. While recent work constrains SDEs to compact spaces using reflected dynamics, these approaches lack continuous dynamics and efficient high-order solvers, limiting interpretability and applicability. We propose a novel class of neural SDEs on compact polyhedral spaces that have continuous dynamics, are amenable to higher-order solvers, and carry a favorable inductive bias.
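For context, a neural SDE models a state $X_t$ with learned drift and diffusion networks (this is the standard formulation; the paper's compactness construction is not reproduced here):

$$
\mathrm{d}X_t = f_\theta(X_t, t)\,\mathrm{d}t + g_\theta(X_t, t)\,\mathrm{d}W_t,
$$

where $W_t$ is Brownian motion and $f_\theta$, $g_\theta$ are neural networks. The paper constructs SDEs of this kind whose solutions remain in a compact polyhedral space while keeping the dynamics continuous, rather than reflecting trajectories at the boundary.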
- Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
Accepted @ JMLR 2022
Previous version accepted @ ICML UDL 2019 (Spotlight Talk)
Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input noise variable). In this work, we first show that BNN+LVs suffer from a serious form of non-identifiability: explanatory power can be transferred between the model parameters and latent variables while fitting the data equally well. We demonstrate that as a result, in the limit of infinite data, the posterior mode over the network weights and latent variables is asymptotically biased away from the ground truth. Due to this asymptotic bias, traditional inference methods may in practice yield parameters that generalize poorly and misestimate uncertainty. Next, we develop a novel inference procedure that explicitly mitigates the effects of likelihood non-identifiability during training and yields high-quality predictions as well as uncertainty estimates. We demonstrate that our inference method improves upon benchmark methods across a range of synthetic and real datasets.
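A toy illustration of likelihood non-identifiability, much simpler than the BNN+LV setting analyzed in the paper (here the trade-off is between a linear latent pathway and observation noise, whereas in BNN+LV explanatory power transfers between network weights and latent inputs): suppose

$$
y = f_\theta(x) + a\,z + \varepsilon, \qquad z \sim \mathcal{N}(0, 1), \quad \varepsilon \sim \mathcal{N}(0, \sigma^2).
$$

Marginalizing over $z$ gives $y \mid x \sim \mathcal{N}\big(f_\theta(x),\, a^2 + \sigma^2\big)$, so any $(a, \sigma)$ with the same value of $a^2 + \sigma^2$ yields an identical likelihood: the data alone cannot pin down the ground-truth decomposition, which is the kind of ambiguity the paper's inference procedure is designed to mitigate.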