I’m an Assistant Professor of Computer Science at Wellesley College, where I lead the Model-Guided Uncertainty (MOGU) Lab. My research focuses on developing new machine learning methods to advance the understanding, prediction, and prevention of suicide and related behaviors.
Before joining Wellesley, I was a postdoctoral fellow at the Nock Lab in the Department of Psychology at Harvard University and Mass General Hospital. I completed my Ph.D. in Machine Learning at the Data to Actionable Knowledge Lab (DtAK) at Harvard, working with Professor Finale Doshi-Velez. I had the pleasure of interning with the Biomedical-ML team at Microsoft Research New England (Summer 2021). Before that, I received a Master of Music in Contemporary Improvisation from the New England Conservatory (2016) and a Bachelor of Arts in Computer Science from Harvard University (2015). I continue to perform as a musician.
Selected Publications
For a complete list, see my publications page.
-
Neural Stochastic Differential Equations on Compact State Spaces: Theory, Methods, and Application to Suicide Risk Modeling
Full paper on arXiv 2026
Previous version accepted @ ICML MOSS 2025
Ecological Momentary Assessment (EMA) studies enable the collection of high-frequency self-reports of suicidal thoughts and behaviors (STBs) via smartphones. Latent stochastic differential equations (SDEs) are a promising model class for EMA data, which are irregularly sampled, noisy, and partially observed. However, SDE-based models suffer from two key limitations: (a) they often violate domain constraints, undermining the model's scientific validity and clinical trustworthiness, and (b) training is numerically unstable without ad-hoc fixes (e.g., oversimplified dynamics) that are ill-suited for high-stakes applications. Here, we develop a novel class of expressive SDEs whose solutions are provably confined to a prescribed compact polyhedral state space, matching the domains of EMA data. (1) We show, theoretically and empirically, why chain-rule-based constructions of SDEs on compact domains fail; (2) we derive constraints on the drift and diffusion of non-stationary and stationary SDEs so that their solutions remain on the desired state space; and (3) we introduce a parameterization that maps arbitrary (neural or expert-given) dynamics into constraint-satisfying SDEs. On several real EMA datasets, including a large suicide-risk study, our parameterization improves inductive bias, training dynamics, and predictive performance over standard latent neural SDE baselines. These contributions pave the way for principled, trustworthy continuous-time models of suicide risk and other clinical time series; they also extend the reach of SDE-based methods (e.g., diffusion models) to domains with hard state constraints.
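The boundary-confinement idea can be illustrated with a classical example rather than the paper's parameterization: in a Jacobi-type diffusion on [0, 1], the diffusion coefficient vanishes at the boundary and the drift points inward, so simulated paths tend to stay in the interval, while an SDE with unconstrained coefficients wanders out. A minimal Euler–Maruyama sketch (all coefficients here are illustrative, not the model from the paper):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt=1e-3, n_steps=5000, seed=0):
    """Simulate a 1-D SDE dX = drift(X) dt + diffusion(X) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[t + 1] = x[t] + drift(x[t]) * dt + diffusion(x[t]) * dw
    return x

# Unconstrained SDE: constant diffusion lets paths drift out of [0, 1].
free = euler_maruyama(lambda x: 0.0, lambda x: 0.5, x0=0.5)

# Jacobi-type SDE: mean-reverting drift plus a diffusion coefficient that
# vanishes at 0 and 1 keeps paths inside the unit interval.
jacobi = euler_maruyama(lambda x: 2.0 * (0.5 - x),
                        lambda x: 0.5 * np.sqrt(max(x * (1.0 - x), 0.0)),
                        x0=0.5)

print(free.min(), free.max())      # unconstrained path typically leaves [0, 1]
print(jacobi.min(), jacobi.max())  # constrained path stays inside
```

This is only the intuition; the paper's contribution is making such constraints compatible with expressive neural drift/diffusion on general compact polyhedral domains.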
-
Teaching Probabilistic Machine Learning in the Liberal Arts: Empowering Socially and Mathematically Informed AI Discourse
Y Yacoby
Accepted @ SIGCSE 2026 (Oral Presentation)
We present a new undergraduate ML course at our institution, a small liberal arts college serving students minoritized in STEM, designed to empower students to critically connect the mathematical foundations of ML with its sociotechnical implications. We propose a "framework-focused" approach, teaching students the language and formalism of probabilistic modeling while leveraging probabilistic programming to lower mathematical barriers. We introduce methodological concepts through a whimsical yet realistic theme, the "Intergalactic Hypothetical Hospital," to make the content both relevant and accessible. Finally, we pair each technical innovation with counter-narratives that challenge its value, using real, open-ended case studies to cultivate dialectical thinking. By encouraging creativity in modeling and highlighting unresolved ethical challenges, we help students recognize the value of, and need for, their unique perspectives, empowering them to participate confidently in AI discourse as both technologists and critical citizens.
-
Improving Forecasts of Suicide Attempts for Patients with Little Data
Accepted @ NeurIPS TS4H 2025
Ecological Momentary Assessment provides real-time data on suicidal thoughts and behaviors, but predicting suicide attempts remains challenging due to their rarity and patient heterogeneity. We show that single models fit to all patients perform poorly, while individualized models overfit with limited data. To address this, we introduce a Latent Similarity Gaussian Process (LSGP) that models patient heterogeneity, enabling those with little data to leverage similar patients’ trends. Preliminary results show improved sensitivity over baselines and offer new understanding of patient similarity.
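The motivation here (one global model underfits patient heterogeneity, while per-patient models overfit scarce data) is the classic partial-pooling trade-off. A toy sketch of that trade-off using simple shrinkage toward the population mean, not the LSGP itself, with all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                              # patients
true_means = rng.normal(0.0, 2.0, size=n)           # heterogeneous risk levels
counts = rng.integers(2, 6, size=n)                 # very little data each
data = [m + rng.normal(0.0, 3.0, size=c) for m, c in zip(true_means, counts)]

pooled = np.mean(np.concatenate(data))              # one model for everyone
individual = np.array([d.mean() for d in data])     # one model per patient

# Partial pooling: shrink each patient's estimate toward the population,
# more strongly when that patient has little data.
k = 3.0  # pseudo-count controlling shrinkage strength (illustrative choice)
shrunk = np.array([(d.sum() + k * pooled) / (len(d) + k) for d in data])

def mse(est):
    return float(np.mean((est - true_means) ** 2))

# Shrinkage typically lands between the two extremes and improves on both.
print(mse(np.full(n, pooled)), mse(individual), mse(shrunk))
```

The LSGP goes further by learning *which* patients are similar, so borrowing happens between like patients rather than toward a single global mean.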
-
Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
Accepted @ JMLR 2022
Previous version accepted @ ICML UDL 2019 (Spotlight Talk)
Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input-noise variable). In this work, we first show that BNN+LVs suffer from a serious form of non-identifiability: explanatory power can be transferred between the model parameters and the latent variables while fitting the data equally well. We demonstrate that, as a result, in the limit of infinite data the posterior mode over the network weights and latent variables is asymptotically biased away from the ground truth. Due to this asymptotic bias, traditional inference methods may in practice yield parameters that generalize poorly and misestimate uncertainty. Next, we develop a novel inference procedure that explicitly mitigates the effects of likelihood non-identifiability during training and yields high-quality predictions as well as uncertainty estimates. We demonstrate that our inference method improves upon benchmark methods across a range of synthetic and real datasets.
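The core non-identifiability is easy to see in a stripped-down example (a single weight with additive latent input noise, not the paper's BNN): shifting explanatory power from the weight to the latent variables leaves the likelihood exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.5 * x + rng.normal(scale=0.3, size=50)     # observed data

def log_lik(w, z, sigma=0.3):
    """Log-likelihood (up to a constant) of y_n = w*x_n + z_n + eps_n."""
    resid = y - (w * x + z)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Two very different (weight, latent-variable) explanations of the same data:
ll_true = log_lik(1.5, np.zeros(50))             # the weight explains the signal
ll_swap = log_lik(0.0, 1.5 * x)                  # the latent variables absorb it
print(np.isclose(ll_true, ll_swap))              # True: identical fit
```

Because the likelihood cannot distinguish these explanations, only the priors break the tie, and the paper shows the resulting posterior mode can be biased away from the truth even with infinite data.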