Probabilistic Methods for Trustworthy Machine Learning by Prof. Tim Rudner, New York University
Wed, November 6th, 2024
1:00 pm - 1:50 pm
Statistics Colloquium, North Science Building 015, Wachenheim
Abstract:
Machine learning models, while effective in controlled environments, can fail catastrophically when exposed to unexpected conditions upon deployment. This lack of robustness, well-documented even in state-of-the-art models, can lead to severe harm in high-stakes, safety-critical application domains such as healthcare. This shortcoming raises a central question: How can we develop machine learning models we can trust?
In this talk, I will approach this question from a probabilistic perspective and address deficiencies in the trustworthiness of neural network models using Bayesian principles. Specifically, I will show how to improve the reliability and fairness of neural networks with data-driven, domain-informed prior distributions over model parameters. To do so, I will first demonstrate how to train neural networks with such priors using a simple learning objective with a regularizer that reflects the constraints implicitly encoded in the prior. I will then show how to construct and use domain-informed, data-driven priors to improve uncertainty quantification and group robustness in neural network models for selected application domains. Throughout this talk, I will highlight carefully designed evaluation procedures for assessing the trustworthiness of machine learning models in safety-critical settings.
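The "learning objective with a regularizer that reflects the constraints implicitly encoded in the prior" can be illustrated with a generic MAP-style sketch. This is not the speaker's actual method, only a minimal illustration of the general idea: a data-fit term plus a penalty given by the negative log of a (here, Gaussian) prior over the model parameters, so that staying near the prior mean acts as a soft constraint. All names and the linear model are assumptions made for the example.

```python
import numpy as np

def map_objective(w, X, y, prior_mean, prior_var, lam=1.0):
    """Hypothetical sketch of a MAP-style training objective.

    Loss = data misfit (Gaussian negative log-likelihood for a linear
    model) + regularizer (negative log of a Gaussian prior over the
    parameters w). Constraints encoded in the prior appear as a simple
    penalty for moving away from prior_mean.
    """
    residuals = X @ w - y
    nll = 0.5 * np.sum(residuals ** 2)                     # data-fit term
    neg_log_prior = 0.5 * np.sum((w - prior_mean) ** 2 / prior_var)
    return nll + lam * neg_log_prior
```

In a data-driven, domain-informed setting, `prior_mean` and `prior_var` would come from domain knowledge or auxiliary data rather than being fixed by hand; the same additive structure (likelihood term plus prior-derived regularizer) carries over to neural network parameters.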
Bio:
Tim G. J. Rudner is a Faculty Fellow at New York University’s Center for Data Science and an AI Fellow at Georgetown University’s Center for Security and Emerging Technology. He conducted PhD research on probabilistic machine learning in the Department of Computer Science at the University of Oxford, where he was advised by Yee Whye Teh and Yarin Gal. The goal of his research is to create trustworthy machine learning models by developing methods and theoretical insights that improve the reliability, safety, transparency, and fairness of machine learning systems deployed in safety-critical settings. Tim holds a master’s degree in statistics from the University of Oxford and an undergraduate degree in applied mathematics and economics from Yale University. He was selected as a Rising Star in Generative AI and is also a Qualcomm Innovation Fellow and a Rhodes Scholar.