Title:
Uncertainty quantification for black-box models with conditional guarantees
Abstract:
A central problem in uncertainty quantification is designing methods that are both distribution-free and individualized to the test sample at hand. Prior work has shown that it is impossible to achieve finite-sample conditional validity without modelling assumptions. Thus, canonical methods in the conformal inference literature typically issue only marginal guarantees that hold over a random draw of the test covariates. In this talk, I will outline a framework that bridges this gap by recasting the conditional objective as a set of robustness criteria under covariate shifts. By modifying the target class of covariate shifts, I will define a spectrum of problems that range between marginal and exact instance-wise validity, and I will give methods that provide precise guarantees between these extremes. This framework has broad applications, and I will show how it can be used to construct prediction sets around the outputs of black-box regression models and to filter out false information from the responses of large language models. This talk is based on joint work with John Cherian and Emmanuel Candès.
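For readers who want the formal statement behind the spectrum described above, the LaTeX sketch below records the three coverage criteria in standard conformal-prediction notation; the symbols (test pair (X_{n+1}, Y_{n+1}), prediction set \hat{C}, shift class \mathcal{F}) and the exact form of the intermediate criterion are an assumed gloss on the framework, not quoted from the talk materials.

% Marginal validity: coverage holds on average over a random test covariate.
\[ \mathbb{P}\bigl( Y_{n+1} \in \hat{C}(X_{n+1}) \bigr) \ge 1 - \alpha. \]
% Exact conditional (instance-wise) validity: impossible to guarantee
% distribution-free in finite samples.
\[ \mathbb{P}\bigl( Y_{n+1} \in \hat{C}(X_{n+1}) \mid X_{n+1} = x \bigr) \ge 1 - \alpha \quad \text{for all } x. \]
% Intermediate target: coverage under every covariate shift f taken from a
% user-chosen class \mathcal{F} of reweightings of the covariate distribution.
\[ \mathbb{E}\Bigl[ f(X_{n+1}) \bigl( \mathbf{1}\{ Y_{n+1} \in \hat{C}(X_{n+1}) \} - (1 - \alpha) \bigr) \Bigr] \ge 0 \quad \text{for all } f \in \mathcal{F}. \]

Taking \mathcal{F} to be the constant functions recovers the marginal guarantee, while allowing \mathcal{F} to contain all nonnegative functions recovers exact instance-wise validity; intermediate choices of \mathcal{F} interpolate between the two extremes.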
Bio:
Isaac Gibbs is a postdoctoral researcher at the University of California, Berkeley, where he is advised by Ryan Tibshirani. He received his Ph.D. in Statistics from Stanford University, advised by Emmanuel Candès, and his B.Sc. in Mathematics and Computer Science from McGill University. His research focuses on topics related to predictive inference, distribution-free uncertainty quantification, online learning, and high-dimensional statistics.