TITLE: Adaptive inference after model selection

SPEAKER: E. Labner, Asst. Prof., NC State

ABSTRACT:

Penalized maximum likelihood methods that perform automatic variable selection have been developed, studied, and deployed in almost every area of statistical research. A prominent example is the LASSO (Tibshirani, 1996) and its numerous variants. It is now well known, however, that these estimators are nonregular and consequently have limiting distributions that can be highly sensitive to small perturbations of the underlying generative model, even in the fixed-p framework. Hence, the usual asymptotic methods for inference, such as the bootstrap and series approximations, often perform poorly in small samples and require modification. Here, we develop locally asymptotically consistent confidence intervals for regression coefficients when estimation is performed using the Adaptive LASSO (Zou, 2006) in the fixed-p framework. We construct the confidence intervals by sandwiching the nonregular functional of interest between two smooth, data-driven upper and lower bounds and then approximating the distribution of the bounds using the bootstrap. We leverage the smoothness of the bounds to obtain consistent inference for the nonregular functional under both fixed and local alternatives. The bounds are adaptive to the amount of underlying nonregularity in the sense that they deliver asymptotically exact coverage whenever the underlying generative model is such that the Adaptive LASSO estimators are consistent and asymptotically normal, and conservative coverage otherwise. The resulting confidence intervals possess a tightness property among all regular bounds. Although we focus on the Adaptive LASSO, our approach generalizes to other penalized methods, including the elastic net and SCAD.
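
ILLUSTRATIVE SKETCH (not from the talk):

For a concrete picture of the setting, the Python sketch below fits an Adaptive LASSO via the standard column-reweighting reduction to a plain LASSO (Zou, 2006) and then forms a naive residual-bootstrap percentile interval, i.e., exactly the kind of unmodified bootstrap the abstract says can perform poorly under nonregularity. The talk's smooth bound-sandwich construction is not implemented here, and the names adaptive_lasso, lam, gamma, and n_boot are illustrative choices, not quantities from the paper.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    def adaptive_lasso(X, y, lam=0.1, gamma=1.0):
        # Pilot estimate: OLS is root-n consistent in the fixed-p setting.
        b_init = LinearRegression(fit_intercept=False).fit(X, y).coef_
        w = np.abs(b_init) ** gamma          # per-coefficient adaptive weights
        # A weighted L1 penalty equals a plain LASSO on columns scaled by w.
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X * w, y)
        return fit.coef_ * w                 # undo the scaling on the coefficients

    # Simulated data with a mix of strong and null coefficients.
    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = rng.standard_normal((n, p))
    beta = np.array([2.0, 0.0, 0.0, 1.5, 0.0])
    y = X @ beta + rng.standard_normal(n)
    b_hat = adaptive_lasso(X, y)

    # Naive residual-bootstrap percentile interval for the first coefficient.
    resid = y - X @ b_hat
    resid -= resid.mean()                    # center residuals before resampling
    n_boot = 1000
    boots = np.empty(n_boot)
    for i in range(n_boot):
        y_star = X @ b_hat + rng.choice(resid, size=n, replace=True)
        boots[i] = adaptive_lasso(X, y_star)[0]
    lo, hi = np.quantile(boots, [0.025, 0.975])
    print(f"naive 95% bootstrap CI for beta_1: [{lo:.3f}, {hi:.3f}]")

The reweighting trick works because scaling column j by w_j = |b_init_j|^gamma turns the weighted L1 penalty into an ordinary one, so a standard LASSO solver suffices; rescaling the fitted coefficients recovers the Adaptive LASSO solution. Near a null coefficient the bootstrap distribution above is exactly where nonregularity bites, which motivates the smooth, data-driven bounds developed in the talk.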