|Title||Adaptive inference after model selection|
|Year of Publication||2011|
|Authors||Laber, Eric B.|
Penalized maximum likelihood methods that perform automatic variable selection have been developed, studied, and deployed in nearly every area of statistical research. Prominent examples include SCAD and the LASSO (Tibshirani 1996) with its numerous variants. However, it is now well known that estimators born of such penalties are nonregular and consequently have limiting distributions that can be highly sensitive to perturbations of the underlying generative model. Thus, the usual asymptotic methods for inference, such as the bootstrap and series approximations, are invalid without modification and may perform poorly in small samples. In this talk, we develop locally asymptotically consistent confidence intervals for parameters indexing a linear regression model when estimation is done using the LASSO. Confidence intervals are constructed by sandwiching the nonregular functional of interest between smooth, data-driven upper and lower bounds and then approximating the distribution of the bounds using standard techniques. We leverage the smoothness of the bounds to obtain consistent inference for the nonregular functional under both fixed and local alternatives. Moreover, the bounds are adaptive to the amount of underlying nonregularity in the sense that they deliver asymptotically exact coverage whenever the underlying generative model is such that LASSO estimators are consistent and asymptotically normal, and conservative coverage otherwise. The bounds used to construct the confidence interval are shown to be the smallest (tightest) among all regular bounds. Although this talk focuses on the case of the LASSO, the approach generalizes to other penalized methods, including the adaptive LASSO and SCAD.
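The nonregularity that motivates the talk can be seen in a minimal simulation. The sketch below is an illustration only, not the bound construction described in the abstract: it uses the one-dimensional soft-thresholding estimator (the LASSO in a normal-means problem) at the nonregular point, where the true parameter is zero. The naive bootstrap distribution of the estimator mixes a point mass at zero with a continuous part, so it is not normal, and standard interval methods misbehave. All parameter values (sample size, threshold, number of bootstrap replicates) are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """One-dimensional LASSO estimate: soft-thresholding at level lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

n, B = 200, 2000
lam = 1.0 / np.sqrt(n)            # threshold shrinking at the root-n rate
x = rng.normal(0.0, 1.0, size=n)  # true mean 0: the nonregular point

theta_hat = soft_threshold(x.mean(), lam)

# Naive nonparametric bootstrap of the soft-thresholded mean.
boot = np.array([
    soft_threshold(rng.choice(x, size=n, replace=True).mean(), lam)
    for _ in range(B)
])

# A large share of replicates land exactly at zero while the rest are
# continuously distributed, so the bootstrap law is a point-mass mixture
# rather than the normal limit that regular estimators would have.
print(f"share of bootstrap replicates exactly zero: {np.mean(boot == 0.0):.2f}")
```

Repeating the experiment with the true mean well away from zero makes the point mass vanish and restores approximate normality, which mirrors the adaptivity claim in the abstract: exact coverage in the regular regime, conservatism near the nonregular point.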