09:00 - 10:00 |
Roger Koenker: Nonparametric maximum likelihood methods for binary response models with random coefficients ↓ Single index linear models for binary response with random coefficients have been extensively employed in many settings under various parametric specifications of the distribution of the random coefficients. Nonparametric maximum likelihood estimation (NPMLE), as proposed by Kiefer and Wolfowitz (1956), has in contrast received less attention in applied work, due primarily to computational difficulties. We propose a new approach to the computation of NPMLEs for binary response models that significantly increases their computational tractability, thereby facilitating greater flexibility in applications. Our approach, which relies on recent developments involving the geometry of hyperplane arrangements by Rada and Černý (2018), is contrasted with the recently proposed deconvolution method of Gautier and Kitamura (2013). (TCPL 201) |
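A minimal sketch of the Kiefer–Wolfowitz NPMLE for this model, assuming a fixed random grid of candidate coefficients on the unit sphere and the classical EM fixed-point update for the mixing weights (the hyperplane-arrangement machinery of Rada and Černý, which identifies the exact cells over which the likelihood is constant, is not reproduced here):

```python
import numpy as np

def npmle_binary(X, y, n_grid=500, n_iter=200, seed=0):
    """Kiefer-Wolfowitz NPMLE for a binary response model with random
    coefficients: y_i = 1{x_i' beta >= 0}, beta ~ F, F unknown.

    The mixing distribution F is approximated on a random grid of candidate
    coefficients on the unit sphere (the scale of beta is not identified),
    and the grid weights are fit by the EM fixed-point update.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Candidate support points: directions on the unit sphere.
    B = rng.standard_normal((n_grid, d))
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    # p[i, j] = P(y_i | beta_j): deterministic given the sign of x_i' beta_j.
    signs = X @ B.T >= 0                           # n x n_grid
    p = np.where(y[:, None] == 1, signs, ~signs).astype(float)
    w = np.full(n_grid, 1.0 / n_grid)              # uniform initial weights
    for _ in range(n_iter):
        denom = np.maximum(p @ w, 1e-300)          # mixture likelihood per obs.
        w *= (p / denom[:, None]).mean(axis=0)     # EM fixed-point update
    return B, w

# Simulated example: two-point mixing distribution over the coefficients.
rng = np.random.default_rng(1)
n, d = 2000, 2
X = rng.standard_normal((n, d))
true_betas = np.array([[1.0, 0.5], [-0.3, 1.0]])
beta_i = true_betas[rng.integers(0, 2, n)]
y = ((X * beta_i).sum(axis=1) >= 0).astype(int)
B, w = npmle_binary(X, y)
print("highest-weight support points:", B[np.argsort(w)[-3:]])
```

The EM update preserves the simplex constraint on the weights, and the fitted support should concentrate near the (normalized) true coefficient directions; the Rada–Černý geometry replaces the heuristic grid with the exact arrangement cells.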
10:30 - 11:30 |
Iain Johnstone: Minimax optimality of sparse Bayes predictive density estimates ↓ We study predictive density estimation under Kullback–Leibler loss in ℓ0-sparse Gaussian sequence models. We propose proper Bayes predictive density estimates and establish their asymptotic minimaxity in sparse models.
A surprise is the existence of a phase transition in the future-to-past variance ratio r. For r < r0 = (√5 − 1)/4, the natural discrete prior ceases to be asymptotically optimal. Instead, for subcritical r, a 'bi-grid' prior with a central region of reduced grid spacing recovers asymptotic minimaxity. This phenomenon seems to have no analog in the otherwise parallel theory of point estimation of a multivariate normal mean under quadratic loss.
For spike-and-slab priors to have any prospect of minimaxity, we show that the sparse parameter space also needs to be magnitude-constrained. Within a substantial range of magnitudes, spike-and-slab priors can attain asymptotic minimaxity. This is joint work with Gourab Mukherjee. (TCPL 201) |
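A minimal numerical sketch of the objects in play, assuming a single coordinate of the sequence model with past observation x ∼ N(θ, 1) and future y ∼ N(θ, r), and using an illustrative two-point spike-and-slab prior (1 − w)δ₀ + wδ_μ (the bi-grid construction and the minimaxity analysis are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def posterior_predictive(x, y, r, w, mu):
    """Bayes predictive density at y, given past x ~ N(theta, 1) and
    future y ~ N(theta, r), under the two-point prior (1-w)*delta_0 + w*delta_mu."""
    # Posterior weights over the support {0, mu} given x.
    lik0, lik1 = norm.pdf(x, 0.0, 1.0), norm.pdf(x, mu, 1.0)
    post1 = w * lik1 / ((1 - w) * lik0 + w * lik1)
    return (1 - post1) * norm.pdf(y, 0.0, np.sqrt(r)) + post1 * norm.pdf(y, mu, np.sqrt(r))

def kl_risk(theta, r, w, mu, n_mc=200_000, seed=0):
    """Monte Carlo Kullback-Leibler risk E log[p(y | theta) / phat(y | x)]."""
    rng = np.random.default_rng(seed)
    x = theta + rng.standard_normal(n_mc)
    y = theta + np.sqrt(r) * rng.standard_normal(n_mc)
    true_dens = norm.pdf(y, theta, np.sqrt(r))
    return np.mean(np.log(true_dens / posterior_predictive(x, y, r, w, mu)))

# KL risks at the spike and at the slab, for a sub- and a super-critical
# variance ratio r (r0 = (sqrt(5) - 1)/4 is approximately 0.309).
for r in (0.2, 0.8):
    print(r, kl_risk(0.0, r, w=0.05, mu=3.0), kl_risk(3.0, r, w=0.05, mu=3.0))
```

The values of w and μ here are purely illustrative; the talk's results concern calibrated priors over the full n-coordinate model and the exact minimax rate.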
14:20 - 15:05 |
Jinchi Lv: Blessing of Dimensionality: High-Dimensional Nonparametric Inference with Distance Correlation ↓ The curse of dimensionality is a well-known phenomenon in nonparametric statistics, which is echoed by the phenomenon of noise accumulation that is common in high-dimensional statistics. Even in the null case of a standard Gaussian random vector, the empirical linear correlation between the first component and the set of all remaining components can shift toward one as the dimensionality grows, and it is unclear how to correct exactly for such a dimensionality-induced bias in finite samples in a simple, generalizable way. In this talk, I will present a new theoretical study that reveals an opposite phenomenon of the blessing of dimensionality in high-dimensional nonparametric inference with distance correlation, in which a pair of large random matrices are observed. This is joint work with Lan Gao and Qi-Man Shao. (TCPL 201) |
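For reference, the empirical distance correlation underlying the talk can be computed by double-centering pairwise distance matrices; a minimal numpy sketch (the high-dimensional corrections and limiting theory studied in the talk are not included):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(X, Y):
    """Empirical distance correlation between samples X (n x p) and Y (n x q),
    via double-centered pairwise Euclidean distance matrices."""
    A = squareform(pdist(X))    # n x n pairwise distances within X
    B = squareform(pdist(Y))
    # Double centering: subtract row/column means, add back the grand mean.
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean()
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# Two independent large random matrices: with n fixed and p, q growing, the
# plain statistic is pushed upward, the dimensionality-induced bias that the
# talk's theory addresses.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 500)), rng.standard_normal((100, 800))
print(distance_correlation(X, Y))
```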
15:30 - 16:15 |
William Strawderman: On efficient prediction and predictive density estimation for normal and spherically symmetric models ↓ Let X ∼ N_d(θ, σ²I), Y ∼ N_d(cθ, σ²I), and U ∼ N_k(0, σ²I) be independently distributed or, more generally, let (X, Y, U) have a spherically symmetric distribution with density η^{(2d+k)/2} f(η(||x − θ||² + ||u||² + ||y − cθ||²)), with unknown parameters θ ∈ R^d and η > 0, and with known density f(·) and known constant c > 0. Based on observing X = x, U = u, we consider the problem of obtaining a predictive density q̂(·; x, u) for Y, with risk measured by expected Kullback–Leibler loss. A benchmark procedure is the minimum risk equivariant density q̂_MRE, which is generalized Bayes with respect to the prior π(θ, η) = 1/η. In dimension d ≥ 3, we obtain improvements on q̂_MRE and, further, show that the dominance holds simultaneously for all f(·) subject to finite moment and finite risk conditions. We also show that the Bayes predictive density with respect to the 'harmonic prior' π_h(θ, η) = ||θ||^{2−d}/η dominates q̂_MRE simultaneously for all f(·) that are scale mixtures of normals. The results hinge on a duality with a point prediction problem, as well as posterior representations for (θ, η), which are of independent interest. In particular, for d ≥ 3, we obtain point predictors δ(X, U) of Y that dominate the benchmark predictor cX simultaneously for all f(·), and simultaneously for risk functions E^f_{θ,η}[ρ(||Y − δ(X, U)||² + (1 + c²)||U||²)], with ρ(·) increasing and concave on R₊, including the squared error case ρ(t) = t. (TCPL 201) |
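A minimal Monte Carlo sketch of the point prediction part in the Gaussian case, using a James–Stein-type shrinkage predictor as one illustrative candidate rather than the paper's general dominating class:

```python
import numpy as np

def mc_risks(theta, c=0.5, k=5, n_mc=100_000, seed=0):
    """Compare the benchmark predictor cX of Y with the James-Stein-type
    shrinkage predictor c(1 - a||U||^2 / ||X||^2)X in the Gaussian case:
    X ~ N_d(theta, I), U ~ N_k(0, I), Y ~ N_d(c*theta, I), d >= 3."""
    rng = np.random.default_rng(seed)
    d = theta.size
    X = theta + rng.standard_normal((n_mc, d))
    U = rng.standard_normal((n_mc, k))
    Y = c * theta + rng.standard_normal((n_mc, d))
    a = (d - 2) / (k + 2)     # classical shrinkage constant (unknown-scale JS)
    shrink = 1 - a * (U**2).sum(1) / (X**2).sum(1)
    bench = ((Y - c * X)**2).sum(1).mean()
    improved = ((Y - c * shrink[:, None] * X)**2).sum(1).mean()
    return bench, improved

print(mc_risks(np.zeros(10)))       # shrinkage helps most near theta = 0
print(mc_risks(np.full(10, 2.0)))   # gains shrink, but do not reverse
```

Under squared error, E||Y − δ(X, U)||² = dσ² + E||δ(X, U) − cθ||², so any improvement in estimating cθ transfers directly to prediction of Y; this is the flavor of the duality with point prediction invoked in the abstract.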
16:15 - 17:00 |
Malay Ghosh: Bayesian High Dimensional Multivariate Regression with Shrinkage Priors ↓ We consider sparse Bayesian estimation in the classical multivariate linear regression model with p regressors and q response variables. In univariate Bayesian linear regression with a single response y, shrinkage priors that can be expressed as scale mixtures of normal densities are a popular approach for obtaining sparse estimates of the coefficients. In this paper, we extend the use of these priors to the multivariate case to estimate a p × q coefficient matrix B. Our method can be used for any sample size n and any dimension p. Moreover, we show that the posterior distribution can consistently estimate B even when p grows at a nearly exponential rate with the sample size n. Concentration inequalities are proved, and our results are illustrated through simulation and data analysis. (TCPL 201) |
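A minimal sketch of one such scale-mixture-of-normals shrinkage prior, the Bayesian lasso of Park and Casella, applied column-by-column to B (the paper's priors and posterior-contraction theory are more general; λ and σ² are fixed here for brevity, and the residual covariance across the q responses is ignored):

```python
import numpy as np

def gibbs_column(X, y, lam=1.0, sigma2=1.0, n_iter=1000, seed=0):
    """Gibbs sampler for one response column under the Bayesian lasso, a
    scale mixture of normals: beta_j | tau_j^2 ~ N(0, sigma2 * tau_j^2),
    tau_j^2 ~ Exp(lam^2 / 2). Returns posterior draws of beta."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    inv_tau2 = np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | tau, y ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + D_tau^{-1}
        A = XtX + np.diag(inv_tau2)
        L = np.linalg.cholesky(np.linalg.inv(A))
        beta = np.linalg.solve(A, Xty) + np.sqrt(sigma2) * L @ rng.standard_normal(p)
        # 1/tau_j^2 | beta ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu_ig = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        inv_tau2 = rng.wald(mu_ig, lam**2)
        draws[t] = beta
    return draws

# Multivariate regression: run the sampler separately on each column of Y.
rng = np.random.default_rng(1)
n, p, q = 100, 20, 3
X = rng.standard_normal((n, p))
B_true = np.zeros((p, q)); B_true[:3] = rng.standard_normal((3, q))
Y = X @ B_true + rng.standard_normal((n, q))
B_hat = np.column_stack([gibbs_column(X, Y[:, j]).mean(0) for j in range(q)])
print(np.round(B_hat[:5], 2))   # first rows should track B_true, rest near 0
```

Treating the columns independently is the simplest instance of the extension; coupling them through a shared covariance and global scales is where the multivariate theory of the talk comes in.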