Biometrika. 2016 Dec;103(4):985-991. doi: 10.1093/biomet/asw042. Epub 2016 Oct 27.

Fast sampling with Gaussian scale-mixture priors in high-dimensional regression


Anirban Bhattacharya et al. Biometrika. 2016 Dec.

Abstract

We propose an efficient way to sample from a class of structured multivariate Gaussian distributions. The proposed algorithm only requires matrix multiplications and linear system solutions. Its computational complexity grows linearly with the dimension, unlike existing algorithms that rely on Cholesky factorizations with cubic complexity. The algorithm is broadly applicable in settings where Gaussian scale mixture priors are used on high-dimensional parameters. Its effectiveness is illustrated through a high-dimensional regression problem with a horseshoe prior on the regression coefficients. Other potential applications are outlined.

Keywords: Confidence interval; Gaussian scale mixture; Global-local prior; Shrinkage; Sparsity.
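The structured Gaussians in question have the conditional-posterior form N(Σ Φᵀ α, Σ) with Σ = (Φᵀ Φ + D⁻¹)⁻¹, where Φ is an n × p design matrix and D is the diagonal matrix of prior scales. The following is a minimal NumPy sketch of a sampler in the spirit of the abstract: it avoids factorizing the p × p matrix Σ⁻¹ and instead solves one n × n linear system, so the per-sample cost grows linearly in p for fixed n. The function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def sample_gaussian(Phi, D_diag, alpha, rng):
    """Draw theta ~ N(Sigma @ Phi.T @ alpha, Sigma),
    where Sigma = inv(Phi.T @ Phi + diag(1 / D_diag)).

    Phi: (n, p) design matrix; D_diag: (p,) positive prior scales;
    alpha: (n,) working response; rng: numpy Generator.
    """
    n, p = Phi.shape
    # u ~ N(0, D), delta ~ N(0, I_n)
    u = np.sqrt(D_diag) * rng.standard_normal(p)
    delta = rng.standard_normal(n)
    # v ~ N(0, Phi D Phi^T + I_n)
    v = Phi @ u + delta
    # Solve the n x n system (Phi D Phi^T + I_n) w = alpha - v;
    # (Phi * D_diag) scales column j of Phi by D_j.
    M = (Phi * D_diag) @ Phi.T + np.eye(n)
    w = np.linalg.solve(M, alpha - v)
    # theta = u + D Phi^T w has the target mean and covariance
    return u + D_diag * (Phi.T @ w)
```

The dominant costs are the matrix products, O(n²p), and the n × n solve, O(n³), versus O(p³) for a Cholesky factorization of Σ⁻¹; the savings are large in the p ≫ n regime the abstract targets.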


Figures

Fig. 1
Boxplots of ℓ1, ℓ2 and prediction error across 100 simulation replicates. HSme and HSm denote the posterior pointwise median and mean, respectively, for the horseshoe prior. The true β0 is 5-sparse with nonzero entries ±{1.5, 1.75, 2, 2.25, 2.5}. Top row: Σ = Ip (independent). Bottom row: Σjj = 1, Σjj′ = 0.5 for j ≠ j′ (compound symmetry).
Fig. 2
Same setting as in Fig. 1. The true β0 is 5-sparse with nonzero entries ±{0.75, 1, 1.25, 1.5, 1.75}.

