Sequential Co-Sparse Factor Regression
- PMID: 30337797
- PMCID: PMC6190918
- DOI: 10.1080/10618600.2017.1340891
Abstract
In multivariate regression models, a sparse singular value decomposition of the regression component matrix is appealing for reducing dimensionality and facilitating interpretation. However, the recovery of such a decomposition remains very challenging, largely due to the simultaneous presence of orthogonality constraints and co-sparsity regularization. By delving into the underlying statistical data generation mechanism, we reformulate the problem as a supervised co-sparse factor analysis, and develop an efficient computational procedure, named sequential factor extraction via co-sparse unit-rank estimation (SeCURE), that completely bypasses the orthogonality requirements. At each step, the problem reduces to a sparse multivariate regression with a unit-rank constraint. Conveniently, each sequentially extracted sparse and unit-rank coefficient matrix automatically leads to co-sparsity in its pair of singular vectors. Each latent factor is thus a sparse linear combination of the predictors and may influence only a subset of responses. The proposed algorithm is guaranteed to converge, and it ensures efficient computation even with incomplete data or when exact orthogonality is desired. Our estimators enjoy the oracle properties asymptotically; a non-asymptotic error bound further reveals some interesting finite-sample behaviors of the estimators. The efficacy of SeCURE is demonstrated by simulation studies and two applications in genetics.
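The sequential scheme described above can be illustrated with a minimal sketch: each step fits one sparse unit-rank term C = u vᵀ for the model Y ≈ XC by alternating sparse updates of the left and right vectors, then deflates the responses and repeats. This is not the authors' implementation; the function names, penalty parameters, and the ISTA-style proximal-gradient update for u are illustrative assumptions standing in for the paper's co-sparse unit-rank estimation step.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def cosparse_unit_rank(X, Y, lam_u=0.05, lam_v=0.05, n_iter=200):
    """Fit one sparse unit-rank term C = u v^T for Y ~ X C by alternating
    a thresholded regression for v with an ISTA-style step for u.
    (Illustrative surrogate for the paper's unit-rank estimation step.)"""
    # Initialize (u, v) from the leading singular pair of the least-squares fit
    U, s, Vt = np.linalg.svd(np.linalg.pinv(X) @ Y, full_matrices=False)
    u, v = s[0] * U[:, 0], Vt[0].copy()
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)     # 1 / Lipschitz constant
    for _ in range(n_iter):
        # v-update: regress each response on the factor f = X u, then threshold
        f = X @ u
        v = soft_threshold(Y.T @ f / max(f @ f, 1e-12), lam_v)
        nv = np.linalg.norm(v)
        if nv == 0:
            return np.zeros_like(u), v           # factor vanished entirely
        v /= nv                                  # let u carry the scale
        # u-update: one proximal-gradient step on ||Y v - X u||^2 + lam||u||_1
        grad = X.T @ (X @ u - Y @ v)
        u = soft_threshold(u - step * grad, step * lam_u)
    return u, v

def secure_sketch(X, Y, n_factors=2, **kw):
    """Sequential extraction with deflation: after each unit-rank fit,
    subtract the explained part of Y and extract the next factor."""
    R, terms = Y.copy(), []
    for _ in range(n_factors):
        u, v = cosparse_unit_rank(X, R, **kw)
        if not np.any(u):
            break
        terms.append((u, v))
        R = R - X @ np.outer(u, v)               # deflate the responses
    return terms
```

Because each term is fit on the deflated residual, no explicit orthogonality constraint appears in the subproblems, mirroring how SeCURE bypasses the orthogonality requirements.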
Keywords: multivariate analysis; reduced-rank regression; regularization; singular value decomposition.