NIPS 2013, paper 130
Author: Daniel Bartz, Klaus-Robert Müller
Abstract: Analytic shrinkage is a statistical technique that offers a fast alternative to cross-validation for the regularization of covariance matrices and has appealing consistency properties. We show that the proof of consistency requires bounds on the growth rates of eigenvalues and their dispersion, which are often violated in data. We prove consistency under assumptions which do not restrict the covariance structure and therefore better match real world data. In addition, we propose an extension of analytic shrinkage – orthogonal complement shrinkage – which adapts to the covariance structure. Finally, we demonstrate the superior performance of our novel approach on data from the domains of finance, spoken letter and optical character recognition, and neuroscience.
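To make the abstract concrete: analytic shrinkage in the sense of Ledoit and Wolf [3] regularizes the sample covariance by pulling it toward a scaled identity target, with a shrinkage intensity computed in closed form rather than by cross-validation. The following is a minimal pure-Python sketch of that classic estimator (assuming zero-mean data and using the 1/p-scaled Frobenius norm of [3]); it is not the paper's orthogonal complement extension, and the function name is illustrative only.

```python
# Minimal sketch of Ledoit-Wolf analytic shrinkage toward mu*I.
# Assumes X is a list of n centered samples, each of dimension p.
# This illustrates reference [3], not the paper's adapted estimator.

def frob2(A):
    # Squared Frobenius norm scaled by 1/p, as in Ledoit & Wolf (2004).
    p = len(A)
    return sum(A[i][j] ** 2 for i in range(p) for j in range(p)) / p

def analytic_shrinkage(X):
    """Return (Sigma, lam): the shrunk covariance (1-lam)*S + lam*mu*I
    and the analytic shrinkage intensity lam."""
    n, p = len(X), len(X[0])
    # Sample covariance S = (1/n) * sum_k x_k x_k^T (zero-mean convention).
    S = [[sum(X[k][i] * X[k][j] for k in range(n)) / n for j in range(p)]
         for i in range(p)]
    mu = sum(S[i][i] for i in range(p)) / p            # target scale tr(S)/p
    # d2: dispersion of S around the target mu*I.
    d2 = frob2([[S[i][j] - (mu if i == j else 0.0) for j in range(p)]
                for i in range(p)])
    # b2: estimated variance of the sample covariance entries,
    # b2 = (1/n^2) * sum_k ||x_k x_k^T - S||^2, capped at d2.
    b2 = sum(frob2([[X[k][i] * X[k][j] - S[i][j] for j in range(p)]
                    for i in range(p)]) for k in range(n)) / n ** 2
    lam = min(b2, d2) / d2 if d2 > 0 else 1.0
    Sigma = [[(1 - lam) * S[i][j] + (lam * mu if i == j else 0.0)
              for j in range(p)] for i in range(p)]
    return Sigma, lam
```

The intensity lam grows with the estimation noise (b2) relative to the signal in the eigenvalue dispersion (d2); the paper's contribution concerns exactly the regime where that dispersion violates the growth-rate bounds assumed in the original consistency proof.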
[1] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2008.
[2] Charles Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. 3rd Berkeley Sympos. Math. Statist. Probability, volume 1, pages 197–206, 1956.
[3] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.
[4] Jerome H. Friedman. Regularized discriminant analysis. Journal of the American Statistical Association, 84(405):165–175, 1989.
[5] Juliane Schäfer and Korbinian Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4(1):1175–1189, 2005.
[6] Boaz Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation approach. The Annals of Statistics, 36(6):2791–2817, 2008.
[7] Harry Markowitz. Portfolio selection. Journal of Finance, 7(1):77–91, March 1952.
[8] Daniel Bartz, Kerr Hatrick, Christian W. Hesse, Klaus-Robert Müller, and Steven Lemm. Directional Variance Adjustment: Bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization. PLoS ONE, 8(7):e67503, July 2013.
[9] Olivier Ledoit and Michael Wolf. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance, 10:603–621, 2003.
[10] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, May 1994.
[11] Mark A Fanty and Ronald Cole. Spoken letter recognition. In Advances in Neural Information Processing Systems, volume 3, pages 220–226, 1990.
[12] Kevin Bache and Moshe Lichman. UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences, 2013.
[13] Anne Kerstin Porbadnigk, Jan-Niklas Antons, Benjamin Blankertz, Matthias S. Treder, Robert Schleicher, Sebastian Möller, and Gabriel Curio. Using ERPs for assessing the (sub)conscious perception of noise. In 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 2690–2693, 2010.
[14] Anne Kerstin Porbadnigk, Matthias S. Treder, Benjamin Blankertz, Jan-Niklas Antons, Robert Schleicher, Sebastian Möller, Gabriel Curio, and Klaus-Robert Müller. Single-trial analysis of the neural correlates of speech quality perception. Journal of Neural Engineering, 10(5):056003, 2013.