
126 nips-2008-Localized Sliced Inverse Regression


Source: pdf

Author: Qiang Wu, Sayan Mukherjee, Feng Liang

Abstract: We develop localized sliced inverse regression for supervised dimension reduction. It has the advantages of preventing degeneracy, increasing estimation accuracy, and automatically discovering subclasses in classification problems. A semisupervised version is proposed to make use of unlabeled data. The utility is illustrated on simulated as well as real data sets.
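Since the abstract only names the method, the following is a minimal sketch of classical sliced inverse regression (SIR) [10], the estimator that localized SIR builds on; the function name, slicing scheme, and eigenvalue floor are illustrative assumptions, and the paper's localization and semisupervised extensions are not reproduced here.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=2):
    """Classical SIR: whiten X, slice y, eigendecompose the covariance of slice means."""
    n, p = X.shape
    # Center and whiten the predictors.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    Z = Xc @ inv_sqrt
    # Partition the observations into slices by the sorted response.
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted covariance of the within-slice means of Z.
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original predictor scale.
    w, v = np.linalg.eigh(M)
    top = v[:, np.argsort(w)[::-1][:n_directions]]
    return inv_sqrt @ top  # columns estimate the e.d.r. directions

# Toy check: y depends on X only through its first coordinate.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = X[:, 0] + 0.1 * rng.standard_normal(500)
print(sir_directions(X, y, n_slices=10, n_directions=1).round(2))
```

On data generated so that y depends on a single linear projection of X, the leading estimated direction should align, up to sign and scale, with that projection.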


Reference text

[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.

[2] R. Cook and L. Ni. Using intra-slice covariances for improved estimation of the central subspace in regression. Biometrika, 93(1):65–74, 2006.

[3] R. Cook and S. Weisberg. Discussion of Li (1991). J. Amer. Statist. Assoc., 86:328–332, 1991.

[4] R. Cook and X. Yin. Dimension reduction and visualization in discriminant analysis (with discussion). Aust. N. Z. J. Stat., 43(2):147–199, 2001.

[5] D. Donoho and C. Grimes. Hessian eigenmaps: new locally linear embedding techniques for high-dimensional data. PNAS, 100:5591–5596, 2003.

[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. Annals of Statistics, to appear, 2008.

[7] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 451–458. MIT Press, Cambridge, MA, 2006.

[8] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. Mesirov, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. Bloomfield, and E. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531–537, 1999.

[9] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):607–616, 1996.

[10] K. Li. Sliced inverse regression for dimension reduction (with discussion). J. Amer. Statist. Assoc., 86:316–342, 1991.

[11] K. C. Li. On principal Hessian directions for data visualization and dimension reduction: another application of Stein's lemma. J. Amer. Statist. Assoc., 87:1025–1039, 1992.

[12] K. C. Li. High dimensional data analysis via the SIR/PHD approach, 2000.

[13] J. Nilsson, F. Sha, and M. I. Jordan. Regression on manifolds using kernel dimension reduction. In Proc. of ICML, 2007.

[14] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.

[15] M. Sugiyama. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. Journal of Machine Learning Research, 8:1027–1061, 2007.

[16] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.

[17] Q. Wu, F. Liang, and S. Mukherjee. Regularized sliced inverse regression for kernel models. Technical report, ISDS Discussion Paper, Duke University, 2007.

[18] Y. Xia, H. Tong, W. Li, and L.-X. Zhu. An adaptive estimation of dimension reduction space. J. R. Statist. Soc. B, 64(3):363–410, 2002.

[19] G. Young. Maximum likelihood estimation and factor analysis. Psychometrika, 6:49–53, 1941.

[20] W. Zhong, P. Zeng, P. Ma, J. S. Liu, and Y. Zhu. RSIR: regularized sliced inverse regression for motif discovery. Bioinformatics, 21(22):4169–4175, 2005.