NIPS 2009, Paper 167 (nips2009-167): abstract and references
Title: Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Author: Mingyuan Zhou, Haojun Chen, Lu Ren, Guillermo Sapiro, Lawrence Carin, John W. Paisley
Abstract: Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
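To make the model family concrete, the following is a minimal generative sketch (Python/NumPy) of truncated beta-Bernoulli dictionary learning for image patches, the kind of beta-process construction the abstract refers to. All variable names and hyperparameters here (K, a0, b0, sigma, the Gaussian priors) are illustrative assumptions for this sketch rather than the paper's exact specification, and no inference step (Gibbs or variational) is shown.

# Generative sketch: truncated beta-Bernoulli (beta process) dictionary
# learning for image patches. Hyperparameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

P = 64            # patch dimension (e.g., vectorized 8x8 patches)
N = 500           # number of patches
K = 256           # truncation level on the dictionary size
a0, b0 = 1.0, 1.0 # beta hyperparameters

# Finite approximation to the beta process: per-atom usage probabilities.
pi = rng.beta(a0 / K, b0 * (K - 1) / K, size=K)

# Dictionary atoms drawn from an isotropic Gaussian prior.
D = rng.normal(0.0, 1.0 / np.sqrt(P), size=(P, K))

# Binary indicators z_ik ~ Bernoulli(pi_k) and Gaussian weights s_ik;
# the effective sparse code for patch i is z_i * s_i.
Z = rng.random((N, K)) < pi
S = rng.normal(0.0, 1.0, size=(N, K))

# Noisy patches: x_i = D (z_i * s_i) + eps_i, eps_i ~ N(0, sigma^2 I).
sigma = 0.05
X = (Z * S) @ D.T + sigma * rng.normal(size=(N, P))

# Under this approximation E[sum_k pi_k] is roughly a0 / b0, so atoms
# with pi_k near zero are effectively pruned; this is how the beta
# process infers an appropriate dictionary size.
print("mean atoms used per patch:", Z.sum(axis=1).mean())

In the full model, posterior inference over pi, D, Z, S (and the noise variance) replaces the fixed draws above; the abstract notes that both Gibbs sampling and variational Bayes are used for this.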
[1] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[2] M. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 2001.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58, 1996.
[4] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37, 1997.
[5] M. Aharon, M. Elad, and A. M. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Processing, 54, 2006.
[6] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Processing, 15, 2006.
[7] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Image Processing, 17, 2008.
[8] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proc. International Conference on Machine Learning, 2009.
[9] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. In Proc. Neural Information Processing Systems, 2008.
[10] M. Ranzato, C. Poultney, S. Chopra, and Y. Lecun. Efficient learning of sparse representations with an energy-based model. In Proc. Neural Information Processing Systems, 2006.
[11] E. Candès and T. Tao. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Information Theory, 52, 2006.
[12] J.M. Duarte-Carvajalino and G. Sapiro. Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization. IMA Preprint Series 2211, 2008.
[13] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Trans. Pattern Analysis Machine Intelligence, 31, 2009.
[14] S. Ji, Y. Xue, and L. Carin. Bayesian compressive sensing. IEEE Trans. Signal Processing, 56, 2008.
[15] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proc. International Conference on Machine Learning, 2007.
[16] R. Thibaux and M.I. Jordan. Hierarchical beta processes and the Indian buffet process. In Proc. International Conference on Artificial Intelligence and Statistics, 2007.
[17] J. Paisley and L. Carin. Nonparametric factor analysis with beta process priors. In Proc. International Conference on Machine Learning, 2009.
[18] T. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1, 1973.
[19] A. Rodriguez and D.B. Dunson. Nonparametric Bayesian models through probit stick-breaking processes. Univ. California Santa Cruz Technical Report, 2009.
[20] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components analysis. In Proc. International Conference on Independent Component Analysis and Signal Separation, 2007.
[21] P. Rai and H. Daumé III. The infinite hierarchical factor regression model. In Proc. Neural Information Processing Systems, 2008.
[22] M.J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[23] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18, 2006.
[24] R.G. Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24, 2007.
[25] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4, 1994.