
Robust Dictionary Learning by Error Source Decomposition (ICCV 2013, paper 354)


Source: pdf

Author: Zhuoyuan Chen, Ying Wu

Abstract: Sparsity models have recently shown great promise in many vision tasks. Using a learned dictionary in sparsity models can in general outperform predefined bases on clean data. In practice, both training and testing data may be corrupted and contain noise and outliers. Although recent studies attempted to cope with corrupted data and achieved encouraging results in the testing phase, how to handle corruption in the training phase still remains a very difficult problem. In contrast to most existing methods that learn the dictionary from clean data, this paper is targeted at handling corruptions and outliers in training data for dictionary learning. We propose a general method to decompose the reconstructive residual into two components: a non-sparse component for small universal noise and a sparse component for large outliers. In addition, further analysis reveals the connection between our approach and the “partial” dictionary learning approach, which updates only part of the prototypes (the informative codewords) while keeping the rest (the noisy codewords) fixed. Experiments on synthetic data as well as real applications have shown satisfactory performance of this new robust dictionary learning approach.
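The abstract's key idea, splitting the reconstruction residual into a dense component for small noise and a sparse component for large outliers, can be sketched in a few lines. Below is a minimal alternating-minimization sketch in Python, assuming a model X ≈ D A + E where an l1 penalty pushes large outliers into E while small dense noise is absorbed by the squared data term. This illustrates the general decomposition idea only, not the authors' exact algorithm; the function name robust_dict_learn and the parameters lam_a, lam_e are hypothetical choices of ours.

```python
# Minimal sketch (NOT the paper's exact algorithm): decompose the residual as
#   X ~ D A + E,  min_{D,A,E} ||X - D A - E||_F^2 + lam_a*||A||_1 + lam_e*||E||_1
# Small dense noise is absorbed by the Frobenius data term; large outliers land in E.
import numpy as np

def soft(Z, t):
    # Element-wise soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def robust_dict_learn(X, n_atoms, lam_a=0.1, lam_e=0.5, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)      # unit-norm atoms
    A = np.zeros((n_atoms, n))
    E = np.zeros((d, n))
    for _ in range(n_iter):
        # 1) Codes A: a few ISTA steps on ||(X - E) - D A||_F^2 + lam_a*||A||_1.
        L = np.linalg.norm(D, 2) ** 2                  # squared spectral norm of D
        for _ in range(10):
            grad = D.T @ (D @ A - (X - E))
            A = soft(A - grad / L, lam_a / (2.0 * L))
        # 2) Outliers E: closed-form soft-threshold of the residual.
        E = soft(X - D @ A, lam_e / 2.0)
        # 3) Dictionary D: least squares on the outlier-free data, then renormalize.
        G = A @ A.T + 1e-8 * np.eye(n_atoms)
        D = (X - E) @ A.T @ np.linalg.inv(G)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
    return D, A, E

# Toy usage: random data; on genuinely corrupted patches, E should absorb the outliers.
X = np.random.default_rng(1).standard_normal((16, 200))
D, A, E = robust_dict_learn(X, n_atoms=32)
```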


References

[1] D. M. Bradley and J. A. Bagnell. Differentiable sparse coding. NIPS, 2008.

[2] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. CVPR, 2005.

[3] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. TIT, 52(2):489–509, 2006.

[4] D. Donoho. Compressed sensing. TIT, 52:1289–1306, 2006.

[5] A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. SIGGRAPH, 2001.

[6] M. Elad and M. Aharon. Image denoising via learned dictionaries and sparse representation. CVPR, 2006.

[7] F. Estrada, D. Fleet, and A. Jepson. Stochastic image denoising. BMVC, 2009.

[8] P. J. Garrigues and B. A. Olshausen. Group sparse coding with a Laplacian scale mixture prior. NIPS, 2010.

[9] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. PAMI, 23(6):643–660, 2001.

[10] E. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for l1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107–1130, 2008.

[11] P. J. Huber and E. M. Ronchetti. Robust Statistics. John Wiley and Sons Inc, 2009.

[12] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. NIPS, 2010.

[13] Z. Jiang, Z. Lin, and L. S. Davis. Learning a discriminative dictionary for sparse coding via label consistent K-SVD. CVPR, 2011.

[14] A. B. Lee, B. Nadler, and L. Wasserman. Treelets: an adaptive multiscale basis for sparse unordered data. Annals of Applied Statistics, 2(2):435–471, 2008.

[15] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2006.

[16] C. Lu, J. Shi, and J. Jia. Online robust dictionary learning. CVPR, 2013.

[17] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. JMLR, 11:19–60, 2010.

[18] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Discriminative learned dictionaries for local image analysis. CVPR, 2008.

[19] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative sparse image models for class-specific edge detection and image interpretation. ECCV, 2008.

[20] D. Mumford and J. Shah. Optimal approximation of piecewise smooth functions and associated variational problems. CPAM, 42:577–685, 1989.

[21] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.

[22] G. Peyré. Sparse modeling of textures. Journal of Mathematical Imaging and Vision, 34(1):17–31, 2009.

[23] I. Ramirez, P. Sprechmann, and G. Sapiro. Classification and clustering via dictionary learning with structured incoherence and shared features. CVPR, 2010.

[24] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.

[25] I. W. Selesnick. The estimation of Laplace random vectors in additive white Gaussian noise. TSP, 56(8):3482–3496, 2008.

[26] D. Spielman, H. Wang, and J. Wright. Exact recovery of sparsely-used dictionaries. COLT, 2012.

[27] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.

[28] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6):1031–1044, 2010.

[29] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. PAMI, 31(2):210–227, 2009.

[30] M. Yang, L. Zhang, X. Feng, and D. Zhang. Fisher discrimination dictionary learning for sparse representation. ICCV, 2011.

[31] M. Yang, L. Zhang, J. Yang, and D. Zhang. Robust sparse coding for face recognition. CVPR, 2011.

[32] C. Zhao, X. Wang, and W.-K. Cham. Background subtraction via robust dictionary learning. EURASIP J. Image and Video Processing, 2011.

[33] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. NIPS, 2009.