nips2013-321-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Pablo Sprechmann, Roee Litman, Tal Ben Yakar, Alexander M. Bronstein, Guillermo Sapiro
Abstract: In this paper, we propose a new computationally efficient framework for learning sparse models. We formulate a unified approach that contains as particular cases models promoting sparsity priors of both the synthesis and the analysis type, as well as mixtures thereof. The supervised training of the proposed model is formulated as a bilevel optimization problem, in which the operators are optimized to achieve the best possible performance on a specific task, e.g., reconstruction or classification. By restricting the operators to be shift invariant, our approach can be thought of as a way of learning sparsity-promoting convolutional operators. Leveraging recent ideas on fast trainable regressors designed to approximate exact sparse codes, we propose a way of constructing feed-forward networks capable of approximating the learned models at a fraction of the computational cost of exact solvers. In the shift-invariant case, this leads to a principled way of constructing a form of task-specific convolutional networks. We illustrate the proposed models on several experiments in music analysis and image processing applications.
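For concreteness, the following minimal NumPy sketch illustrates the kind of fast trainable regressor the abstract alludes to: an unrolled ISTA ("LISTA") encoder in the spirit of Gregor and LeCun [11]. The dense parametrization, the variable names (W, S, theta), and the fixed depth are illustrative assumptions for exposition only; they are not the exact model proposed in the paper, which also covers analysis and mixed priors and convolutional (shift-invariant) operators.

    import numpy as np

    def soft_threshold(v, theta):
        # Elementwise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def lista_encode(x, W, S, theta, n_layers=4):
        # Unrolled ISTA ("LISTA") forward pass; W (k x m), S (k x k), and theta
        # (scalar or length-k vector) are learned parameters. Illustrative sketch,
        # not the paper's exact parametrization.
        b = W @ x                                     # filter the input once
        z = soft_threshold(b, theta)                  # first layer
        for _ in range(n_layers - 1):
            z = soft_threshold(b + S @ z, theta)      # one layer = one truncated iteration
        return z

Initializing W = D.T / L and S = np.eye(k) - (D.T @ D) / L, where D is an m x k synthesis dictionary and L bounds the largest eigenvalue of D.T @ D, makes each layer an exact ISTA step for min_z 0.5*||x - D z||^2 + theta*L*||z||_1; training (W, S, theta) end to end on a task loss yields the kind of feed-forward approximation described above.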
[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Sig. Proc., 54(11):4311–4322, 2006.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2:183–202, March 2009.
[3] E. Benetos and S. Dixon. Multiple-instrument polyphonic music transcription using a convolutive probabilistic model. In Sound and Music Computing Conference, pages 19–24, 2011.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
[5] Y. Chen, T. Pock, and H. Bischof. Learning ℓ1-based analysis and synthesis sparsity priors using bi-level optimization. In NIPS Workshop, 2012.
[6] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi. Blind deconvolution of images using optimal sparse representations. IEEE Trans. Im. Proc., 14(6):726–736, 2005.
[7] J. C. Brown. Calculation of a constant Q spectral transform. J. Acoust. Soc. Am., 89(1):425–434, 1991.
[8] B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. Annals of operations research, 153(1):235–256, 2007.
[9] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Im. Proc., 15(12):3736–3745, 2006.
[10] V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Trans. Audio, Speech, and Language Proc., 18(6):1643–1654, 2010.
[11] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, pages 399–406, 2010.
[12] J. Mairal, F. Bach, and J. Ponce. Task-driven dictionary learning. IEEE Trans. PAMI, 34(4):791–804, 2012.
[13] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Im. Proc., 17(1):53–69, 2008.
[14] S. Mallat. A Wavelet Tour of Signal Processing, Second Edition. Academic Press, 1999.
[15] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Paper, Université catholique de Louvain, Louvain-la-Neuve, Belgium, 2007.
[16] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[17] G. Peyré and J. Fadili. Learning analysis sparsity priors. In SAMPTA'11, 2011.
[18] G. E. Poliner and D. Ellis. A discriminative model for polyphonic piano transcription. EURASIP J. Adv. Sig. Proc., 2007.
[19] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259–268, 1992.
[20] P. Sprechmann, A. M. Bronstein, and G. Sapiro. Learning efficient sparse and low rank models. arXiv preprint arXiv:1212.3631, 2012.
[21] R. Tibshirani. Regression shrinkage and selection via the LASSO. J. Royal Stat. Society: Series B, 58(1):267–288, 1996.
[22] R. J. Tibshirani. The solution path of the generalized lasso. PhD thesis, Stanford University, 2011.
[23] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Trans. Inf. Theory, 59(4):2001–2016, 2013.
[24] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution as sparse representation of raw image patches. In Proc. CVPR, pages 1–8. IEEE, 2008.
[25] G. Yu and J.-M. Morel. On the consistency of the SIFT method. Inverse Problems and Imaging, 2009.
[26] G. Yu, G. Sapiro, and S. Mallat. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity. IEEE Trans. Im. Proc., 21(5):2481–2499, 2012.