Author: Daniel Zoran, Yair Weiss
Abstract: Simple Gaussian Mixture Models (GMMs) learned from pixels of natural image patches have recently been shown to be surprisingly strong performers in modeling the statistics of natural images. Here we provide an in-depth analysis of this simple yet rich model. We show that such a GMM model is able to compete with even the most successful models of natural images in log-likelihood scores, denoising performance and sample quality. We provide an analysis of what such a model learns from natural images as a function of the number of mixture components, including covariance structure, contrast variation and intricate structures such as textures, boundaries and more. Finally, we show that the salient properties of the GMM learned from natural images can be derived from a simplified Dead Leaves model which explicitly models occlusion, explaining its surprising success relative to other models.

1 GMMs and natural image statistics models

Many models for the statistics of natural image patches have been suggested in recent years. Finding good models for natural images is important to many different research areas - computer vision, biological vision and neuroscience among others. Recently, there has been growing interest in comparing different aspects of models for natural images, such as log-likelihood and multi-information reduction performance, and much progress has been achieved [1, 2, 3, 4, 5, 6]. Out of these results, one is particularly interesting: simple, unconstrained Gaussian Mixture Models (GMMs) with a relatively small number of mixture components learned from image patches are extraordinarily good at modeling image statistics [6, 4]. This is a surprising result, both because of the simplicity of GMMs and because of their ubiquity. Another surprising aspect of this result is that many of the current models may be thought of as GMMs with an exponential or infinite number of components, having different constraints on the covariance structure of the mixture components.

In this work we study the nature of GMMs learned from natural image patches. We start with a thorough comparison to some popular and cutting-edge image models. We show that, indeed, GMMs are excellent performers in modeling natural image patches. We then analyze what properties of natural images these GMMs capture, their dependence on the number of components in the mixture, and their relation to the structure of the world around us. Finally, we show that the learned GMM suggests a strong connection between natural image statistics and a simple variant of the dead leaves model [7, 8], explicitly modeling occlusions and explaining some of the success of GMMs in modeling natural images.
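To make the basic experiment concrete, here is a minimal sketch of fitting a full-covariance GMM to natural image patches and scoring held-out patches by average log-likelihood. This is not the authors' exact pipeline: the patch size (8x8), DC removal, the number of components (64) and the placeholder `images` / `images_test` lists of grayscale arrays are all assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patches(img, size=8, n=10000, rng=None):
    """Sample n random size x size patches and remove each patch's DC (mean)."""
    rng = rng or np.random.default_rng(0)
    H, W = img.shape
    ys = rng.integers(0, H - size, n)
    xs = rng.integers(0, W - size, n)
    patches = np.stack([img[y:y + size, x:x + size].ravel()
                        for y, x in zip(ys, xs)])
    return patches - patches.mean(axis=1, keepdims=True)  # DC-removed

# `images` and `images_test` are placeholder lists of grayscale float arrays.
train = np.concatenate([extract_patches(im) for im in images])
test = np.concatenate([extract_patches(im) for im in images_test])

gmm = GaussianMixture(n_components=64, covariance_type='full',
                      reg_covar=1e-6, max_iter=500)
gmm.fit(train)
print('mean held-out log-likelihood (nats/patch):', gmm.score(test))
```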
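The denoising comparison mentioned above can be sketched with the component-wise Wiener estimate commonly used with GMM priors (in the spirit of [4]): pick the mixture component with the highest posterior given the noisy patch, then apply that component's Wiener filter. This is a hedged sketch, not the paper's exact procedure; `gmm` is the model from the previous snippet and the noise level `sigma` is assumed known.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_denoise(y, gmm, sigma):
    """MAP-style denoising of one patch y = x + noise, noise ~ N(0, sigma^2 I)."""
    d = y.shape[0]
    noisy_covs = gmm.covariances_ + (sigma ** 2) * np.eye(d)
    # Log-posterior of each component given the noisy patch y.
    log_post = [np.log(w) + multivariate_normal.logpdf(y, mean=mu, cov=C)
                for w, mu, C in zip(gmm.weights_, gmm.means_, noisy_covs)]
    k = int(np.argmax(log_post))
    # Wiener estimate under component k:
    #   x_hat = mu + Sigma (Sigma + sigma^2 I)^{-1} (y - mu)
    Sigma, mu = gmm.covariances_[k], gmm.means_[k]
    return mu + Sigma @ np.linalg.solve(noisy_covs[k], y - mu)
```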
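Finally, the dead leaves connection can be illustrated with a toy sampler in the spirit of [7, 8]: flat-intensity disks ("leaves") are layered front to back, so deeper disks are occluded by earlier ones. All parameters here (image size, uniform radius distribution, flat texture inside each leaf) are illustrative choices, not the simplified model analyzed in the paper.

```python
import numpy as np

def dead_leaves(size=64, n_leaves=400, rmin=2, rmax=16, seed=0):
    rng = np.random.default_rng(seed)
    img = np.full((size, size), np.nan)  # nan = pixel not yet covered
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_leaves):
        cy, cx = rng.uniform(0, size, 2)
        r = rng.uniform(rmin, rmax)
        shade = rng.uniform(0, 1)        # one flat intensity per leaf
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2
        # Front-to-back layering: only paint pixels no earlier leaf claimed.
        img[disk & np.isnan(img)] = shade
        if not np.isnan(img).any():
            break
    return np.nan_to_num(img, nan=0.5)   # fill any remaining gaps

sample = dead_leaves()
```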
References

[1] M. Bethge,
[2] P. Berkes, R. Turner, and M. Sahani,
[3] S. Lyu and E. P. Simoncelli,
[4] D. Zoran and Y. Weiss,
[5] B. Culpepper, J. Sohl-Dickstein, and B. Olshausen,
[6] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge,
[7] G. Matheron, Random Sets and Integral Geometry. New York: Wiley, 1975, vol. 1.
[8] X. Pitkow,
[9] B. Olshausen et al.,
[10] A. J. Bell and T. J. Sejnowski,
[11] A. Hyvärinen and E. Oja,
[12] Y. Karklin and M. Lewicki,
[13] J. Sohl-Dickstein and B. Culpepper,
[14] M. Lewicki and B. Olshausen,
[15] A. Lee, D. Mumford, and J. Huang,
[16] C. Zetzsche, E. Barth, and B. Wegmann,
[17] E. Simoncelli,
[18] D. Field,
[19] J. Lücke, R. Turner, M. Sahani, and M. Henniges,
[20] G. Puertas, J. Bornschein, and J. Lücke,
[21] N. Le Roux, N. Heess, J. Shotton, and J. Winn,