nips nips2009 nips2009-172 knowledge-graph by maker-knowledge-mining

172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis


Source: pdf

Author: Long Zhu, Yuanhao Chen, Bill Freeman, Antonio Torralba

Abstract: We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 We present a nonparametric Bayesian method for texture learning and synthesis. [sent-5, score-0.42]

2 A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). [sent-6, score-1.398]

3 The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. [sent-7, score-0.935]

4 The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. [sent-8, score-0.251]

5 This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. [sent-9, score-0.683]

6 The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. [sent-10, score-1.097]

7 We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. [sent-11, score-0.195]

8 1 Introduction Texture learning and synthesis are important tasks in computer vision and graphics. [sent-13, score-0.273]

9 The first style emphasizes the modeling and understanding problems and develops statistical models [1, 2] which are capable of representing texture using textons and their spatial layout. [sent-15, score-0.864]

10 The second style relies on patch-based rendering techniques [3, 4] which focus on rendering quality and speed, but forego the semantic understanding and modeling of texture. [sent-17, score-0.338]

11 This paper aims at texture understanding and modeling with fast synthesis and high rendering quality. [sent-18, score-0.776]

12 We represent a texture image by a 2D Hidden Markov Model (2D-HMM) (see figure (1)) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes the texton spatial layout (the compatibility between adjacent textons). [sent-20, score-1.839]

13 The 2D-HMM is coupled with the Hierarchical Dirichlet process (HDP) [5, 6] which allows the number of textons (i.e. [sent-21, score-0.446]

14 hidden states) and the complexity of the transition matrix to grow as more training data is available or the randomness of the input texture becomes large. [sent-23, score-0.551]

15 The Dirichlet process prior penalizes model complexity to favor reusing clusters and transitions, and thus regular textures which can be represented by compact models. [sent-24, score-0.536]

16 This framework (HDP-2DHMM) discovers the semantic meaning of texture in an explicit way, in that the texton vocabulary and their spatial layout are learnt jointly and automatically (the number of textons is fully determined by the HDP-2DHMM). [sent-25, score-1.496]

17 Once the texton vocabulary and the transition matrix are learnt, the synthesis process samples the latent texton labeling map according to the probability encoded in the transition matrix. [sent-26, score-1.493]

18 Figure 1: The flow chart of texture learning and synthesis. [sent-27, score-0.388]

19 The colored rectangles correspond to the index (labeling) of textons which are represented by image patches. [sent-28, score-0.591]

20 The texton vocabulary shows the correspondence between the color (states) and the examples of image patches. [sent-29, score-0.697]

21 The transition matrices show the probability (indicated by the intensity) of generating a new state (coded by the color of the top left corner rectangle), given the states of the left and upper neighbor nodes (coded by the top and left-most rectangles). [sent-30, score-0.251]

22 The inferred texton map shows the state assignments of the input texture. [sent-31, score-0.569]

23 The top-right panel shows the sampled texton map according to the transition matrices. [sent-32, score-0.542]

24 The last panel shows the synthesized texture using image quilting according to the correspondence between the sampled texton map and the texton vocabulary. [sent-33, score-1.705]

25 The final image is then generated by selecting the image patches based on the sampled texton labeling map. [sent-34, score-0.897]

26 Here, image quilting [3] is applied to search and stitch together all the patches so that the boundary inconsistency is minimized. [sent-35, score-0.428]

27 In contrast to [3], our method is only required to search a much smaller set of candidate patches within a local texton cluster. [sent-36, score-0.555]

28 We show that the HDP-2DHMM is able to synthesize texture in one second (25 times faster than image quilting) with comparable quality. [sent-38, score-0.593]

29 We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. [sent-40, score-0.195]

30 2 Previous Work Our primary interest is texture understanding and modeling. [sent-42, score-0.412]

31 This model is very successful in capturing stochastic textures, but may fail for more structured textures due to lack of spatial modeling. [sent-44, score-0.228]

32 [1, 2] extend it to explicitly learn the textons and their spatial relations which are represented by extra hidden layers. [sent-46, score-0.537]

33 This new model is parametric (the number of texton clusters has to be tuned by hand for different texture images), and model selection, which might be unstable in practice, is needed to avoid overfitting. [sent-47, score-0.873]

34 Inspired by recent progress in machine learning, we extend the nonparametric Bayesian framework of coupling a 1D HMM and HDP [6] to deal with 2D texture images. [sent-49, score-0.42]

35 A new model (HDP-2DHMM) is developed to learn the texton vocabulary and spatial layouts jointly and automatically. [sent-50, score-0.658]

36 Since the HDP-2DHMM is designed to generate appropriate image statistics, but not pixel intensity, a patch-based texture synthesis technique, called image quilting [3], is integrated into our system to sample image patches. [sent-51, score-1.162]

37 The texture synthesis algorithm has also been applied to image inpainting [8]. [sent-52, score-0.779]

38 xi are observations (features) of the image patch at position i. [sent-69, score-0.239]

39 [9, 10] and Varma and Zisserman [11] study the filter representations of textons which are related to our implementations of visual features. [sent-74, score-0.436]

40 But the interactions between textons are not explicitly considered. [sent-75, score-0.395]

41 [12, 13] address texture understanding by discovering regularity without explicit statistical texture modeling. [sent-77, score-0.827]

42 Our method is closely related to [16] which is not designed for texture learning. [sent-80, score-0.388]

43 They use a hierarchical Dirichlet process, but the models and the image feature representations, including both the image filters and the data likelihood model, are different. [sent-81, score-0.307]

44 1 Image Patches and Features A texture image I is represented by a grid of image patches {xi} of size 24 × 24 in this paper, where i denotes the location. [sent-88, score-0.778]
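
As a concrete illustration of the grid-of-patches representation above, here is a minimal sketch (not the authors' code; the function name, grayscale input, and strictly non-overlapping grid are assumptions of this sketch) of cutting an image into 24 × 24 patches indexed by grid location:

```python
import numpy as np

def extract_patch_grid(image, patch_size=24):
    """Split a grayscale texture image into a grid of non-overlapping patches.

    Returns an array of shape (rows, cols, patch_size, patch_size); a grid cell
    (r, c) plays the role of location i in the text.
    """
    h, w = image.shape[:2]
    rows, cols = h // patch_size, w // patch_size
    patches = np.zeros((rows, cols, patch_size, patch_size), dtype=image.dtype)
    for r in range(rows):
        for c in range(cols):
            patches[r, c] = image[r * patch_size:(r + 1) * patch_size,
                                  c * patch_size:(c + 1) * patch_size]
    return patches
```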

45 {xi } will be grouped into different textons by the HDP-2DHMM. [sent-89, score-0.395]

46 We begin with a simplified model where the positions of textons represented by image patches are pre-determined by the image grid, and not allowed to shift. [sent-90, score-0.785]

47 Each patch xi is characterized by a set of filter responses {wi^{l,h,b}} which correspond to values b of image filter response h at location l. [sent-92, score-0.266]
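
The sentence above only names the filter responses; the sketch below illustrates one plausible way of turning a patch into a count vector of quantized filter responses. The filter bank, the number of bins, and the per-patch binning scheme are placeholders of this sketch, not the paper's actual choices.

```python
import numpy as np
from scipy.signal import correlate2d

def patch_feature_counts(patch, filters, n_bins=8):
    """Histogram quantized filter responses over a patch, giving a count vector
    that can serve as the observation xi for a multinomial likelihood."""
    counts = np.zeros(len(filters) * n_bins, dtype=int)
    for k, f in enumerate(filters):
        resp = correlate2d(patch.astype(float), f, mode='valid').ravel()
        edges = np.linspace(resp.min(), resp.max() + 1e-9, n_bins + 1)[1:-1]
        bins = np.digitize(resp, edges)                    # bin index in 0..n_bins-1
        counts[k * n_bins:(k + 1) * n_bins] += np.bincount(bins, minlength=n_bins)
    return counts
```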

48 By contrast, we skip the clustering step and leave the learning of the texton vocabulary, together with spatial layout learning, to the HDP-2DHMM, which takes over the role of k-means. [sent-107, score-0.695]

49 2 HDP-2DHMM: Coupling Hidden Markov Model with Hierarchical Dirichlet Process A texture is modeled by a 2D Hidden Markov Model (2DHMM) where the nodes correspond to the image patches xi and the compatibility is encoded by the edges connecting 4 neighboring nodes. [sent-109, score-0.773]

50 We use zi to index the states; the generative process begins with β ∼ GEM(α), and for each state z ∈ {1, 2, 3, ...} the state-specific parameters are then drawn. [sent-112, score-0.283]

51 The likelihood model p(xi | zi), which specifies the probability of the visual features, is defined by a multinomial distribution parameterized by θzi specific to its corresponding hidden state zi: xi ∼ Multinomial(θzi) (1), where θzi specifies the weights of the visual features. [sent-116, score-0.408]
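
For intuition, the (unnormalized) log of this likelihood for a count vector is a dot product with the log weights; the following is a sketch that ignores the Dirichlet prior the full model places over θ.

```python
import numpy as np

def log_multinomial_likelihood(x_counts, theta_z, eps=1e-12):
    """log p(xi | zi) under Multinomial(theta_z); the multinomial coefficient is
    dropped because it is constant across candidate states zi."""
    return float(np.sum(np.asarray(x_counts) * np.log(np.asarray(theta_z) + eps)))
```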

52 (L(i) ≠ ∅ and T(i) ≠ ∅), the probability p(zi | zL(i), zT(i)) of its state zi is determined only by the states (zL(i), zT(i)) of the connected nodes. [sent-119, score-0.283]

53 The distribution has the form of a multinomial distribution parameterized by πzL(i),zT(i): zi ∼ Multinomial(πzL(i),zT(i)) (2), where πzL(i),zT(i) encodes the transition matrix and thus the spatial layout of textons. [sent-120, score-0.389]
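
Drawing a state from this conditional is a single categorical draw; in the sketch below the transition table is assumed to be stored as a (K, K, K) array indexed by (left state, top state, new state), which is a storage choice of this sketch rather than anything specified by the paper.

```python
import numpy as np

def sample_transition(pi, z_left, z_top, rng=None):
    """Draw zi ~ Multinomial(pi[z_left, z_top, :])."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = pi[z_left, z_top]
    return int(rng.choice(len(probs), p=probs))
```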

54 Without loss of generality, we will skip the details of the boundary cases, but only focus on the nodes whose states should be determined by their top and left nodes jointly. [sent-125, score-0.209]

55 To make a nonparametric Bayesian representation, we need to allow the number of states zi to be countably infinite and to put prior distributions over the parameters θzi and πzL(i),zT(i). [sent-126, score-0.286]
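
The GEM(α) prior over the global state weights β can be simulated to a finite truncation with the standard stick-breaking construction; this generic sketch is included only to make the "countably infinite states" statement concrete and is not the paper's inference code.

```python
import numpy as np

def sample_gem(alpha, truncation=50, rng=None):
    """Truncated stick-breaking draw of beta ~ GEM(alpha): one weight per
    (conceptually infinite) hidden state, decaying with the state index."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = rng.beta(1.0, alpha, size=truncation)
    remaining_stick = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining_stick
```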

56 Therefore, the HDP makes use of a Dirichlet process prior to place a soft bias towards simpler models (in terms of the number of states and the regularity of state transitions) which explain the texture. [sent-136, score-0.195]

57 But this simplified model does not allow the textons (image patches) to be shifted. [sent-139, score-0.395]

58 We remove this constraint by introducing two hidden variables (ui , vi ) which indicate the displacements of textons associated with node i. [sent-140, score-0.52]

59 We only need to adjust the correspondence between image features xi and hidden states zi . [sent-141, score-0.508]

60 xi is modified to be x_{ui,vi}, which refers to the image features located at the position with displacement (ui, vi) from position i. [sent-142, score-0.211]

61 We first instantiate a random hidden state labeling and then iteratively repeat the following two steps. [sent-153, score-0.178]
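
The two-step loop can be sketched as below. Here resample_state and resample_parameters are placeholders standing in for the paper's actual conditional updates (a node's state given the parameters, and θ/π/β given the current labeling), and the layout of x_grid with the grid dimensions first is an assumption of this sketch.

```python
import numpy as np

def iterative_inference(x_grid, n_iters, n_init_states,
                        resample_state, resample_parameters, rng=None):
    """Start from a random hidden-state labeling, then alternate between
    (1) resampling every node's state and (2) resampling the parameters."""
    if rng is None:
        rng = np.random.default_rng(0)
    rows, cols = x_grid.shape[:2]
    z = rng.integers(0, n_init_states, size=(rows, cols))   # random initial labeling
    params = resample_parameters(z, x_grid)
    for _ in range(n_iters):
        for r in range(rows):
            for c in range(cols):
                z[r, c] = resample_state(r, c, z, x_grid, params)   # step 1
        params = resample_parameters(z, x_grid)                      # step 2
    return z, params
```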

62 The next term P(zi = t | zL(i) = r, zT(i) = s) is the probability of state t, given the states of the nodes on the left and above, i.e. [sent-159, score-0.173]

63 The probability of generating state t is given by: P(zi = t | zL(i) = r, zT(i) = s) = (nrst + α βt) / (Σt nrst + α) (8), where βt refers to the weight of state t. [sent-163, score-0.204]
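
Numerically, equation (8) is a ratio of smoothed transition counts; a sketch, where n[r, s, t] counts how often state t has followed the (left, top) pair (r, s) in the current assignment:

```python
import numpy as np

def conditional_state_prob(t, r, s, n, alpha, beta):
    """P(zi = t | zL(i) = r, zT(i) = s)
    = (n[r, s, t] + alpha * beta[t]) / (sum over t of n[r, s, t] + alpha),
    i.e. equation (8), with beta the global state weights."""
    return (n[r, s, t] + alpha * beta[t]) / (n[r, s].sum() + alpha)
```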

64 The last two terms P(zR(i) | zi = t, zT(R(i))) and P(zD(i) | zL(D(i)), zi = t) are the probabilities of the states of the right and lower neighbor nodes (R(i), D(i)) given zi. [sent-165, score-0.449]

65 Figure 4: The colors of the rectangles in columns 2 and 3 correspond to the index (labeling) of textons, which are represented by 24 × 24 image patches. [sent-169, score-0.591]

66 The synthesized images are all 384 × 384 (16 × 16 textons/patches). [sent-170, score-0.521]

67 Our method captures both stochastic textures (the last two rows) and more structured textures (the first three rows; see the horizontal and gridded layouts). [sent-171, score-0.355]

68 The inferred texton maps for structured textures are simpler (fewer states/textons) and more regular (less cluttered texton maps) than those for stochastic textures. [sent-172, score-1.176]

69 5 Texture Synthesis Once the texton vocabulary and the transition matrix are learnt, the synthesis process first samples the latent texton labeling map according to the probability encoded in the transition matrix. [sent-173, score-1.493]
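
A raster-order sketch of this sampling step is shown below; the separate tables for the corner, first row, and first column are an assumption about how the boundary cases (which the text skips over) are handled, not the paper's exact scheme.

```python
import numpy as np

def sample_texton_map(rows, cols, pi_corner, pi_left, pi_top, pi_both, rng=None):
    """Sample a texton-label map from the learnt transition tables, scanning
    top-left to bottom-right and conditioning on the left and upper neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if r == 0 and c == 0:
                probs = pi_corner                       # no neighbours yet
            elif r == 0:
                probs = pi_left[z[r, c - 1]]            # condition on left state only
            elif c == 0:
                probs = pi_top[z[r - 1, c]]             # condition on upper state only
            else:
                probs = pi_both[z[r, c - 1], z[r - 1, c]]
            z[r, c] = rng.choice(len(probs), p=probs)
    return z
```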

70 But the HDP-2DHMM is generative only for image features, not image intensities. [sent-174, score-0.276]

71 To make it practical for image synthesis, image quilting [3] is integrated with the HDP-2DHMM. [sent-175, score-0.413]

72 The final image is then generated by selecting image patches according to the texton labeling map. [sent-176, score-0.897]

73 Image quilting is applied to select and stitch together all the patches in a top-left-to-bottom-right order so that the boundary inconsistency is minimized. [sent-177, score-0.29]
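
The boundary criterion can be illustrated with the usual quilting-style overlap error; the overlap width below and the omission of the minimum-error boundary cut are simplifications of this sketch, not the paper's exact procedure.

```python
import numpy as np

def boundary_inconsistency(candidate, left_patch, top_patch, overlap=4):
    """Sum of squared differences between a candidate patch and the already
    placed left/upper neighbours over their overlap strips; among the patches
    in the sampled texton's cluster, the one with the smallest error is kept."""
    cand = candidate.astype(float)
    err = 0.0
    if left_patch is not None:
        err += np.sum((cand[:, :overlap] - left_patch.astype(float)[:, -overlap:]) ** 2)
    if top_patch is not None:
        err += np.sum((cand[:overlap, :] - top_patch.astype(float)[-overlap:, :]) ** 2)
    return float(err)
```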

74 In contrast to [3], which needs to search over all image patches to ensure high rendering quality, our method is only required to search the candidate patches within a local cluster. [sent-179, score-0.461]

75 The HDP-2DHMM is capable of producing high rendering quality because the patches have been grouped based on visual features. [sent-180, score-0.305]

76 We show that the HDP-2DHMM is able to synthesize a (Figure 5: More synthesized texture images; for each pair, left is input texture, right is synthesized). [sent-182, score-0.559]

77 texture image of size 384 × 384 with comparable quality in one second (25 times faster than image quilting). [sent-183, score-0.718]

78 1 Texture Learning and Synthesis We use the texture images in [3]. [sent-185, score-0.424]

79 For each image, it takes 100 seconds for learning and 1 second for synthesis (almost 25 times faster than [3]). [sent-193, score-0.223]

80 Figure (4) shows the inferred texton labeling maps, the sampled texton maps and the synthesized texture images. [sent-194, score-1.517]

81 The rendering quality is visually comparable with [3] (not shown) for both structured textures and stochastic textures. [sent-196, score-0.366]

82 It is interesting to see that the HMM-HDP captures different types of texture patterns, such as vertical, horizontal and gridded layouts. [sent-197, score-0.422]

83 It suggests that our method is able to discover semantic texture meaning by learning the texton vocabulary and its spatial relations. [sent-198, score-0.98]

84 The first three rows show that the HDP-2DHMM is able to segment images with a mixture of textures and synthesize new textures. [sent-200, score-0.231]

85 The last row shows a failure example where the texton is not well aligned. [sent-201, score-0.464]

86 2 Image Segmentation and Synthesis We also apply the HDP-2DHMM to perform image segmentation and synthesis. [sent-203, score-0.195]

87 The segmentation results are represented by the inferred state assignments (the texton map). [sent-205, score-0.649]

88 The last row in figure (6) shows a failure example where the texton is not well aligned. [sent-208, score-0.464]

89 The 2D Hidden Markov Model (HMM) is coupled with the hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. [sent-210, score-0.943]

90 The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. [sent-211, score-0.251]

91 This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. [sent-212, score-0.683]

92 We demonstrated that the resulting compact representation obtained by the HDP-2DHMM allows fast texture synthesis (under one second) with comparable rendering quality to the state-of-the-art image-based rendering methods. [sent-213, score-0.947]

93 Our results on image segmentation and synthesis suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems. [sent-214, score-0.468]

94 Guo, “Statistical modeling of texture sketch,” in ECCV ’02: Proceedings of the 7th European Conference on Computer Vision-Part III, 2002, pp. [sent-224, score-0.388]

95 Freeman, “Image quilting for texture synthesis and transfer,” in Siggraph, 2001. [sent-241, score-0.748]

96 Leung, “Texture synthesis by non-parametric sampling,” in International Conference on Computer Vision, 1999, pp. [sent-244, score-0.223]

97 Mumford, “Filters, random fields and maximum entropy (frame): Towards a unified theory for texture modeling,” International Journal of Computer Vision, vol. [sent-265, score-0.388]

98 Hays, “Near regular texture analysis and manipulation,” ACM Transactions on Graphics (SIGGRAPH 2004), vol. [sent-294, score-0.42]

99 Liu, “Discovering texture regularity as a higherorder correspondence problem,” in 9th European Conference on Computer Vision, May 2006. [sent-303, score-0.439]

100 Levoy, “Fast texture synthesis using tree-structured vector quantization,” in SIGGRAPH ’00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, 2000, pp. [sent-337, score-0.611]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('texton', 0.464), ('textons', 0.395), ('texture', 0.388), ('zl', 0.317), ('synthesis', 0.223), ('zi', 0.163), ('textures', 0.15), ('rendering', 0.141), ('image', 0.138), ('quilting', 0.137), ('ultinomial', 0.137), ('dirichlet', 0.119), ('zt', 0.115), ('patches', 0.091), ('synthesized', 0.09), ('hdp', 0.081), ('transition', 0.078), ('vocabulary', 0.071), ('layout', 0.07), ('states', 0.07), ('labeling', 0.066), ('hidden', 0.062), ('spatial', 0.057), ('segmentation', 0.057), ('nodes', 0.053), ('leung', 0.052), ('nrst', 0.052), ('xi', 0.051), ('vision', 0.05), ('state', 0.05), ('patch', 0.05), ('synthesize', 0.045), ('layouts', 0.045), ('malik', 0.045), ('node', 0.041), ('visual', 0.041), ('ft', 0.037), ('gem', 0.037), ('images', 0.036), ('rectangles', 0.035), ('dp', 0.034), ('grided', 0.034), ('nqt', 0.034), ('stitch', 0.034), ('lter', 0.034), ('efros', 0.033), ('skip', 0.033), ('nonparametric', 0.032), ('regular', 0.032), ('siggraph', 0.032), ('quality', 0.032), ('appearance', 0.032), ('hierarchical', 0.031), ('assignments', 0.031), ('learnt', 0.03), ('compatibility', 0.03), ('inpainting', 0.03), ('criminisi', 0.03), ('hays', 0.03), ('gure', 0.029), ('hmm', 0.029), ('lters', 0.028), ('zhu', 0.028), ('zr', 0.028), ('inconsistency', 0.028), ('responses', 0.027), ('regularity', 0.027), ('process', 0.027), ('ui', 0.027), ('antonio', 0.026), ('kannan', 0.026), ('correspondence', 0.024), ('zd', 0.024), ('understanding', 0.024), ('inferred', 0.024), ('transitions', 0.024), ('coupled', 0.024), ('hyperparameter', 0.023), ('grow', 0.023), ('coded', 0.023), ('varma', 0.023), ('wq', 0.023), ('represented', 0.023), ('sampling', 0.023), ('wi', 0.023), ('bayesian', 0.023), ('comparable', 0.022), ('encoded', 0.022), ('vi', 0.022), ('zisserman', 0.022), ('guo', 0.022), ('structured', 0.021), ('markov', 0.021), ('prior', 0.021), ('winn', 0.021), ('penalizing', 0.021), ('encodes', 0.021), ('maps', 0.021), ('clusters', 0.021), ('jointly', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999976 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

Author: Long Zhu, Yuanhao Chen, Bill Freeman, Antonio Torralba

Abstract: We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems. 1

2 0.11410817 211 nips-2009-Segmenting Scenes by Matching Image Composites

Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman

Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.

3 0.082158454 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

4 0.069635525 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

Author: Lan Du, Lu Ren, Lawrence Carin, David B. Dunson

Abstract: A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred nonparametrically. To constitute spatially contiguous objects, a new logistic stick-breaking process is developed. Inference is performed efficiently via variational Bayesian analysis, with example results presented on two image databases.

5 0.068389148 241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters

Author: Daniel Zoran, Yair Weiss

Abstract: We propose a new model for natural image statistics. Instead of minimizing dependency between components of natural images, we maximize a simple form of dependency in the form of tree-dependencies. By learning filters and tree structures which are best suited for natural images we observe that the resulting filters are edge filters, similar to the famous ICA on natural images results. Calculating the likelihood of an image patch using our model requires estimating the squared output of pairs of filters connected in the tree. We observe that after learning, these pairs of filters are predominantly of similar orientations but different phases, so their joint energy resembles models of complex cells. 1

6 0.068117693 257 nips-2009-White Functionals for Anomaly Detection in Dynamical Systems

7 0.066959053 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

8 0.066440091 96 nips-2009-Filtering Abstract Senses From Image Search Results

9 0.066197492 201 nips-2009-Region-based Segmentation and Object Detection

10 0.065279149 22 nips-2009-Accelerated Gradient Methods for Stochastic Optimization and Online Learning

11 0.065256067 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

12 0.062462315 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

13 0.061887793 8 nips-2009-A Fast, Consistent Kernel Two-Sample Test

14 0.06108579 65 nips-2009-Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process

15 0.060956955 242 nips-2009-The Infinite Partially Observable Markov Decision Process

16 0.060825273 137 nips-2009-Learning transport operators for image manifolds

17 0.060478155 133 nips-2009-Learning models of object structure

18 0.059213549 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

19 0.058404207 104 nips-2009-Group Sparse Coding

20 0.057521507 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.162), (1, -0.095), (2, -0.065), (3, -0.083), (4, 0.033), (5, 0.041), (6, 0.053), (7, 0.062), (8, 0.065), (9, -0.028), (10, -0.022), (11, 0.054), (12, 0.021), (13, 0.025), (14, -0.018), (15, -0.036), (16, -0.053), (17, -0.003), (18, 0.089), (19, 0.038), (20, 0.037), (21, 0.047), (22, -0.107), (23, -0.063), (24, -0.052), (25, 0.052), (26, -0.013), (27, 0.012), (28, -0.028), (29, 0.045), (30, 0.027), (31, -0.003), (32, -0.076), (33, 0.024), (34, -0.006), (35, -0.128), (36, 0.116), (37, 0.02), (38, 0.025), (39, 0.003), (40, 0.052), (41, 0.019), (42, -0.036), (43, -0.052), (44, -0.043), (45, 0.057), (46, -0.06), (47, -0.035), (48, -0.049), (49, -0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92756754 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

Author: Long Zhu, Yuanhao Chen, Bill Freeman, Antonio Torralba

Abstract: We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems. 1

2 0.57152367 211 nips-2009-Segmenting Scenes by Matching Image Composites

Author: Bryan Russell, Alyosha Efros, Josef Sivic, Bill Freeman, Andrew Zisserman

Abstract: In this paper, we investigate how, given an image, similar images sharing the same global description can help with unsupervised scene segmentation. In contrast to recent work in semantic alignment of scenes, we allow an input image to be explained by partial matches of similar scenes. This allows for a better explanation of the input scenes. We perform MRF-based segmentation that optimizes over matches, while respecting boundary information. The recovered segments are then used to re-query a large database of images to retrieve better matches for the target regions. We show improved performance in detecting the principal occluding and contact boundaries for the scene over previous methods on data gathered from the LabelMe database.

3 0.55662644 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

Author: Roy Anati, Kostas Daniilidis

Abstract: We present a system which constructs a topological map of an environment given a sequence of images. This system includes a novel image similarity score which uses dynamic programming to match images using both the appearance and relative positions of local features simultaneously. Additionally, an MRF is constructed to model the probability of loop-closures. A locally optimal labeling is found using Loopy-BP. Finally we outline a method to generate a topological map from loop closure data. Results, presented on four urban sequences and one indoor sequence, outperform the state of the art. 1

4 0.54524374 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

Author: Lan Du, Lu Ren, Lawrence Carin, David B. Dunson

Abstract: A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred nonparametrically. To constitute spatially contiguous objects, a new logistic stick-breaking process is developed. Inference is performed efficiently via variational Bayesian analysis, with example results presented on two image databases.

5 0.52290082 96 nips-2009-Filtering Abstract Senses From Image Search Results

Author: Kate Saenko, Trevor Darrell

Abstract: We propose an unsupervised method that, given a word, automatically selects non-abstract senses of that word from an online ontology and generates images depicting the corresponding entities. When faced with the task of learning a visual model based only on the name of an object, a common approach is to find images on the web that are associated with the object name and train a visual classifier from the search result. As words are generally polysemous, this approach can lead to relatively noisy models if many examples due to outlier senses are added to the model. We argue that images associated with an abstract word sense should be excluded when training a visual classifier to learn a model of a physical object. While image clustering can group together visually coherent sets of returned images, it can be difficult to distinguish whether an image cluster relates to a desired object or to an abstract sense of the word. We propose a method that uses both image features and the text associated with the images to relate latent topics to particular senses. Our model does not require any human supervision, and takes as input only the name of an object category. We show results of retrieving concrete-sense images in two available multimodal, multi-sense databases, as well as experiment with object classifiers trained on concrete-sense images returned by our method for a set of ten common office objects. 1

6 0.51550502 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

7 0.50604379 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

8 0.5045312 257 nips-2009-White Functionals for Anomaly Detection in Dynamical Systems

9 0.49370933 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

10 0.48972499 175 nips-2009-Occlusive Components Analysis

11 0.46951395 201 nips-2009-Region-based Segmentation and Object Detection

12 0.46115533 226 nips-2009-Spatial Normalized Gamma Processes

13 0.46101969 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

14 0.45285708 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

15 0.44268715 188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference

16 0.43850002 149 nips-2009-Maximin affinity learning of image segmentation

17 0.43086863 137 nips-2009-Learning transport operators for image manifolds

18 0.4299621 93 nips-2009-Fast Image Deconvolution using Hyper-Laplacian Priors

19 0.42395574 133 nips-2009-Learning models of object structure

20 0.42052054 241 nips-2009-The 'tree-dependent components' of natural scenes are edge filters


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.01), (21, 0.022), (24, 0.025), (25, 0.088), (35, 0.048), (36, 0.096), (39, 0.048), (55, 0.025), (58, 0.059), (61, 0.022), (66, 0.316), (71, 0.076), (86, 0.059)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86369026 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

same-paper 2 0.79016703 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

Author: Long Zhu, Yuanhao Chen, Bill Freeman, Antonio Torralba

Abstract: We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems. 1

3 0.71741915 101 nips-2009-Generalization Errors and Learning Curves for Regression with Multi-task Gaussian Processes

Author: Kian M. Chai

Abstract: We provide some insights into how task correlations in multi-task Gaussian process (GP) regression affect the generalization error and the learning curve. We analyze the asymmetric two-tasks case, where a secondary task is to help the learning of a primary task. Within this setting, we give bounds on the generalization error and the learning curve of the primary task. Our approach admits intuitive understandings of the multi-task GP by relating it to single-task GPs. For the case of one-dimensional input-space under optimal sampling with data only for the secondary task, the limitations of multi-task GP can be quantified explicitly. 1

4 0.71256441 187 nips-2009-Particle-based Variational Inference for Continuous Systems

Author: Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Abstract: Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain. 1

5 0.62189233 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

Author: Lei Shi, Thomas L. Griffiths

Abstract: The goal of perception is to infer the hidden states in the hierarchical process by which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination and orientation detection. Understanding the neural mechanisms underlying this behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus. This mechanism is based on a Monte Carlo method known as importance sampling, commonly used in computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in cue combination and the oblique effect. 1

6 0.54693323 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

7 0.54609936 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

8 0.54514462 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

9 0.53746784 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

10 0.53581452 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

11 0.53148192 215 nips-2009-Sensitivity analysis in HMMs with application to likelihood maximization

12 0.52959543 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

13 0.4959411 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

14 0.49522671 97 nips-2009-Free energy score space

15 0.49347228 205 nips-2009-Rethinking LDA: Why Priors Matter

16 0.49106726 226 nips-2009-Spatial Normalized Gamma Processes

17 0.48981792 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

18 0.48819545 113 nips-2009-Improving Existing Fault Recovery Policies

19 0.48755285 168 nips-2009-Non-stationary continuous dynamic Bayesian networks

20 0.48673999 260 nips-2009-Zero-shot Learning with Semantic Output Codes