nips nips2002 nips2002-100 knowledge-graph by maker-knowledge-mining

100 nips-2002-Half-Lives of EigenFlows for Spectral Clustering


Source: pdf

Author: Chakra Chennubhotla, Allan D. Jepson

Abstract: Using a Markov chain perspective of spectral clustering we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain’s behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow’s half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Using a Markov chain perspective of spectral clustering we present an algorithm to automatically find the number of stable clusters in a dataset. [sent-4, score-0.373]

2 The Markov chain’s behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. [sent-5, score-0.215]

3 An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. [sent-6, score-0.145]

4 The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow’s half-life to variations in the edge weights. [sent-8, score-0.545]

5 We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion. [sent-9, score-0.217]

6 datapoints) in each cluster should be connected with high-affinity edges, while different clusters are either not connected or are connected only by a few edges with low affinity. [sent-13, score-0.38]

7 The practical problem is to identify these tightly coupled clusters, and cut the inter-cluster edges. [sent-14, score-0.179]

8 Here we use the random walk formulation of [4], where the edge weights are used to construct a Markov transition probability matrix M, which defines a random walk on the graph. [sent-16, score-0.375]

9 The eigenvalues and eigenvectors of M provide the basis for deciding on a particular segmentation. [sent-18, score-0.189]

10 In particular, it has been shown that for weakly coupled clusters, the leading eigenvectors of M will be roughly piecewise constant [4, 13, 5]. [sent-19, score-0.377]

11 This result motivates many of the current spectral clustering algorithms. [sent-20, score-0.162]

12 For example in [5], the number of clusters K must be known a priori, and the K-means algorithm is used on the leading eigenvectors of M in an attempt to identify the appropriate piecewise constant regions. [sent-21, score-0.445]

13 In this paper we investigate the form of the leading eigenvectors of the Markov matrix M. [sent-22, score-0.244]

14 Using some simple image segmentation examples we confirm that the leading eigenvectors of M are roughly piecewise constant for problems with well separated clusters. [sent-23, score-0.396]

15 However, we observe that for several segmentation problems that we might wish to solve, the coupling between the clusters is significantly stronger and, as a result, the piecewise constant approximation breaks down. [sent-24, score-0.288]

16 Unlike the piecewise constant approximation, a perfectly general view is that the eigenvectors of M determine particular flows of probability along the edges in the graph. [sent-25, score-0.411]

17 Instead of measuring the decay rate in terms of the eigenvalue λ, we find it more convenient to use the flow’s half-life H, which is simply defined by λ^H = 1/2. [sent-27, score-0.124]
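
As a concrete illustration of this definition, the sketch below computes the half-life H from an eigenvalue λ via λ^H = 1/2, i.e. H = log(1/2)/log|λ|. This is a minimal numpy sketch; the function name and tolerance are ours, not from the paper.

```python
import numpy as np

def half_life(lam, eps=1e-12):
    """Half-life H of an eigenflow with eigenvalue lam, defined by lam**H = 1/2.

    An eigenvalue at (or numerically indistinguishable from) 1 gives an
    infinite half-life, matching the notion of an ideal stable cluster."""
    lam = abs(lam)
    if lam >= 1.0 - eps:
        return np.inf
    if lam <= eps:
        return 0.0
    return np.log(0.5) / np.log(lam)

print(half_life(0.99))   # ~69 iterations to lose half the flow
print(half_life(0.5))    # exactly 1 iteration
```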

18 From the perspective of eigenflows, a graph representing a set of weakly coupled clusters produces eigenflows between the various clusters which decay with long half-lives. [sent-30, score-0.445]

19 In order to identify clusters we therefore consider the eigenflows with long half-lives. [sent-32, score-0.16]

20 Given such a slowly decaying eigenflow, we identify particular bottleneck regions in the graph which critically restrict the flow (cf. [sent-33, score-0.229]

21 To identify these bottlenecks we propose computing the sensitivity of the flow’s half-life with respect to perturbations in the edge weights. [sent-35, score-0.447]

22 We implement a simple spectral graph partitioning algorithm which is based on these ideas. [sent-36, score-0.193]

23 We first compute the eigenvectors for the Markov transition matrix, and select those with long half-lives. [sent-37, score-0.259]

24 For each such eigenvector, we identify bottlenecks by computing the sensitivity of the flow’s half-life with respect to perturbations in the edge weights. [sent-38, score-0.447]

25 In the current algorithm, we simply select one of these eigenvectors in which a bottleneck has been identified, and cut edges within the bottleneck. [sent-39, score-0.517]

26 The algorithm recomputes the eigenvectors and eigenvalues for the modified graph, and continues this iterative process until no further edges are cut. [sent-40, score-0.339]

27 2 From Affinities to Markov Chains. Following the formulation in [4], we consider an undirected graph G with vertices v_i, for i = 1, ..., N, and edges e_ij with non-negative weights a_ij. [sent-41, score-0.343]

28 The edge affinities are assumed to be symmetric, a_ij = a_ji. [sent-43, score-0.117]

29 A Markov chain is defined using these affinities by setting the transition probability from vertex v_j to vertex v_i to be proportional to the edge affinity, that is, m_ij ∝ a_ij. [sent-44, score-0.458]

30 In matrix notation, the affinities are represented by a symmetric matrix A, with elements a_ij, and the transition probability matrix is given by M = A D^{-1}, where D = diag(d_1, ..., d_N) and d_j = Σ_i a_ij. (1) [sent-46, score-0.283]
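
A minimal sketch of this construction in numpy; the helper name and the toy affinity matrix are ours, while the construction M = A D^{-1} itself follows the reconstruction above.

```python
import numpy as np

def markov_matrix(A):
    """Column-stochastic transition matrix M = A D^{-1} from symmetric
    affinities A, with d_j = sum_i a_ij; column j of M holds the transition
    probabilities out of vertex j."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=0)
    return A / d            # broadcasting divides column j by d_j

# Two tightly coupled pairs joined by a single weak (bottleneck) edge:
A = np.array([[0, 1, 0.05, 0],
              [1, 0, 0,    0],
              [0.05, 0, 0, 1],
              [0, 0, 1,    0.]])
M = markov_matrix(A)
print(M.sum(axis=0))        # every column sums to 1
```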

31 This transition probability matrix M in (1) defines the random walk of a particle on the graph. [sent-54, score-0.323]

32 Suppose the initial probability of the particle being at vertex v_k is p_k^0, for k = 1, ..., N. [sent-55, score-0.196]

33 Then, the probability of the particle being initially at vertex v_j and taking edge e_ij is m_ij p_j^0. [sent-56, score-0.313]

34 In matrix notation, the probability of the particle ending up at any of the vertices after one step is given by the distribution p^1 = M p^0. [sent-57, score-0.226]

35 The matrix L = D^{-1/2} M D^{1/2} therefore has the same spectrum as M, and any eigenvector u of L must correspond to an eigenvector D^{1/2} u of M with the same eigenvalue. [sent-66, score-0.274]

36 Note that L = D^{-1/2} A D^{-1/2}, and therefore L is a symmetric matrix since A is symmetric while D is diagonal. [sent-67, score-0.16]

37 The advantage of considering the matrix L over M is that the symmetric eigenvalue problem is more stable to small perturbations, and is computationally much more tractable. [sent-68, score-0.205]

38 Since the matrix L is symmetric, it has an orthogonal decomposition of the form L = U Λ U^T. (2) [Figure 1: (a-c) Three random images, each having an occluder in front of a textured background.] [sent-69, score-0.095]

39 Here U = [u_1, ..., u_N] are the eigenvectors and Λ = diag(λ_1, ..., λ_N) is a diagonal matrix of eigenvalues sorted in decreasing order. [sent-71, score-0.22]

40 While the eigenvectors have unit length, ||u_k|| = 1, the eigenvalues are real and have an absolute value bounded by 1, |λ_k| ≤ 1. [sent-72, score-0.223]
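
The decomposition in (2) can be computed with a standard symmetric eigensolver; a sketch under the reconstructed notation, with a helper name and toy affinities of our own:

```python
import numpy as np

def eig_L(A):
    """Eigen-decompose L = D^{-1/2} A D^{-1/2}; L shares its spectrum with
    M = A D^{-1}, and if L u = lam u then M (D^{1/2} u) = lam (D^{1/2} u)."""
    Dh = np.sqrt(A.sum(axis=0))
    lams, U = np.linalg.eigh(A / np.outer(Dh, Dh))  # real, orthonormal
    order = np.argsort(lams)[::-1]                  # decreasing eigenvalues
    return lams[order], U[:, order]

A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
lams, U = eig_L(A)
print(lams)   # leading eigenvalue is 1; weak coupling keeps lams[1] near 1
```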

41 The eigenvector representation provides a simple way to capture the Markovian relaxation process [12]. [sent-73, score-0.14]

42 The transition matrix after β iterations, namely M^β, can be represented as M^β = D^{1/2} U Λ^β U^T D^{-1/2}. [sent-75, score-0.116]
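
Numerically, the reconstructed relation M^β = D^{1/2} U Λ^β U^T D^{-1/2} can be checked directly. A small self-contained sketch; the toy affinities are ours:

```python
import numpy as np

A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
d = A.sum(axis=0)
M = A / d                                       # M = A D^{-1}
Dh = np.sqrt(d)
lams, U = np.linalg.eigh(A / np.outer(Dh, Dh))  # eigenpairs of L
beta = 5
M_beta = (Dh[:, None] * U) @ np.diag(lams**beta) @ (U.T / Dh[None, :])
print(np.allclose(M_beta, np.linalg.matrix_power(M, beta)))   # True
```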

43 As β → ∞, the Markov chain approaches the stationary distribution π, with π_i = d_i / Σ_j d_j. [sent-77, score-0.095]

44 Assuming the graph is connected with edges having non-zero weights, it is convenient to interpret the Markovian relaxation process as perturbations to the stationary distribution, p^β = π + Σ_{k≥2} c_k λ_k^β D^{1/2} u_k, where u_1 is associated with the stationary distribution and |λ_k| < 1 for k ≥ 2. [sent-78, score-0.465]

45 By the definition of the Markov chain, recall that the probability of making the transition from vertex v_j to v_i is the probability p_j of starting in vertex v_j, times the conditional probability of taking edge e_ij given that the particle is at vertex v_j, namely m_ij p_j. [sent-80, score-0.597]

46 The net flow of probability mass along edge e_ij from v_j to v_i is therefore the difference m_ij p_j − m_ji p_i. [sent-82, score-0.168]

47 It then follows that the net flow of probability mass from vertex v_j to v_i is given by F_ij, where F_ij is the (i,j)-element of the matrix F(p) = M diag(p) − [M diag(p)]^T. (4) [sent-83, score-0.222]

48 Therefore, the flow is caused by the eigenvectors with λ_k < 1, and hence we analyze the rate of decay of these eigenflows. [sent-95, score-0.203]
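
The flow matrix of (4), as reconstructed above, is two lines of numpy; as a sanity check, the stationary distribution produces zero net flow on every edge. A sketch with names and toy data of our own:

```python
import numpy as np

A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
d = A.sum(axis=0)
M = A / d

def net_flow(M, p):
    """F[i, j] = m_ij p_j - m_ji p_i, i.e. F = M diag(p) - (M diag(p)).T;
    F is antisymmetric, and positive entries flow from j to i."""
    S = M * p                  # S[i, j] = m_ij * p_j
    return S - S.T

pi = d / d.sum()                       # stationary distribution of M = A D^{-1}
print(np.abs(net_flow(M, pi)).max())   # ~0: the stationary mode carries no flow
```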

49 [Figure 2: (a) Eigenmode; (b) corresponding eigenflow; (c) gray value at each pixel corresponds to the maximum of the absolute sensitivities of all the weights on edges connected to that pixel (not including itself).] [sent-104, score-0.426]

50 A graph clustering problem is formed where each pixel in a test image is associated with a vertex of the graph G. [sent-106, score-0.438]

51 The edges in G are defined by the standard 8-neighbourhood of each pixel (with pixels at the edges and corners of the image only having 5 and 3 neighbours, respectively). [sent-107, score-0.426]

52 The edge weight between neighbouring vertices v_i and v_j is given by the affinity a_ij = exp(−(I_i − I_j)^2 / (2σ^2)), where I is the test image brightness and σ is a grey-level standard deviation. [sent-108, score-0.267]

53 We use a σ set from the median, over pixels, of the absolute differences of gray levels between all neighbouring pixels v_i and v_j. [sent-109, score-0.147]
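
A sketch of this graph construction for a grayscale image. We use a single global median where the text suggests a per-pixel one, and the factor rho is an assumed free parameter; the function name is ours.

```python
import numpy as np

def image_affinities(I, rho=1.5):
    """Affinity matrix for a grayscale image I: one vertex per pixel, edges on
    the 8-neighbourhood, a_ij = exp(-(I_i - I_j)**2 / (2 sigma**2)), with
    sigma set from the median absolute gray-level difference of neighbours."""
    h, w = I.shape
    idx = np.arange(h * w).reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    pairs, diffs = [], []
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:  # half the 8-neighbourhood
        y2, x2 = ys + dy, xs + dx
        ok = (y2 >= 0) & (y2 < h) & (x2 >= 0) & (x2 < w)
        pairs.append(np.stack([idx[ys[ok], xs[ok]], idx[y2[ok], x2[ok]]], 1))
        diffs.append(I[ys[ok], xs[ok]] - I[y2[ok], x2[ok]])
    pairs, diffs = np.concatenate(pairs), np.concatenate(diffs)
    sigma = rho * np.median(np.abs(diffs))
    a = np.exp(-diffs**2 / (2 * sigma**2))
    A = np.zeros((h * w, h * w))
    A[pairs[:, 0], pairs[:, 1]] = a     # fill both triangles: symmetric graph
    A[pairs[:, 1], pairs[:, 0]] = a
    return A

A_img = image_affinities(np.random.rand(8, 8))
print(A_img.shape, (A_img > 0).sum(axis=0).min())  # 64x64; corners have 3 edges
```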

54 This generative process provides an ensemble of clustering problems which we feel are representative of the structure of typical image segmentation problems. [sent-111, score-0.197]

55 This latter property ensures that there are some edges with significant weights between the two clusters in the graph associated with the foreground and background pixels. [sent-115, score-0.449]

56 In Figure 2 we plot one eigenvector of the matrix M. Notice that the displayed eigenmode is not in general piecewise constant. [sent-117, score-0.325]

57 Rather, the eigenvector is more like a vibrational mode of a non-uniform membrane (in fact, they can be modeled in precisely that way). [sent-118, score-0.146]

58 Also, for all but the stationary distribution, there is a significant net flow between neighbours, especially in regions where the magnitude of the spatial gradient of the eigenmode is larger. [sent-119, score-0.266]

59 4 Perturbation Analysis of EigenFlows. As discussed in the introduction, we seek to identify bottlenecks in the eigenflows associated with long half-lives. [sent-121, score-0.174]

60 This notion of identifying bottlenecks is similar to the well-known max-flow, min-cut theorem. [sent-122, score-0.131]

61 In particular, for a graph whose edge weights represent maximum flow capacities between pairs of vertices, instead of the current conditional transition probabilities, the bottleneck edges can be identified as precisely those edges across which the maximum flow is equal to their maximum capacity. [sent-123, score-0.681]

62 However, in the Markov framework, the flow of probability across an edge is only maximal in the extreme cases for which the initial probability of being at one of the edge’s endpoints is equal to one, and zero at the other endpoint. [sent-124, score-0.14]

63 Instead, we show that the desired bottleneck edges can be conveniently identified by considering the sensitivity of the flow’s half-life to perturbations of the edge weights (see Fig. 2). [sent-126, score-0.528]

64 Intuitively, this sensitivity arises because the flow across a bottleneck will have fewer alternative routes to take and therefore will be particularly sensitive to changes in the edge weights within the bottleneck. [sent-128, score-0.359]

65 In comparison, the flow between two vertices in a strongly coupled cluster will have many alternative routes and therefore will not be particularly sensitive to the precise weight of any single edge. [sent-129, score-0.195]

66 Suppose we have an eigenvector u_k of L, with eigenvalue λ_k. [sent-133, score-0.167]

67 In Figure 2, for a given eigenvector and its flow, we plot the maximum of absolute sensitivities of all the weights on edges connected to a pixel (not including itself). [sent-138, score-0.49]

68 Note that the sensitivities are large in the bottlenecks at the border of the foreground and background. [sent-139, score-0.318]
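
The paper derives this sensitivity in closed form (its Appendix); purely as an illustration of the quantity being computed, here is a finite-difference stand-in on a toy graph. All names, thresholds, and the example affinities are ours.

```python
import numpy as np

def mode_halflife(A, k):
    """Half-life of eigenmode k of L = D^{-1/2} A D^{-1/2}, from lam**H = 1/2."""
    Dh = np.sqrt(A.sum(axis=0))
    lams = np.sort(np.linalg.eigvalsh(A / np.outer(Dh, Dh)))[::-1]
    lam = min(abs(lams[k]), 1 - 1e-12)
    return np.log(0.5) / np.log(lam)

def dlog_halflife(A, k, i, j, delta=1e-6):
    """Finite-difference estimate of d(log H_k)/d(a_ij)."""
    A2 = A.copy()
    A2[i, j] += delta
    A2[j, i] += delta                # keep the affinities symmetric
    return (np.log(mode_halflife(A2, k)) - np.log(mode_halflife(A, k))) / delta

# Two tight pairs joined by a weak bottleneck edge (0, 2):
A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
print(dlog_halflife(A, 1, 0, 2))   # strongly negative: a bottleneck edge
print(dlog_halflife(A, 1, 0, 1))   # much smaller: an intra-cluster edge
```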

69 5 EIGENCUTS: A Basic Clustering Algorithm. We select a simple clustering algorithm to test our proposal of using the derivative of the eigenmode’s half-life for identifying bottleneck edges. [sent-140, score-0.2]

70 Set the parameters, set the scale factor to be the median value described above, and form the symmetric matrix L. [sent-146, score-0.109]

71 For each eigenvector of L with a sufficiently long half-life, compute the half-life sensitivities for each edge in the graph. [sent-152, score-0.225]

72 That is, suppress the sensitivity for edge e_ij if there is a strictly more negative value for some vertex v_k in the neighbourhood of v_i, or some v_l in the neighbourhood of v_j. [sent-156, score-0.275]

73 We select the eigenmode for which the sensitivity is most negative over all non-suppressed edges of G. [sent-163, score-0.15]

74 In step 5 we perform a non-maximal suppression on the sensitivities for the eigenvector. [sent-171, score-0.119]

75 We have observed that at strong borders the computed sensitivities can be below the threshold in a band, a few pixels thick, along the border. [sent-172, score-0.156]
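
One literal (and unoptimized) reading of this suppression rule, keeping an edge only when no strictly more negative sensitivity occurs on edges incident to a neighbour of either endpoint; the exact neighbourhood convention in the paper may differ, and all names and test values below are ours.

```python
import numpy as np

def suppress(S, A):
    """Non-maximal suppression of edge sensitivities S (more negative = more
    bottleneck-like): keep S[i, j] only if no strictly more negative value
    occurs on an edge touching a neighbour of i or of j."""
    keep = np.zeros_like(S, dtype=bool)
    nbrs = [np.nonzero(A[v] > 0)[0] for v in range(len(A))]
    for i, j in zip(*np.nonzero(np.triu(A, 1))):
        rivals = [S[k, l] for v in (i, j) for k in nbrs[v] for l in nbrs[k]]
        if S[i, j] <= min(rivals):       # no strictly more negative rival
            keep[i, j] = keep[j, i] = True
    return keep

A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
S = np.where(A > 0, -1.0, 0.0)
S[0, 2] = S[2, 0] = -20.0            # pretend the bottleneck is most sensitive
print(np.nonzero(np.triu(suppress(S, A), 1)))   # only edge (0, 2) survives
```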

76 In step 6 we wish to select one particular eigenmode to base the edge cutting on at this iteration. [sent-175, score-0.38]

77 The reason for not considering all the modes simultaneously is that we have found that the locations of the cuts can vary by a few pixels for different modes. [sent-176, score-0.131]

78 If nearby edges are cut as a result of different eigenmodes, then small isolated fragments can result in the final clustering. [sent-177, score-0.267]

79 Therefore we wish to select just one eigenmode to base cuts on at each iteration. [sent-178, score-0.259]

80 The particular eigenmode selected can, of course, vary from one iteration to the next. [sent-179, score-0.194]

81 That is, we compute the modified affinities A + ΔA, where ΔA is the change of affinities: −a_ij for any cut edge, and zero otherwise. [sent-181, score-0.117]

82 When the process does terminate, the selected succession of cuts provides a modified affinity matrix which has well separated clusters. [sent-184, score-0.11]
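
Putting the pieces together, a schematic version of the loop just described. Finite-difference sensitivities stand in for the paper's analytic ones, the non-maximal suppression and mode-selection details are omitted, and the thresholds tau and hmin are assumed values of ours, not the paper's.

```python
import numpy as np

def eig_L(A):                                   # repeated to stay self-contained
    Dh = np.sqrt(A.sum(axis=0))
    lams, U = np.linalg.eigh(A / np.outer(Dh, Dh))
    order = np.argsort(lams)[::-1]
    return lams[order], U[:, order]

def log_halflife(A, k):
    lam = min(abs(eig_L(A)[0][k]), 1 - 1e-12)
    return np.log(np.log(0.5) / np.log(lam))

def sens(A, k, i, j, delta=1e-6):
    # finite-difference stand-in for the analytic half-life sensitivity
    A2 = A.copy(); A2[i, j] += delta; A2[j, i] += delta
    return (log_halflife(A2, k) - log_halflife(A, k)) / delta

def eigencuts_sketch(A, tau=-1.0, hmin=10.0, max_iters=10):
    """Cut edges whose half-life sensitivity for some long-lived mode falls
    below tau, recompute the eigenvectors, and repeat until no edge is cut."""
    A = A.copy()
    for _ in range(max_iters):
        lams, _ = eig_L(A)
        cuts = {(i, j)
                for k in range(1, len(lams))
                if np.log(0.5) / np.log(min(abs(lams[k]), 1 - 1e-12)) > hmin
                for i, j in zip(*np.nonzero(np.triu(A, 1)))
                if sens(A, k, i, j) < tau}
        if not cuts:
            return A                    # terminated: no bottlenecks remain
        for i, j in cuts:
            A[i, j] = A[j, i] = 0.0
    return A

A = np.array([[0, 1, 0.05, 0], [1, 0, 0, 0],
              [0.05, 0, 0, 1], [0, 0, 1, 0.]])
print(np.nonzero(np.triu(eigencuts_sketch(A), 1)))  # bottleneck (0, 2) is gone
```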

83 For the final clustering result, we can use either a connected components algorithm or the K-means algorithm of [5] with K set to the number of modes having large half-lives. [sent-185, score-0.159]
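
For the first option, a self-contained connected-components pass over the cut affinity matrix (a plain depth-first search; scipy.sparse.csgraph.connected_components would serve equally well). The function name and example are ours.

```python
import numpy as np

def connected_components(A):
    """Cluster labels for a cut affinity matrix: two vertices share a label
    iff they are joined by a path of positive-affinity edges."""
    n = len(A)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = next_label
        while stack:
            v = stack.pop()
            for u in np.nonzero(A[v] > 0)[0]:
                if labels[u] == -1:
                    labels[u] = next_label
                    stack.append(u)
        next_label += 1
    return labels

# Affinities after the bottleneck edge (0, 2) has been cut:
A_cut = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                  [0, 0, 0, 1], [0, 0, 1, 0.]])
print(connected_components(A_cut))   # [0 0 1 1]
```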

84 6 Experiments. We compare the quality of EIGENCUTS with two other methods: the K-means based spectral clustering algorithm of [5] and an efficient segmentation algorithm proposed in [1] based on a pairwise region comparison function. [sent-186, score-0.234]

85 Our strategy was to select thresholds that are likely to generate a small number of stable partitions. [sent-187, score-0.106]

86 To allow for comparison with K-means, we needed to determine the number of clusters a priori. [sent-189, score-0.117]

87 We therefore set K to be the same as the number of clusters that EIGENCUTS generated. [sent-190, score-0.117]

88 A crucial observation with EIGENCUTS is that, although the number of clusters changed slightly with a change in the threshold, the regions they defined were qualitatively preserved across the thresholds and corresponded to a naive observer’s intuitive segmentation of the image. [sent-194, score-0.275]

89 The performance on the eye images is also interesting in that the largely uniform regions around the center of the eye remain as part of one cluster. [sent-196, score-0.093]

90 In comparison, both the K-means algorithm and the image segmentation algorithm of [1] (rows 3-6 in Fig. [sent-197, score-0.111]

91 7 Discussion. We have demonstrated that the common piecewise constant approximation to eigenvectors arising in spectral clustering problems limits the applicability of previous methods to situations in which the clusters are only relatively weakly coupled. [sent-199, score-0.587]

92 We have proposed a new edge cutting criterion which avoids this piecewise constant approximation. [sent-200, score-0.272]

93 Bottleneck edges between distinct clusters are identified through the observed sensitivity of an eigenflow’s half-life to changes in the edges’ affinity weights. [sent-201, score-0.355]

94 The basic algorithm we propose is computationally demanding in that the eigenvectors of the Markov matrix must be recomputed after each iteration of edge cutting. [sent-202, score-0.363]

95 However, the point of this algorithm is simply to demonstrate the partitioning that can be achieved through the computation of the sensitivity of eigenflow half-lives to changes in edge weights. [sent-203, score-0.244]

96 More efficient updates of the eigenvalue computation, taking advantage of low-rank changes in the matrix from one iteration to the next, or a multi-scale technique, are important areas for further study. [sent-204, score-0.143]

97 Pairs of rows correspond to results from applying: EIGENCUTS with the chosen parameter settings (Rows 1&2), K-means spectral clustering where K, the number of clusters, is determined by the results of EIGENCUTS (Rows 3&4), and Felzenszwalb & Huttenlocher (Rows 5&6). [sent-207, score-0.206]

98 Longuet-Higgins Feature grouping by relocalization of eigenvectors of the proximity matrix. [sent-264, score-0.162]

99 Appendix. We compute the derivative of the log of the half-life of an eigenvalue with respect to an element of the affinity matrix A. [sent-280, score-0.117]

100 So eigenvectors with half-lives smaller than a given threshold are effectively ignored. [sent-285, score-0.117]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hal', 0.486), ('ife', 0.281), ('ow', 0.269), ('eigen', 0.222), ('ccd', 0.195), ('wcd', 0.193), ('eigenmode', 0.168), ('eigenvectors', 0.162), ('af', 0.158), ('edges', 0.15), ('igen', 0.15), ('bottlenecks', 0.131), ('ows', 0.125), ('edge', 0.117), ('clusters', 0.117), ('uts', 0.114), ('nities', 0.114), ('vertex', 0.113), ('nity', 0.11), ('eigenvector', 0.108), ('piecewise', 0.099), ('cut', 0.091), ('sensitivity', 0.088), ('sensitivities', 0.087), ('clustering', 0.086), ('vertices', 0.085), ('particle', 0.083), ('graph', 0.078), ('spectral', 0.076), ('bottleneck', 0.075), ('ives', 0.075), ('foreground', 0.074), ('segmentation', 0.072), ('perturbations', 0.068), ('markov', 0.064), ('eigenvalue', 0.059), ('transition', 0.058), ('matrix', 0.058), ('chain', 0.057), ('cutting', 0.056), ('eigenflows', 0.056), ('cuts', 0.052), ('symmetric', 0.051), ('dq', 0.05), ('weakly', 0.047), ('walk', 0.046), ('coupled', 0.045), ('pixel', 0.044), ('rows', 0.044), ('identify', 0.043), ('pixels', 0.043), ('decay', 0.041), ('cluster', 0.039), ('select', 0.039), ('image', 0.039), ('partitioning', 0.039), ('stationary', 0.038), ('mode', 0.038), ('gxfp', 0.037), ('huttenlocher', 0.037), ('occluder', 0.037), ('wbe', 0.037), ('zheng', 0.037), ('neighbourhood', 0.037), ('connected', 0.037), ('stable', 0.037), ('modes', 0.036), ('wb', 0.036), ('diag', 0.034), ('absolute', 0.034), ('regions', 0.033), ('eigenmodes', 0.033), ('identi', 0.032), ('suppression', 0.032), ('relaxation', 0.032), ('ng', 0.03), ('thresholds', 0.03), ('eye', 0.03), ('vx', 0.03), ('weights', 0.03), ('qx', 0.03), ('sw', 0.028), ('shi', 0.028), ('markovian', 0.028), ('eigenvalues', 0.027), ('notice', 0.027), ('net', 0.027), ('routes', 0.026), ('terminate', 0.026), ('neighbouring', 0.026), ('neighbours', 0.026), ('fragments', 0.026), ('border', 0.026), ('iteration', 0.026), ('leading', 0.024), ('mass', 0.024), ('convenient', 0.024), ('across', 0.023), ('characterized', 0.023), ('weiss', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 100 nips-2002-Half-Lives of EigenFlows for Spectral Clustering

Author: Chakra Chennubhotla, Allan D. Jepson

Abstract: Using a Markov chain perspective of spectral clustering we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain’s behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow’s half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.

2 0.16504554 202 nips-2002-Unsupervised Color Constancy

Author: Kinh Tieu, Erik G. Miller

Abstract: In [1] we introduced a linear statistical model of joint color changes in images due to variation in lighting and certain non-geometric camera parameters. We did this by measuring the mappings of colors in one image of a scene to colors in another image of the same scene under different lighting conditions. Here we increase the flexibility of this color flow model by allowing flow coefficients to vary according to a low order polynomial over the image. This allows us to better fit smoothly varying lighting conditions as well as curved surfaces without endowing our model with too much capacity. We show results on image matching and shadow removal and detection.

3 0.10953733 136 nips-2002-Linear Combinations of Optic Flow Vectors for Estimating Self-Motion - a Real-World Test of a Neural Model

Author: Matthias O. Franz, Javaan S. Chahl

Abstract: The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable. 1

4 0.10843387 57 nips-2002-Concurrent Object Recognition and Segmentation by Graph Partitioning

Author: Stella X. Yu, Ralph Gross, Jianbo Shi

Abstract: Segmentation and recognition have long been treated as two separate processes. We propose a mechanism based on spectral graph partitioning that readily combines the two processes into one. A part-based recognition system detects object patches, supplies their partial segmentations as well as knowledge about the spatial configurations of the object. The goal of patch grouping is to find a set of patches that conform best to the object configuration, while the goal of pixel grouping is to find a set of pixels that have the best low-level feature similarity. Through pixel-patch interactions and between-patch competition encoded in the solution space, these two processes are realized in one joint optimization problem. The globally optimal partition is obtained by solving a constrained eigenvalue problem. We demonstrate that the resulting object segmentation eliminates false positives for the part detection, while overcoming occlusion and weak contours for the low-level edge detection.

5 0.093521543 27 nips-2002-An Impossibility Theorem for Clustering

Author: Jon M. Kleinberg

Abstract: Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median. 1

6 0.087735668 203 nips-2002-Using Tarjan's Red Rule for Fast Dependency Tree Construction

7 0.081577592 52 nips-2002-Cluster Kernels for Semi-Supervised Learning

8 0.070304856 122 nips-2002-Learning About Multiple Objects in Images: Factorial Learning without Factorial Search

9 0.065546028 83 nips-2002-Extracting Relevant Structures with Side Information

10 0.058700748 70 nips-2002-Distance Metric Learning with Application to Clustering with Side-Information

11 0.057190023 74 nips-2002-Dynamic Structure Super-Resolution

12 0.056794997 98 nips-2002-Going Metric: Denoising Pairwise Data

13 0.056746706 132 nips-2002-Learning to Detect Natural Image Boundaries Using Brightness and Texture

14 0.054138783 53 nips-2002-Clustering with the Fisher Score

15 0.053887565 169 nips-2002-Real-Time Particle Filters

16 0.053716827 51 nips-2002-Classifying Patterns of Visual Motion - a Neuromorphic Approach

17 0.051592559 90 nips-2002-Feature Selection in Mixture-Based Clustering

18 0.051453628 188 nips-2002-Stability-Based Model Selection

19 0.049028337 99 nips-2002-Graph-Driven Feature Extraction From Microarray Data Using Diffusion Kernels and Kernel CCA

20 0.047602233 10 nips-2002-A Model for Learning Variance Components of Natural Images


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.152), (1, -0.018), (2, -0.019), (3, 0.108), (4, -0.068), (5, 0.026), (6, 0.07), (7, -0.059), (8, -0.082), (9, 0.081), (10, 0.053), (11, 0.032), (12, -0.041), (13, 0.006), (14, -0.064), (15, -0.092), (16, -0.073), (17, -0.056), (18, -0.03), (19, -0.04), (20, -0.012), (21, 0.146), (22, 0.028), (23, 0.118), (24, 0.055), (25, -0.018), (26, -0.057), (27, 0.056), (28, -0.042), (29, -0.023), (30, 0.092), (31, -0.14), (32, 0.073), (33, 0.127), (34, 0.003), (35, -0.091), (36, 0.136), (37, -0.023), (38, -0.043), (39, -0.104), (40, -0.058), (41, 0.024), (42, -0.03), (43, -0.206), (44, -0.089), (45, 0.047), (46, 0.148), (47, -0.197), (48, 0.092), (49, -0.059)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95716172 100 nips-2002-Half-Lives of EigenFlows for Spectral Clustering

Author: Chakra Chennubhotla, Allan D. Jepson

Abstract: Using a Markov chain perspective of spectral clustering we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain’s behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow’s half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.

2 0.52866203 136 nips-2002-Linear Combinations of Optic Flow Vectors for Estimating Self-Motion - a Real-World Test of a Neural Model

Author: Matthias O. Franz, Javaan S. Chahl

Abstract: The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable. 1

3 0.45247394 27 nips-2002-An Impossibility Theorem for Clustering

Author: Jon M. Kleinberg

Abstract: Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median. 1

4 0.45164075 57 nips-2002-Concurrent Object Recognition and Segmentation by Graph Partitioning

Author: Stella X. Yu, Ralph Gross, Jianbo Shi

Abstract: Segmentation and recognition have long been treated as two separate processes. We propose a mechanism based on spectral graph partitioning that readily combines the two processes into one. A part-based recognition system detects object patches, supplies their partial segmentations as well as knowledge about the spatial configurations of the object. The goal of patch grouping is to find a set of patches that conform best to the object configuration, while the goal of pixel grouping is to find a set of pixels that have the best low-level feature similarity. Through pixel-patch interactions and between-patch competition encoded in the solution space, these two processes are realized in one joint optimization problem. The globally optimal partition is obtained by solving a constrained eigenvalue problem. We demonstrate that the resulting object segmentation eliminates false positives for the part detection, while overcoming occlusion and weak contours for the low-level edge detection.

5 0.43348339 172 nips-2002-Recovering Articulated Model Topology from Observed Rigid Motion

Author: Leonid Taycher, John W. Fisher III, Trevor Darrell

Abstract: Accurate representation of articulated motion is a challenging problem for machine perception. Several successful tracking algorithms have been developed that model human body as an articulated tree. We propose a learning-based method for creating such articulated models from observations of multiple rigid motions. This paper is concerned with recovering topology of the articulated model, when the rigid motion of constituent segments is known. Our approach is based on finding the Maximum Likelihood tree shaped factorization of the joint probability density function (PDF) of rigid segment motions. The topology of graphical model formed from this factorization corresponds to topology of the underlying articulated body. We demonstrate the performance of our algorithm on both synthetic and real motion capture data.

6 0.3744919 202 nips-2002-Unsupervised Color Constancy

7 0.36802253 203 nips-2002-Using Tarjan's Red Rule for Fast Dependency Tree Construction

8 0.32596457 70 nips-2002-Distance Metric Learning with Application to Clustering with Side-Information

9 0.32271528 52 nips-2002-Cluster Kernels for Semi-Supervised Learning

10 0.31205818 84 nips-2002-Fast Exact Inference with a Factored Model for Natural Language Parsing

11 0.30829632 87 nips-2002-Fast Transformation-Invariant Factor Analysis

12 0.29493141 83 nips-2002-Extracting Relevant Structures with Side Information

13 0.28884715 132 nips-2002-Learning to Detect Natural Image Boundaries Using Brightness and Texture

14 0.27967209 35 nips-2002-Automatic Acquisition and Efficient Representation of Syntactic Structures

15 0.2638756 169 nips-2002-Real-Time Particle Filters

16 0.25915849 138 nips-2002-Manifold Parzen Windows

17 0.25775191 188 nips-2002-Stability-Based Model Selection

18 0.25491321 197 nips-2002-The Stability of Kernel Principal Components Analysis and its Relation to the Process Eigenspectrum

19 0.24920483 98 nips-2002-Going Metric: Denoising Pairwise Data

20 0.24629287 168 nips-2002-Real-Time Monitoring of Complex Industrial Processes with Particle Filters


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(11, 0.044), (14, 0.06), (40, 0.277), (42, 0.081), (54, 0.091), (55, 0.033), (67, 0.026), (68, 0.025), (74, 0.131), (87, 0.011), (92, 0.025), (98, 0.064)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.87171042 1 nips-2002-"Name That Song!" A Probabilistic Approach to Querying on Music and Text

Author: Eric Brochu, Nando de Freitas

Abstract: We present a novel, flexible statistical approach for modelling music and text jointly. The approach is based on multi-modal mixture models and maximum a posteriori estimation using EM. The learned models can be used to browse databases with documents containing music and text, to search for music using queries consisting of music and text (lyrics and other contextual information), to annotate text documents with music, and to automatically recommend or identify similar songs.

same-paper 2 0.80534738 100 nips-2002-Half-Lives of EigenFlows for Spectral Clustering

Author: Chakra Chennubhotla, Allan D. Jepson

Abstract: Using a Markov chain perspective of spectral clustering we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain’s behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue, or equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow’s half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.

3 0.56520623 83 nips-2002-Extracting Relevant Structures with Side Information

Author: Gal Chechik, Naftali Tishby

Abstract: The problem of extracting the relevant aspects of data, in face of multiple conflicting structures, is inherent to modeling of complex data. Extracting structure in one random variable that is relevant for another variable has been principally addressed recently via the information bottleneck method [15]. However, such auxiliary variables often contain more information than is actually required due to structures that are irrelevant for the task. In many other cases it is in fact easier to specify what is irrelevant than what is, for the task at hand. Identifying the relevant structures, however, can thus be considerably improved by also minimizing the information about another, irrelevant, variable. In this paper we give a general formulation of this problem and derive its formal, as well as algorithmic, solution. Its operation is demonstrated in a synthetic example and in two real world problems in the context of text categorization and face images. While the original information bottleneck problem is related to rate distortion theory, with the distortion measure replaced by the relevant information, extracting relevant features while removing irrelevant ones is related to rate distortion with side information.

4 0.56013572 150 nips-2002-Multiple Cause Vector Quantization

Author: David A. Ross, Richard S. Zemel

Abstract: We propose a model that can learn parts-based representations of highdimensional data. Our key assumption is that the dimensions of the data can be separated into several disjoint subsets, or factors, which take on values independently of each other. We assume each factor has a small number of discrete states, and model it using a vector quantizer. The selected states of each factor represent the multiple causes of the input. Given a set of training examples, our model learns the association of data dimensions with factors, as well as the states of each VQ. Inference and learning are carried out efficiently via variational algorithms. We present applications of this model to problems in image decomposition, collaborative filtering, and text classification.

5 0.54394507 132 nips-2002-Learning to Detect Natural Image Boundaries Using Brightness and Texture

Author: David R. Martin, Charless C. Fowlkes, Jitendra Malik

Abstract: The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, a classifier is trained using human labeled images as ground truth. We present precision-recall curves showing that the resulting detector outperforms existing approaches.

6 0.54295331 175 nips-2002-Reinforcement Learning to Play an Optimal Nash Equilibrium in Team Markov Games

7 0.54254174 96 nips-2002-Generalized² Linear² Models

8 0.53974271 162 nips-2002-Parametric Mixture Models for Multi-Labeled Text

9 0.53776509 163 nips-2002-Prediction and Semantic Association

10 0.53452748 74 nips-2002-Dynamic Structure Super-Resolution

11 0.53439784 39 nips-2002-Bayesian Image Super-Resolution

12 0.5343383 52 nips-2002-Cluster Kernels for Semi-Supervised Learning

13 0.53366995 2 nips-2002-A Bilinear Model for Sparse Coding

14 0.53317684 152 nips-2002-Nash Propagation for Loopy Graphical Games

15 0.53125668 122 nips-2002-Learning About Multiple Objects in Images: Factorial Learning without Factorial Search

16 0.52887839 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions

17 0.52625495 53 nips-2002-Clustering with the Fisher Score

18 0.5260846 124 nips-2002-Learning Graphical Models with Mercer Kernels

19 0.52495825 169 nips-2002-Real-Time Particle Filters

20 0.52220821 3 nips-2002-A Convergent Form of Approximate Policy Iteration