nips nips2006 nips2006-160 knowledge-graph by maker-knowledge-mining

160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints


Source: pdf

Author: Graham Mcneill, Sethu Vijayakumar

Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding ‘perceptually valid’ part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. [sent-6, score-0.192]

2 We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. [sent-7, score-0.575]

3 Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. [sent-9, score-0.119]

4 Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. [sent-10, score-0.409]

5 Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis. [sent-11, score-0.25]

6 Over the last decade, numerous shape matching algorithms have been proposed that perform well on benchmark shape retrieval tests. [sent-13, score-0.437]

7 However, many of these techniques share the same limitations: Firstly, they operate on contiguous shape boundaries (i.e. [sent-14, score-0.181]

8 the ordering of the boundary points matters) and assume that every point on one boundary has a counterpart on the boundary it is being matched to (cf. [sent-16, score-0.198]

9 Finally, they struggle to handle shapes that display significant part-based variation. [sent-21, score-0.217]

10 The first two limitations mean that many algorithms are unsuitable for matching shapes extracted from real images; the latter is important since many common objects (natural and man-made) display part-based variation. [sent-22, score-0.271]

11 The methods described in [2, 3, 4] can handle outliers, occlusions and clutter, but are not designed to handle shapes whose parts are independently transformed. [sent-26, score-0.381]

12 In this paper, we introduce a probabilistic model that retains the desirable properties of these techniques but handles parts explicitly by learning the most likely part structure and correspondence simultaneously. [sent-27, score-0.428]

13 In this framework, a part is defined as a set of points that undergo a common transformation. [sent-28, score-0.136]

14 Learning these variation-based parts from scratch is an underconstrained problem. [sent-29, score-0.196]

15 Firstly, the distributions of our hierarchical mixture model are chosen so that the learnt parts are spatially localized. [sent-31, score-0.394]

16 Secondly, ideas from semi-supervised learning [5] are used to encourage a perceptually meaningful part decomposition. [sent-32, score-0.146]

17 Figure 1 (panel label: localized dissimilarity): Examples of probabilistic point matching (PPM) using the technique described in [4]. [sent-43, score-0.207]

18 In each case, the initial alignment and the final match are shown. [sent-44, score-0.199]

19 Natural Part Decomposition (NPD): Most shapes have a natural part decomposition (NPD) (Fig. 2). [sent-47, score-0.253]

20 We note that in tasks such as object recognition and CBIR, the query image is frequently a template shape (e.g. [sent-51, score-0.283]

21 Throughout this paper, it is assumed that we have obtained a sensible NPD for the query shape only – it is not reasonable to assume that an NPD can be computed for each database shape/image. [sent-55, score-0.181]

22 Variation-based Part Decomposition (VPD): A different notion of parts has been used in computer vision [7], where a part is defined as a set of pixels that undergo the same transformations across images. [sent-56, score-0.349]

23 We refer to this type of part decomposition (PD) as a variation-based part decomposition (VPD). [sent-57, score-0.188]

24 Given two shapes (point sets), PBPM matches them by applying a different transformation to each variation-based part of the generating shape. [sent-60, score-0.163]

25 These variation-based parts are learnt during matching, where the known NPD of the data shape is used to bias the algorithm towards choosing a ‘perceptually valid’ VPD. [sent-61, score-0.492]

26 This is achieved using the following equivalence constraint – Constraint 1 (C1): Points that belong to the same natural part should belong to the same variation-based part. [sent-62, score-0.221]

27 As described in Sec. 3, this influences the learnt VPD by changing the generative model from one that generates individual data points to one that generates natural parts (subsets of data points). [sent-64, score-0.397]

28 To further increase the perceptual validity of the learnt VPD, we assume that variation-based parts are composed of spatially localized points of the generating shape. [sent-65, score-0.444]

29 PBPM aims to find the correct correspondence at the level of individual points, i.e. [sent-66, score-0.113]

30 each point of the generating shape should be mapped to the correct position on the data shape despite the lack of an exact pointwise correspondence (e.g. [sent-68, score-0.507]

31 Soft correspondence techniques that achieve this using a single nonlinear transformation [2, 3] perform well on some challenging problems. [sent-72, score-0.146]

32 However, the smoothness constraints used to control the nonlinearity of the transformation will prevent these techniques from selecting the discontinuous transformations associated with part-based movements. [sent-73, score-0.128]

33 PBPM learns an independent linear transformation for each part and hence can find the correct global match. [sent-74, score-0.112]

34 In relation to the point matching literature, PBPM is motivated by the success of the techniques described in [8, 2, 3, 4] on non-part-based problems. [sent-75, score-0.149]

35 It is similar to [8] in that we use ‘structural information’ about the point sets to constrain the matching problem. [sent-78, score-0.131]

36 In addition to learning multiple parts and transformations, our work differs in the type of structural information used (the NPD rather than the Delaunay triangulation) and the way in which this information is incorporated. [sent-79, score-0.196]

37 With respect to the shape-matching literature, PBPM can be seen as a novel correspondence technique for use with established NPD algorithms. [sent-80, score-0.121]

38 Figure 2: The natural part decomposition (NPD) (b-d) for different representations of a shape (a). [sent-86, score-0.281]

39 Siddiqi and Kimia show that the parts used in their NPD algorithm [6] correspond to specific types of shocks when shock graph representations are used. [sent-88, score-0.228]

40 The points need not belong to the shape boundary and the ordering of the points is irrelevant. [sent-93, score-0.334]

41 p(yn|v) = Σ_{m=1}^{M} p(yn|m, v) p(m|v), v = 1, . . . , V, (2) with y|(m, v) ∼ N(Tv xm, σ²I). [sent-108, score-0.121]

42 (3) Here, Tv is the transformation used to match points of part v on X to points of part v on Y. [sent-109, score-0.318]

43 Finally, we define p(m|v) in such a way that the variation-based parts v are forced to be spatially coherent: p(m|v) = exp{−(xm − µv)^T Σv^{−1}(xm − µv)/2} / Σ_{m′=1}^{M} exp{−(xm′ − µv)^T Σv^{−1}(xm′ − µv)/2}, (4) where µv ∈ R² is a mean vector and Σv ∈ R^{2×2} is a covariance matrix. [sent-110, score-0.222]
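A minimal NumPy sketch of the densities in eqs. (2)-(4) may make this concrete; the helper names (log_gaussian, p_m_given_v, p_y_given_v) and the array conventions are illustrative assumptions, not the authors' code:

```python
import numpy as np

def log_gaussian(y, mean, var):
    # Log-density of the isotropic 2-D Gaussian N(mean, var*I) of eq. (3).
    d = y - mean
    return -d @ d / (2.0 * var) - np.log(2.0 * np.pi * var)

def p_m_given_v(X, mu_v, Sigma_v):
    # Spatially coherent membership p(m|v) of eq. (4): a softmax of
    # Gaussian weights evaluated at the generating points x_m.
    diff = X - mu_v                                  # (M, 2)
    q = np.einsum('mi,ij,mj->m', diff, np.linalg.inv(Sigma_v), diff)
    logits = -0.5 * q
    logits -= logits.max()                           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def p_y_given_v(y, X, T_v, mu_v, Sigma_v, var):
    # Mixture likelihood p(y|v) = sum_m p(y|m, v) p(m|v) of eq. (2);
    # T_v is a callable implementing the part transformation.
    pm = p_m_given_v(X, mu_v, Sigma_v)
    log_pym = np.array([log_gaussian(y, T_v(x), var) for x in X])
    return float(pm @ np.exp(log_pym))
```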

44 We identify each index m ∈ {1, . . . , M} with the point xm that it indexes and assume that the xm follow a bivariate Gaussian distribution. [sent-114, score-0.283]

45 This assumption means that the xm themselves are essentially generated by a GMM with V components. [sent-122, score-0.121]

46 However, this GMM is embedded in the larger model and maximizing the data likelihood will balance this GMM’s desire for coherent parts against the need for the parts and transformations to explain the actual data (the yn ). [sent-123, score-0.705]

47 3 Parameter Estimation: With respect to the model defined in the previous section, C1 states that all yn that belong to the same subset Yl were generated by the same mixture component v. [sent-126, score-0.322]

48 We follow the approach of Shental et al. [5] for incorporating equivalence constraints between data points in mixture models. [sent-129, score-0.162]

49 Assuming that subsets and points within subsets are sampled i.i.d. [sent-132, score-0.136]
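Under C1 and this i.i.d. assumption, a whole natural part Yl is assigned to a component, with posterior p(v|Yl) ∝ πv Π_{yn∈Yl} p(yn|v). A log-space sketch of that computation (the array layout, including a v=0 background column, is an assumption for illustration):

```python
import numpy as np

def subset_posterior(log_lik, log_pi):
    # p(v | Y_l) for one natural part Y_l under constraint C1.
    # log_lik : (N_l, V+1) log p(y_n | v) for the points in Y_l
    #           (column 0 is the uniform background component v = 0).
    # log_pi  : (V+1,) log mixture weights log pi_v.
    s = log_pi + log_lik.sum(axis=0)   # log pi_v + sum_n log p(y_n | v)
    s -= s.max()                       # stabilize before exponentiating
    p = np.exp(s)
    return p / p.sum()
```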

50 E = Σ_{v=0}^{V} Σ_{l=1}^{L} p(v|Yl) log πv + Σ_{v=0}^{V} Σ_{l=1}^{L} Σ_{yn∈Yl} p(v|Yl) log p(yn|v). (5) Note that substituting eq. [sent-136, score-0.216]

51 (2) and rearranging slightly, we have E = Σ_{v=0}^{V} Σ_{l=1}^{L} p(v|Yl) log πv + Σ_{l=1}^{L} p(v=0|Yl) log{u^{|Yl|}} + Σ_{v=1}^{V} Σ_{l=1}^{L} Σ_{yn∈Yl} p(v|Yl) log( Σ_{m=1}^{M} p(yn|m, v) p(m|v) ), (6) where u is the constant associated with the uniform distribution p(yn|v=0). [sent-139, score-0.243]

52 Here, we define Tv x ≡ sv Γv x + cv , where sv is a scale parameter, cv ∈ R2 is a translation vector and Γv is a 2D rotation matrix. [sent-158, score-0.136]

53 Eq. (12) becomes a weighted Procrustes matching problem between two point sets, each of size N × M – the extent to which xm corresponds to yn in the context of part v is given by p(v|Yl,n) p(m|yn, v). [sent-160, score-0.555]
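For such a weighted Procrustes problem with the similarity transform Tv x = sv Γv x + cv, the standard weighted least-squares (Kabsch/Umeyama-style) closed form looks as below; this is a plausible sketch of the M-step under those assumptions, not the paper's exact implementation:

```python
import numpy as np

def weighted_procrustes(X, Y, W):
    # Fit y ≈ s * (Gamma @ x) + c, minimizing sum_{n,m} W[n,m] * ||y_n - T(x_m)||^2.
    # X : (M, 2) generating points; Y : (N, 2) data points;
    # W : (N, M) weights, e.g. p(v|Y_l(n)) * p(m|y_n, v) for part v.
    wx, wy, w = W.sum(axis=0), W.sum(axis=1), W.sum()
    x_bar, y_bar = (wx @ X) / w, (wy @ Y) / w        # weighted centroids
    Xc, Yc = X - x_bar, Y - y_bar
    C = Yc.T @ W @ Xc                                # 2x2 weighted cross-covariance
    U, _, Vt = np.linalg.svd(C)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    Gamma = U @ D @ Vt                               # proper rotation (det = +1)
    s = np.trace(Gamma @ C.T) / (wx @ (Xc * Xc).sum(axis=1))
    c = y_bar - s * (Gamma @ x_bar)
    return s, Gamma, c
```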

54 The updates in eqs. (10-12) are similar to p(v|yn) p(m|yn, v) = p(m, v|yn), the responsibility of the hidden variables (m, v) for the observed data, yn. [sent-163, score-0.256]
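The per-point factor p(m|yn, v) can be computed in log space roughly as follows (array shapes and names are assumptions):

```python
import numpy as np

def point_responsibilities(log_lik, log_pm):
    # p(m | y_n, v) ∝ p(y_n | m, v) p(m | v).
    # log_lik : (N, M, V) log p(y_n | m, v); log_pm : (M, V) from eq. (4).
    # Multiplying the result by the subset-level p(v | Y_l(n)) gives the
    # weights used in the M-step, and maximizing over v gives the hard
    # part assignments used for visualization.
    joint = log_lik + log_pm[None, :, :]
    joint -= joint.max(axis=1, keepdims=True)   # stabilize over m
    r = np.exp(joint)
    return r / r.sum(axis=1, keepdims=True)
```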

55 In Sec. 4, we initialize πv, µv and Σv by fitting a standard GMM to the xm. [sent-168, score-0.121]
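With scikit-learn, that initialization might look like the following (a hypothetical helper, not the authors' implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def init_parts(X, V, seed=0):
    # Initialize pi_v, mu_v, Sigma_v by fitting a V-component GMM to the
    # generating points x_m (the GMM over X embedded in the larger model).
    gmm = GaussianMixture(n_components=V, covariance_type='full',
                          random_state=seed).fit(X)
    return gmm.weights_, gmm.means_, gmm.covariances_
```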

56 In Sec. 5, we describe a sequential algorithm that can be used to select the number of parts V as well as provide initial estimates for all parameters. [sent-170, score-0.333]

57 Figure 3: An example of applying PBPM with V=3 (panels: X with initial Gaussians for p(m|v); Y; NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-171, score-0.364]

58 Figure 4: Results for the problem in Fig. 3 (panels: PPM; PBPM with 2, 4, 5 and 6 parts; VPD of X; VPD of Y; final match). [sent-172, score-0.856]

59 As discussed in Secs. 1 and 2, unsupervised matching of shapes with moving parts is a relatively unexplored area – particularly for shapes not composed of single closed boundaries. [sent-175, score-0.577]

60 Here, we provide illustrative examples which demonstrate the various properties of PBPM and then consider more challenging problems involving shapes extracted from real images. [sent-177, score-0.151]

61 The number of parts, V, is fixed prior to matching in these examples; a technique for estimating V is described in Sec. 5. [sent-178, score-0.136]

62 To visualize the matches found by PBPM, each point yn is assigned to a part v using maxv p(v|yn ). [sent-180, score-0.36]

63 Those xm not assigned to any parts are removed from the figure. [sent-186, score-0.381]

64 The means and the ellipses of constant probability density associated with the distributions N (µv , Σv ) are plotted on the original shape X. [sent-187, score-0.163]

65 We also assign the xm to natural parts using the known natural part label of the yn that they are assigned to. [sent-188, score-0.685]

66 Fig. 3 shows an example of matching two human body shapes using PBPM with V=3. [sent-190, score-0.246]

67 The learnt VPD is intuitive and the match is better than that found using PPM (Fig. 4). [sent-191, score-0.205]

68 When V =5, one of the parts is effectively repeated, suggesting that four parts is sufficient to cover all the interesting variation. [sent-196, score-0.392]

69 However, when V =6 all parts are used and the VPD looks very similar to the NPD – only the lower leg and foot on each side are grouped together. [sent-197, score-0.244]

70 In Fig. 5, there are two genuine variation-based parts and X contains additional features. [sent-199, score-0.196]

71 PBPM effectively ignores the extra points of X and finds the correct parts and matches. [sent-200, score-0.257]

72 We find that deletion from the generating shape tends to be very precise (e.g. Fig. [sent-203, score-0.191]

73 5), whereas PBPM is less inclined to delete points from the data shape when it involves breaking up natural parts (e.g. [sent-206, score-0.427]

74 Figure 5: Some features of X are not present on Y; the main building of X is smaller and the tower is more central (panels: X with initial Gaussians for p(m|v); Y; NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-210, score-0.364]

75 Figure 6: The left legs do not match and most of the right leg of X is missing (panels: X with initial Gaussians for p(m|v); Y; NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-211, score-0.484]

76 This is largely due to the equivalence constraints trying to keep natural parts intact, though the value of the uniform density, u, and the way in which points are assigned to parts are also important. [sent-212, score-0.607]

77 In Figs. 7 and 8, a template shape is matched to the edge detector output from two real images. [sent-214, score-0.317]

78 We have not focused on optimizing the parameters of the edge detector since the aim is to demonstrate the ability of PBPM to handle suboptimal shape representations. [sent-215, score-0.242]

79 The correct correspondence and PDs are estimated in all cases, though the results are less precise for these difficult problems. [sent-216, score-0.113]

80 Note that the left shoulder is not assigned to the same variation-based part as the other points of the torso, i.e. [sent-220, score-0.148]

81 Figs. 4 (with V=5) and 8 indicate that it may be possible to start with more parts than are required and either allow extraneous parts to go unused or perhaps prune parts during matching. [sent-225, score-0.588]

82 Finally, one could attempt to learn the parts in a sequential fashion. [sent-229, score-0.279]

83 5 Sequential Algorithm for Initialization: When part variation is present, one would expect PBPM with V=1 to find the most significant part and allow the background to explain the remaining parts. [sent-231, score-0.163]

84 This suggests a sequential approach whereby a single part is learnt and removed from further consideration at each stage. [sent-232, score-0.302]

85 Specifically, assume that the first part (v=1) has been learnt and now learn the second part using the objective [sent-235, score-0.313]

86 Figure 7: Matching a template shape to an object in a cluttered scene (X = template shape, Y = edge detector output; panels: NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-236, score-0.543]

87 Figure 8: Matching a template shape to a real image (panels: NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-238, score-0.52]

88 J2 = Σ_{l=1}^{L} . . . (13) Here, π1 is known and z¹l ≡ u^{|Yl|}(1 − π1) / [ p(Yl|v=1) π1 + u^{|Yl|}(1 − π1) ] (14) is the responsibility of the background component for the subset Yl after learning the first part – the superscript of z indicates the number of components that have already been learnt. [sent-240, score-0.183]
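Eq. (14) is safest to evaluate in log space; a sketch (argument names are assumptions):

```python
import numpy as np
from scipy.special import logsumexp

def background_responsibility(log_p_Yl_v1, n_l, log_u, pi_1):
    # z_l^1 of eq. (14): responsibility of the uniform background for
    # subset Y_l after the first part has been learnt.
    # log_p_Yl_v1 : log p(Y_l | v=1) = sum over y_n in Y_l of log p(y_n | v=1)
    # n_l : |Y_l|; log_u : log of the uniform density u; pi_1 : weight of part 1.
    log_bg = n_l * log_u + np.log(1.0 - pi_1)     # log u^{|Y_l|} (1 - pi_1)
    log_fg = log_p_Yl_v1 + np.log(pi_1)           # log p(Y_l | v=1) pi_1
    return float(np.exp(log_bg - logsumexp([log_fg, log_bg])))
```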

89 Note that we use the responsibilities for the subsets Yl rather than the individual yn [7], in line with the assumption that complete subsets belong to the same part. [sent-243, score-0.387]

90 Having learnt the second part, additional components v = 3, 4, . [sent-252, score-0.133]

91 are learnt in the same way except for minor adjustments to eqs. [sent-255, score-0.133]

92 The sequential algorithm terminates when the uniform component is not significantly responsible for any data or the most recently learnt component is not significantly responsible for any data. [sent-257, score-0.343]
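A high-level sketch of this sequential loop and stopping rule; fit_single_part and the threshold tau are assumed placeholders for the single-part fit described above, not routines from the paper:

```python
import numpy as np

def learn_parts_sequentially(X, subsets, fit_single_part, tau=0.05):
    # Add parts one at a time until neither the background nor the most
    # recently learnt component is significantly responsible for any data.
    # fit_single_part(X, subsets, z) -> (params, z_new, r_new) is assumed
    # to fit one part against the uniform background, weighting each subset
    # by its current background responsibility z, and to return updated
    # background responsibilities z_new and the new part's responsibilities
    # r_new, one per subset.
    parts, z = [], np.ones(len(subsets))   # initially nothing is explained
    while True:
        params, z, r = fit_single_part(X, subsets, z)
        if r.max() < tau:                  # new part explains no data: stop
            break
        parts.append(params)
        if z.max() < tau:                  # background explains no data: stop
            break
    return parts
```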

93 As discussed in [7], the sequential algorithm is expected to have fewer problems with local minima since the objective function will be smoother (a single component competes against a uniform component at each stage) and the search space smaller (fewer parameters are learnt at each stage). [sent-258, score-0.305]

94 Preliminary experiments suggest that the sequential algorithm is capable of solving the model selection problem (choosing the number of parts) and providing good initial parameter values for the full model described in Sec. [sent-259, score-0.137]

95 Results are shown in Figs. 9 and 10 – the initial transformations for each part are not shown. [sent-262, score-0.178]

96 We are currently investigating how the model can be made more robust to this value and also how the used xm should be subtracted (in a probabilistic sense) at each step. [sent-264, score-0.165]

97 Figure 9: Results for PBPM; V and initial parameters were found using the sequential approach (panels: X with initial Gaussians for p(m|v); Y; NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-265, score-0.501]

98 Figure 10: Results for PBPM; V and initial parameters were found using the sequential approach (panels: X with initial Gaussians for p(m|v); Y; NPD of Y; initial alignment; VPD of X with final Gaussians for p(m|v); VPD of Y; NPD of X; transformed X; final match). [sent-266, score-0.501]

99 In this paper, we have presented a probabilistic correspondence algorithm that handles part-based variation by learning the parts and correspondence simultaneously. [sent-271, score-0.443]

100 Shape matching and recognition using generative models and informative features. [sent-292, score-0.127]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('npd', 0.507), ('vpd', 0.399), ('pbpm', 0.38), ('yl', 0.265), ('yn', 0.216), ('parts', 0.196), ('final', 0.165), ('shape', 0.163), ('shapes', 0.135), ('learnt', 0.133), ('xm', 0.121), ('matching', 0.111), ('correspondence', 0.096), ('tv', 0.095), ('ppm', 0.091), ('gaussians', 0.084), ('sequential', 0.083), ('alignment', 0.073), ('match', 0.072), ('occlusion', 0.063), ('part', 0.063), ('equivalence', 0.062), ('transformations', 0.061), ('initial', 0.054), ('perceptually', 0.054), ('leg', 0.048), ('template', 0.047), ('subsets', 0.046), ('gmm', 0.046), ('points', 0.044), ('responsibilities', 0.043), ('transformed', 0.041), ('assigned', 0.041), ('responsibility', 0.04), ('clutter', 0.04), ('mixture', 0.039), ('belong', 0.036), ('cbir', 0.036), ('mcneill', 0.036), ('npds', 0.036), ('procrustes', 0.036), ('siddiqi', 0.036), ('edinburgh', 0.036), ('detector', 0.036), ('em', 0.035), ('sv', 0.034), ('cv', 0.034), ('transformation', 0.032), ('shock', 0.032), ('struggle', 0.032), ('decomposition', 0.031), ('component', 0.031), ('boundary', 0.03), ('soft', 0.03), ('ideas', 0.029), ('zl', 0.029), ('shental', 0.029), ('undergo', 0.029), ('ling', 0.029), ('generating', 0.028), ('valid', 0.028), ('probabilistic', 0.028), ('handles', 0.027), ('uniform', 0.027), ('matched', 0.027), ('output', 0.026), ('spatially', 0.026), ('handle', 0.025), ('technique', 0.025), ('display', 0.025), ('natural', 0.024), ('object', 0.023), ('dissimilarity', 0.023), ('removed', 0.023), ('firstly', 0.022), ('indexes', 0.021), ('matches', 0.02), ('background', 0.02), ('point', 0.02), ('coherent', 0.019), ('pami', 0.019), ('updates', 0.019), ('responsible', 0.019), ('techniques', 0.018), ('explained', 0.018), ('query', 0.018), ('edge', 0.018), ('secondly', 0.018), ('assignments', 0.017), ('constraints', 0.017), ('explain', 0.017), ('ordering', 0.017), ('correct', 0.017), ('localized', 0.017), ('examples', 0.016), ('recognition', 0.016), ('currently', 0.016), ('image', 0.016), ('noam', 0.016), ('jacobs', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

Author: Graham Mcneill, Sethu Vijayakumar

Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding ‘perceptually valid’ part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

2 0.1012048 185 nips-2006-Subordinate class recognition using relational object models

Author: Aharon B. Hillel, Daphna Weinshall

Abstract: We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like ’jagged wheels’). Accordingly, we suggest a two-stage algorithm: First, a relational part based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model’s part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two stage algorithm where classification is based on a model of the sub-ordinate class. 1

3 0.083440632 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

4 0.079465367 186 nips-2006-Support Vector Machines on a Budget

Author: Ofer Dekel, Yoram Singer

Abstract: The standard Support Vector Machine formulation does not provide its user with the ability to explicitly control the number of support vectors used to define the generated classifier. We present a modified version of SVM that allows the user to set a budget parameter B and focuses on minimizing the loss attained by the B worst-classified examples while ignoring the remaining examples. This idea can be used to derive sparse versions of both L1-SVM and L2-SVM. Technically, we obtain these new SVM variants by replacing the 1-norm in the standard SVM formulation with various interpolation-norms. We also adapt the SMO optimization algorithm to our setting and report on some preliminary experimental results. 1

5 0.071244031 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel. 1

6 0.067574263 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition

7 0.065521412 39 nips-2006-Balanced Graph Matching

8 0.058648583 156 nips-2006-Ordinal Regression by Extended Binary Classification

9 0.054429322 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

10 0.052875381 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

11 0.04746937 122 nips-2006-Learning to parse images of articulated bodies

12 0.046733651 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

13 0.044853948 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

14 0.042841587 66 nips-2006-Detecting Humans via Their Pose

15 0.042090073 150 nips-2006-On Transductive Regression

16 0.041713759 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

17 0.037711043 113 nips-2006-Learning Structural Equation Models for fMRI

18 0.037618656 52 nips-2006-Clustering appearance and shape by learning jigsaws

19 0.03742199 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

20 0.035300177 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.123), (1, 0.023), (2, 0.065), (3, -0.025), (4, 0.037), (5, -0.0), (6, -0.057), (7, -0.058), (8, 0.006), (9, -0.026), (10, -0.005), (11, 0.001), (12, 0.009), (13, 0.023), (14, 0.114), (15, -0.046), (16, -0.012), (17, -0.085), (18, 0.019), (19, -0.024), (20, 0.028), (21, 0.041), (22, 0.001), (23, 0.046), (24, 0.022), (25, -0.046), (26, 0.047), (27, 0.067), (28, 0.117), (29, -0.012), (30, 0.065), (31, 0.049), (32, 0.023), (33, -0.052), (34, -0.075), (35, -0.062), (36, 0.042), (37, 0.018), (38, -0.104), (39, 0.126), (40, -0.092), (41, 0.177), (42, -0.069), (43, -0.097), (44, -0.055), (45, 0.065), (46, 0.069), (47, -0.069), (48, -0.068), (49, -0.231)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94931072 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

Author: Graham Mcneill, Sethu Vijayakumar

Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding ‘perceptually valid’ part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

2 0.50232702 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

3 0.48943979 185 nips-2006-Subordinate class recognition using relational object models

Author: Aharon B. Hillel, Daphna Weinshall

Abstract: We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like ’jagged wheels’). Accordingly, we suggest a two-stage algorithm: First, a relational part based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model’s part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two stage algorithm where classification is based on a model of the sub-ordinate class. 1

4 0.46109965 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel. 1

5 0.45331031 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition

Author: Fei Sha, Lawrence K. Saul

Abstract: We study the problem of parameter estimation in continuous density hidden Markov models (CD-HMMs) for automatic speech recognition (ASR). As in support vector machines, we propose a learning algorithm based on the goal of margin maximization. Unlike earlier work on max-margin Markov networks, our approach is specifically geared to the modeling of real-valued observations (such as acoustic feature vectors) using Gaussian mixture models. Unlike previous discriminative frameworks for ASR, such as maximum mutual information and minimum classification error, our framework leads to a convex optimization, without any spurious local minima. The objective function for large margin training of CD-HMMs is defined over a parameter space of positive semidefinite matrices. Its optimization can be performed efficiently with simple gradient-based methods that scale well to large problems. We obtain competitive results for phonetic recognition on the TIMIT speech corpus.

6 0.42359161 39 nips-2006-Balanced Graph Matching

7 0.38716605 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

8 0.35370228 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

9 0.32977477 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

10 0.32066217 91 nips-2006-Hierarchical Dirichlet Processes with Random Effects

11 0.31415865 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

12 0.31104425 66 nips-2006-Detecting Humans via Their Pose

13 0.30368644 156 nips-2006-Ordinal Regression by Extended Binary Classification

14 0.28189754 186 nips-2006-Support Vector Machines on a Budget

15 0.27798957 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

16 0.27097642 50 nips-2006-Chained Boosting

17 0.27056044 176 nips-2006-Single Channel Speech Separation Using Factorial Dynamics

18 0.26567549 52 nips-2006-Clustering appearance and shape by learning jigsaws

19 0.26114333 114 nips-2006-Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models

20 0.25901023 58 nips-2006-Context Effects in Category Learning: An Investigation of Four Probabilistic Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.098), (3, 0.028), (7, 0.083), (9, 0.046), (12, 0.016), (20, 0.016), (22, 0.035), (44, 0.063), (57, 0.101), (64, 0.024), (65, 0.062), (69, 0.077), (70, 0.221), (71, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82789195 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

Author: Graham Mcneill, Sethu Vijayakumar

Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding ‘perceptually valid’ part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

2 0.70417243 65 nips-2006-Denoising and Dimension Reduction in Feature Space

Author: Mikio L. Braun, Klaus-Robert Müller, Joachim M. Buhmann

Abstract: We show that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data to perform classification. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to help in model selection, and to (3) de-noise in feature space in order to yield better classification results. 1

3 0.66056007 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel. 1

4 0.65519398 167 nips-2006-Recursive ICA

Author: Honghao Shan, Lingyun Zhang, Garrison W. Cottrell

Abstract: Independent Component Analysis (ICA) is a popular method for extracting independent features from visual data. However, as a fundamentally linear technique, there is always nonlinear residual redundancy that is not captured by ICA. Hence there have been many attempts to try to create a hierarchical version of ICA, but so far none of the approaches have a natural way to apply them more than once. Here we show that there is a relatively simple technique that transforms the absolute values of the outputs of a previous application of ICA into a normal distribution, to which ICA maybe applied again. This results in a recursive ICA algorithm that may be applied any number of times in order to extract higher order structure from previous layers. 1

5 0.64858955 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

6 0.64729273 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

7 0.6447044 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

8 0.64453894 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

9 0.64388233 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

10 0.643686 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

11 0.64280176 158 nips-2006-PG-means: learning the number of clusters in data

12 0.64231467 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

13 0.63953143 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

14 0.63767231 80 nips-2006-Fundamental Limitations of Spectral Clustering

15 0.63762784 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

16 0.63759238 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

17 0.63582832 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

18 0.63460433 42 nips-2006-Bayesian Image Super-resolution, Continued

19 0.63444304 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization

20 0.63237107 117 nips-2006-Learning on Graph with Laplacian Regularization