nips nips2006 nips2006-110 knowledge-graph by maker-knowledge-mining

110 nips-2006-Learning Dense 3D Correspondence


Source: pdf

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. [sent-5, score-0.963]

2 By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. [sent-7, score-0.472]

3 1 Introduction Establishing 3D correspondence between surfaces such as human faces is a crucial element of classspecific representations of objects in computer vision and graphics. [sent-8, score-0.761]

4 Dense correspondence is a mapping or “warp” from all points of a surface onto another surface (in some cases, including the present work, extending from the surface to the embedding space). [sent-10, score-1.877]

5 The practical relevance of surface correspondence has been increasing over the last years. [sent-13, score-0.866]

6 In computer vision, an increasing number of algorithms for face and object recognition based on 2D images or 3D scans, as well as shape retrieval in databases and 3D surface reconstruction from images, rely on shape representations that are built upon dense surface correspondence. [sent-15, score-1.266]

7 Unlike existing algorithms that define some ad-hoc criteria for identifying corresponding points on two objects, we treat correspondence as a machine learning problem and propose a data-driven approach that learns the relevant criteria from a dataset of given object correspondences. [sent-16, score-0.75]

8 In stereo vision and optical flow [2, 3], a correspondence is correct if and only if it maps a point in one scene to a point in another scene which stems from the same physical point. [sent-17, score-0.538]

9 In contrast, correspondence between different objects is not a well-defined problem. [sent-18, score-0.497]

10 On a more fundamental level, however, even the problem of matching the eyes is difficult to cast in a formal way, and in fact this matching involves many of the basic problems of computer vision and feature detection. [sent-20, score-0.355]

11 In a given application, the desired correspondence can be dependent on anatomical facts, measures of shape similarity, or the overall layout of features on the surface. [sent-21, score-0.524]

12 However, it may also depend on the properties of human perception, on functional or semantic issues, on the context within a given object class or even on social convention. [sent-22, score-0.244]

13 Given two objects $O_1$ and $O_2$, we are seeking a correspondence mapping $\tau$ such that certain properties of x (relative to $O_1$) are preserved in $\tau(x)$ (relative to $O_2$) — they are invariant. [sent-25, score-0.608]

14 We shall do this by providing a dictionary of potential properties (such as geometric features, or texture properties) and approximating a “true” property characterizing correspondence as an expansion in that dictionary. [sent-28, score-0.7]

15 The remainder of the paper is structured as follows: in Section 2 we review some related work, whereas in Section 3 we set up our general framework for computing correspondence fields. [sent-30, score-0.408]

16 Following this, we explain in Section 4 how to learn the characteristic properties for correspondence, and then introduce two new feature functions in Section 5. [sent-31, score-0.749]

17 2 Related Work The problem of establishing dense correspondence has been addressed in the domain of 2D images, on surfaces embedded in 3D space, and on volumetric data. [sent-33, score-0.598]

18 In the image domain, correspondence from optical flow [2, 3] has been used to describe the transformations of faces with pose changes and facial expressions [4], and to describe the differences in the shapes of individual faces [5]. [sent-34, score-0.717]

19 An algorithm for computing correspondence on parameterized 3D surfaces has been introduced for creating a class-specific representation of human faces [1] and bodies [6]. [sent-35, score-0.644]

20 See the review [9] for an overview of a wide range of additional correspondence algorithms. [sent-39, score-0.408]

21 Algorithms that are applied to 3D faces typically rely on surface parameterizations, such as cylindrical coordinates, and then compute optical flow on the texture map as well as the depth image [1]. [sent-40, score-0.707]

22 One such algorithm is presented in [10]: here, the surfaces are embedded into the surrounding space and a 3D volume deformation is computed. [sent-44, score-0.288]

23 The use of the signed distance function as a guiding feature ensures correct surface-to-surface mappings. [sent-45, score-1.584]

24 Though implicit surface representations allow the extraction of such features [11], these differential geometric properties are inherently unstable with respect to noise. [sent-48, score-0.612]

25 [12] propose a related 3D geometric feature that is based on integrals and thus more stable to compute. [sent-49, score-0.261]

26 We present a slightly modified version thereof, which allows for a much easier computation of this feature from a signed distance function represented as a kernel expansion, compared to the complete space voxelisation step required in [12]. [sent-50, score-0.76]

27 Given a reference object $O_r$ and a target $O_t$, the goal of computing a correspondence can then be expressed as determining the deformation function $\tau : X \to X$ which maps each point $x \in X$ on $O_r$ to its corresponding point $\tau(x)$ on $O_t$. [sent-52, score-0.892]

28 We further assume that we can construct a dictionary of so-called feature functions $f_i : X \to \mathbb{R}$, $i = 1, \ldots$ [sent-53, score-0.304]

29 [10] propose to use the signed distance function, which maps each point $x \in X$ to the distance to the object's surface — with positive sign outside the shape and negative sign inside. [sent-56, score-1.141]

30 They also use the first derivative of the signed distance function, which can be interpreted as the surface normal. [sent-57, score-0.881]

31 In Section 5 we will propose two additional features which are characteristic for 3D shapes, namely a curvature-related feature and surface texture. [sent-58, score-0.861]

32 We assume that the warp-invariant feature can be represented or at least approximated by an expansion in this dictionary. [sent-59, score-0.252]

33 The second term measures the local similarity of the warp-invariant feature function extracted on the reference object, $f^r$, and on the target object, $f^t$, and integrates it over the volume of interest. [sent-66, score-0.729]
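
Equation (1) itself does not survive this extract, but the similarity term described here is straightforward to discretize. A minimal sketch, assuming Monte Carlo samples of the volume of interest and scalar feature functions (all names are hypothetical, not from the paper):

```python
import numpy as np

def data_term(tau, f_ref, f_tgt, volume_samples):
    """Monte Carlo estimate of the feature-mismatch term: the squared
    difference between the warp-invariant feature on the reference at x
    and on the target at tau(x), averaged over the volume of interest."""
    x = volume_samples                      # (m, 3) sampled locations
    return np.mean((f_ref(x) - f_tgt(tau(x))) ** 2)
```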

34 This formulation is a modification of [10], where two feature functions were chosen a priori (the signed distance and its derivative) and used instead of $f_\gamma$. [sent-67, score-0.68]

35 In contrast, the present approach starts from the notion of invariance and estimates a location-dependent linear combination of feature functions with a maximal degree of invariance for correct correspondences (cf. [sent-69, score-0.453]

36 We consider location-dependent linear combinations since one cannot expect that all the feature functions that define correspondence are equally important for all points of an object. [sent-71, score-0.699]

37 As discussed above, it is unclear how to characterize and evaluate correspondence in a principled way. [sent-74, score-0.439]

38 The authors of [10] propose a strategy based on a two-way morph: they first compute a deformation from the reference object to the target, and afterwards vice versa. [sent-75, score-0.465]
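
This two-way consistency check can be phrased as a round-trip error; a minimal sketch, assuming the forward and backward deformations are available as callables (hypothetical names):

```python
import numpy as np

def round_trip_error(points, tau_fwd, tau_bwd):
    """Map reference points to the target and back again; a consistent
    pair of morphs returns every point close to where it started."""
    return np.linalg.norm(tau_bwd(tau_fwd(points)) - points, axis=1).mean()
```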

39 4 Learning the optimal feature function We assume that a database of D objects that are already in correspondence is available. [sent-81, score-0.758]

40 This could for example be achieved by manually picking many corresponding point pairs and training a regression to map all the points onto each other, or by (semi-)automatic methods optimized for the given object class (e.g. [sent-82, score-0.26]

41 We then learn the warp-invariant feature function (as defined in the introduction) that characterizes correspondence using the basic features in our dictionary. [sent-86, score-0.684]

42 Thus for each point $x \in X$, we maximize
$$\frac{\mathbb{E}_{d,z_d}\left[\left(f_\gamma^r(x) - f_\gamma^d(z_d)\right)^2\right]}{\mathbb{E}_d\left[\left(f_\gamma^r(x) - f_\gamma^d(\tau_d(x))\right)^2\right]} \qquad (2)$$
Here, $f_\gamma^r$ and $f_\gamma^d$ are the warp-invariant feature functions evaluated on the reference object and the d-th database object, respectively. [sent-89, score-0.754]

43 $\tau_d(x)$ is the point matching x on the d-th database object, and $z_d$ is a random point sampled from it. [sent-90, score-0.297]
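
The ratio in (2) is a Rayleigh quotient in the weight vector $\gamma$, so the optimal location-dependent weights can be read off a generalized eigenvalue problem. A minimal sketch under that standard reduction, with hypothetical array names (the paper's exact estimator is not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

def optimal_feature_weights(F_ref, F_rand, F_corr, reg=1e-8):
    """Maximize criterion (2) at one location x.

    F_ref  : (n,)   dictionary features f_i(x) on the reference object
    F_rand : (D, n) features at random points z_d on the database objects
    F_corr : (D, n) features at the corresponding points tau_d(x)
    """
    d_rand = F_ref[None, :] - F_rand    # differences to random points
    d_corr = F_ref[None, :] - F_corr    # differences to corresponding points
    A = d_rand.T @ d_rand / len(d_rand)                 # numerator moments
    B = d_corr.T @ d_corr / len(d_corr) + reg * np.eye(F_ref.size)
    eigvals, eigvecs = eigh(A, B)       # generalized symmetric eigenproblem
    return eigvecs[:, -1]               # gamma: leading eigenvector
```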

44 If not, the smoothness term $\|\tau\|_{\mathcal{H}}$ gets relatively more weight, implying that the local correspondence is mostly determined through more global contributions. [sent-97, score-0.437]

45 Note, moreover, that while we have described the above for the leading eigenvector only, nothing prevents us from computing several eigenvectors and stacking up the resulting warp-invariant feature functions $f_\gamma^1, f_\gamma^2, \ldots, f_\gamma^m$. [sent-98, score-0.314]

46 5 Basic Feature Functions In our dictionary of basic feature functions we included the signed distance function and its derivative. [sent-102, score-0.766]

47 We added a curvature-related feature, the “signed balls”, and the surface texture intensity. [sent-103, score-0.6]

48 Take a ball $B_R(x)$ with radius R centered at that point and compute the average of the signed distance function $s : X \to \mathbb{R}$ over the ball's volume: $I_s(x) = \frac{1}{V_{B_R(x)}} \int_{B_R(x)} s(x')\,dx' - s(x)$ (6). If the surface around x is flat on the scale of the ball, we obtain zero. [sent-106, score-0.968]

49 At points where the surface is bent outwards this value is positive, at concave points it is negative. [sent-107, score-0.58]
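
As an illustration, the ball average in (6) can be approximated by Monte Carlo sampling. The sketch below uses the unit sphere as a toy implicit surface, where the exact signed distance is available; this setup is our assumption, not the paper's implementation:

```python
import numpy as np

def signed_balls(x, sdf, R, n_samples=20000, rng=None):
    """Monte Carlo estimate of I_s(x): the mean signed distance over the
    ball B_R(x), minus the signed distance at its center (equation (6))."""
    rng = rng or np.random.default_rng(0)
    u = rng.normal(size=(n_samples, 3))             # random directions ...
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = R * rng.uniform(size=(n_samples, 1)) ** (1 / 3)  # ... uniform in ball
    return sdf(x + r * u).mean() - sdf(x[None, :]).item()

sphere_sdf = lambda p: np.linalg.norm(np.atleast_2d(p), axis=1) - 1.0
x = np.array([1.0, 0.0, 0.0])                       # point on the unit sphere
print(signed_balls(x, sphere_sdf, R=0.2))           # > 0: bent outwards (convex)
```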

50 Figure 1: The two figures on the left show the color-coded values of the “signed balls” feature at different radii R (B 4mm, B 28mm). [sent-108, score-0.263]

51 Convex parts of the surface are assigned positive values (blue), concave parts negative (red). [sent-110, score-0.458]

52 The three figures on the right (C 0mm, C 5mm, C 15mm) show how the surface feature function that was trained with texture intensity extends off the surface (for clarity visualized in false colors) and becomes smoother. [sent-111, score-1.279]

53 The normalization to the value of the signed distance function at the center of the ball allows us to compute this feature function also for off-surface points, where the interpretation with respect to the other iso-surfaces does not change. [sent-113, score-0.662]

54 Due to the integration, this feature is stable with respect to surface noise, while mean curvature in differential geometry may be affected significantly. [sent-114, score-0.742]

55 We propose to represent the implicit surface function as in [10], where a compactly supported kernel expansion is trained to approximate the signed distance. [sent-116, score-1.071]

56 In this case the integral and the kernel summation can be interchanged, so we only need to evaluate terms of the form $\int_{B_R(x)} k(x_i, x')\,dx'$ and then add them in the same way as the signed distance function is computed. [sent-117, score-0.535]
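
To make the interchange concrete: if the signed distance is a kernel expansion $s(x) = \sum_i \alpha_i k(x_i, x)$, then averaging s over the ball equals the same weighted sum of per-center ball averages of k. The sketch below verifies this with a Gaussian kernel and shared Monte Carlo samples; the closed-form ball integral of the paper's compactly supported kernel is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
k = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * 0.3 ** 2))
centers = rng.normal(size=(50, 3))    # kernel centers x_i
alpha = rng.normal(size=50)           # expansion coefficients

def ball_samples(x, R, n=5000):
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return x + R * rng.uniform(size=(n, 1)) ** (1 / 3) * u

pts = ball_samples(np.zeros(3), R=0.25)
K = k(centers[:, None, :], pts[None, :, :])   # (50, n) kernel evaluations
avg_of_sum = (alpha @ K).mean()               # average the full expansion
sum_of_avg = alpha @ K.mean(axis=1)           # interchange sum and integral
assert np.allclose(avg_of_sum, sum_of_avg)    # identical by linearity
```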

57 For the case where the surface looks locally like a sphere, it is easy to show that in the limit of small balls the value of the “signed balls” feature function is related to the differential geometric mean curvature H by $I_s(x) = \frac{3\pi H}{2} R^2 + O(R^3)$. [sent-123, score-0.965]

58 5.2 Surface properties — Texture: The volume deformation approach presented in Section 3 requires the use of feature functions defined on the whole domain X. [sent-125, score-0.438]

59 In order to include information $f|_{\partial\Omega}$ which is given only on a surface $\partial\Omega$ of the object whose interior volume is $\Omega$, e.g. [sent-126, score-0.658]

60 the texture intensity, we propose to extend the surface feature $f|_{\partial\Omega}$ into a differentiable feature function $f : X \to \mathbb{R}$ such that $f \to f|_{\partial\Omega}$ as we get closer to the surface. [sent-128, score-0.964]

61 We propose to use a multi-scale compactly supported kernel regression to determine f : at each scale, from coarse to fine, we select approximately equally spaced points on the surface at a distance related to the kernel width of that scale. [sent-132, score-0.867]

62 Then we compute the feature value at these points, averaged over a sphere with radius equal to the corresponding kernel support. [sent-133, score-0.337]

63 Due to the sub-sampling, the kernel regressions do not contain too many kernel centers, and the compact support of the kernel ensures sparse kernel matrices. [sent-135, score-0.401]
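
A minimal sketch of this coarse-to-fine residual fitting, in 1D for clarity, with a Wendland-type compactly supported kernel; the specific kernel, regularization, and scale schedule are our assumptions:

```python
import numpy as np

def wendland(r):
    """Compactly supported, smooth kernel: nonzero only for r < 1."""
    r = np.clip(r, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def multiscale_fit(x, y, widths, reg=1e-6):
    """Fit y(x) from coarse to fine; each scale fits the previous residual
    with centers sub-sampled at a spacing tied to the kernel width."""
    models, residual = [], y.astype(float).copy()
    for h in widths:
        step = max(1, int(h / (x[1] - x[0])))   # spacing ~ kernel width
        c = x[::step]                           # sub-sampled centers
        K = wendland(np.abs(x[:, None] - c[None, :]) / h)   # sparse in effect
        coef = np.linalg.solve(K.T @ K + reg * np.eye(len(c)), K.T @ residual)
        models.append((c, h, coef))
        residual = residual - K @ coef
    def f(q):                                   # evaluate the summed scales
        return sum(wendland(np.abs(q[:, None] - c[None, :]) / h) @ coef
                   for c, h, coef in models)
    return f

x = np.linspace(0, 10, 200)
f = multiscale_fit(x, np.sin(x) + 0.1 * np.sin(8 * x), widths=[4.0, 1.0, 0.25])
```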

64 In order to optimize (1) we followed the approach of [10]: we represent the deformation $\tau$ as a multi-scale compactly supported kernel expansion, i.e. [sent-139, score-0.308]

65 Figure 2: Locations that are marked yellow show an above-threshold relative contribution (see text) of a given feature in the warp-invariant feature function. [sent-144, score-0.465]

66 C is the surface intensity feature, B the signed balls feature (R = 6mm), N the surface normals in different directions. [sent-145, score-1.687]

67 Note that points where color has a large contribution (yellow points in C) are clustered around regions with characteristic color information, such as the eyes or the mouth. [sent-146, score-0.412]

68 As a test object class we used 3D heads with known correspondence [1]. [sent-153, score-0.705]

69 100 heads were used for the training object database and 10 to test our correspondence algorithm. [sent-154, score-0.775]

70 As a reference head we used the mean head of the database. [sent-155, score-0.334]

71 However, the correspondence of the objects in the database is only defined on the surface. [sent-157, score-0.567]

72 In order to extend it to the off-surface points $x_{i,s}$, we generated these locations by first sampling points from the surface and then displacing them along their surface normals. [sent-158, score-1.084]
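
This displacement construction is simple to write down; a sketch assuming per-point unit normals are available (hypothetical names, offsets chosen arbitrarily):

```python
import numpy as np

def offsurface_samples(points, normals, offsets=(0.005, 0.02, 0.05)):
    """Displace surface points along their unit normals, outwards (+) and
    inwards (-), yielding off-surface locations with known signed distances."""
    locs, sdist = [points], [np.zeros(len(points))]
    for d in offsets:
        for sign in (1.0, -1.0):
            locs.append(points + sign * d * normals)
            sdist.append(np.full(len(points), sign * d))
    return np.concatenate(locs), np.concatenate(sdist)
```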

73 In one run through the database we computed for each head the values of all proposed basic feature functions for all locations, corresponding to kernel centers on the reference head, as well as for 100 randomly sampled points z. [sent-161, score-0.771]

74 Thus, we sampled points up to distances to the surface proportional to the kernel widths used for the deformation τ . [sent-163, score-0.75]

75 The parameters $C_{\mathrm{reg}}$ — one for each scale — were determined by optimizing computed deformation fields from the reference head to some of the training database heads. [sent-165, score-0.48]

76 We minimized the mismatch to the correspondence given in the database. [sent-166, score-0.408]

77 In Figure 1, our new feature functions are visualized on an example head. [sent-168, score-0.273]

78 Each feature extracts specific plausible information, and the surface color can be extended off the surface. [sent-169, score-0.735]

79 In Figure 2, we have marked those points on the surface where a given feature has a high relative contribution in the warp-invariant feature function. [sent-171, score-0.95]

80 As a measure of contribution we took the component of the weight vector $\gamma(x_{i,s})$ that corresponds to the feature of interest and multiplied it by the standard deviation of this feature over all heads and all positions. [sent-172, score-0.559]
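
In code, this rescaling-invariant contribution measure is a one-liner; a sketch assuming the learned weights and sampled basic-feature values are stored as arrays (hypothetical names):

```python
import numpy as np

def feature_contribution(gamma, feature_samples):
    """gamma: (n_features,) learned weights at one location x_{i,s}.
    feature_samples: (n_heads * n_positions, n_features) basic-feature values.
    |gamma_i| * std(f_i) is unchanged if a basic feature is rescaled."""
    return np.abs(gamma) * feature_samples.std(axis=0)
```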

81 Note that the weight vector is not invariant to rescaling the basic feature functions, unlike the proposed measure. [sent-173, score-0.259]

82 Here and below, S is the signed distance, N the surface normals, C the proposed surface feature function trained with the intensity values on the faces, and B the “signed balls” feature with radii given by the percentage numbers scaled to the diameter of the head. [sent-195, score-1.765]

83 The signed distance function is the best preserved feature (e.g. [sent-196, score-0.66]

84 all surface points take the value zero up to small approximation errors). [sent-198, score-0.519]

85 The resulting large weight of this feature is plausible as a surface-to-surface mapping is a necessary condition for a morph. [sent-199, score-0.265]

86 However, combined with Figure 2, ... Figure 3: The average head of the database — the reference — is deformed to match four of the target heads of the test set (panels, left to right: Reference, Deformed, Target, Deformed, Target). [sent-200, score-0.534]

87 Correct correspondence deforms the shape of the reference head to the target face, with the texture of the mean face well aligned to the shape details. [sent-201, score-0.916]

88 We applied our algorithm to compute the correspondence to the test set of 10 heads. [sent-204, score-0.816]

89 We compare our method with the results of the correspondence algorithm of [1] on points that are uniformly drawn from the surface (first column) and for 24 selected marker points (second column). [sent-207, score-0.988]

90 These markers were placed at locations around the eyes or the mouth, where correspondence can be assumed to be better defined than, for example, on the forehead. [sent-208, score-0.551]

91 A careful weighting of each feature separately, but independently of location (b) — as could potentially be achieved by [10] — improves the quality of the correspondence. [sent-233, score-0.251]

92 Experiment (g), which is identical to (c) but with the color and signed balls features omitted, demonstrates the usefulness of these additional basic feature functions. [sent-238, score-0.969]

93 For large radii R the signed balls feature becomes quite expensive to compute, since many summands of the signed distance function expansion have to be accumulated. [sent-240, score-1.249]

94 before the optimization is ... Figure 4: A morph between a human head and the head of the character Gollum (available from www. ...). Panels, left to right: Reference, 25%, 50%, 75%, Target. [sent-243, score-0.347]

95 As Gollum's head falls outside our object class (human heads), we assisted the training procedure with 28 manually placed markers. [sent-246, score-0.278]

96 7 Conclusion We have proposed a new approach to the challenging problem of defining criteria that characterize a valid correspondence between 3D objects of a given class. [sent-248, score-0.567]

97 The learning technique has been implemented efficiently in a correspondence algorithm for textured surfaces. [sent-251, score-0.408]

98 Even though our experiments have concentrated on 3D surface data, the method may also be applicable in other fields, such as aligning CT or MR scans in medical imaging. [sent-253, score-0.567]

99 It would also be intriguing to explore whether our paradigm of learning the features characterizing correspondences might reflect some of the cognitive processes involved when humans learn about similarities within object classes. [sent-254, score-0.312]

100 The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. [sent-310, score-0.476]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('surface', 0.458), ('correspondence', 0.408), ('signed', 0.356), ('feature', 0.191), ('object', 0.169), ('deformation', 0.146), ('balls', 0.146), ('heads', 0.128), ('reference', 0.116), ('faces', 0.111), ('head', 0.109), ('correspondences', 0.097), ('texture', 0.09), ('objects', 0.089), ('morph', 0.085), ('kernel', 0.085), ('surfaces', 0.081), ('characteristic', 0.08), ('cz', 0.078), ('compactly', 0.077), ('dictionary', 0.074), ('steinke', 0.073), ('radii', 0.072), ('database', 0.07), ('shape', 0.07), ('eyes', 0.069), ('distance', 0.067), ('morphs', 0.064), ('br', 0.064), ('centers', 0.061), ('points', 0.061), ('expansion', 0.061), ('weighting', 0.06), ('deformed', 0.058), ('blanz', 0.058), ('zd', 0.058), ('correct', 0.054), ('target', 0.053), ('curvature', 0.052), ('fj', 0.049), ('contribution', 0.049), ('creg', 0.049), ('fid', 0.049), ('gollum', 0.049), ('siegen', 0.049), ('eigenvector', 0.048), ('ball', 0.048), ('optical', 0.048), ('preserved', 0.046), ('color', 0.046), ('locations', 0.046), ('features', 0.046), ('human', 0.044), ('graphics', 0.043), ('fir', 0.043), ('visualized', 0.043), ('criterion', 0.042), ('locally', 0.041), ('differential', 0.041), ('dense', 0.041), ('scans', 0.04), ('medical', 0.04), ('plausible', 0.04), ('criteria', 0.039), ('intensity', 0.039), ('scale', 0.039), ('shapes', 0.039), ('basic', 0.039), ('mesh', 0.039), ('normals', 0.039), ('functions', 0.039), ('establishing', 0.038), ('eigenvectors', 0.036), ('nonrigid', 0.036), ('invariance', 0.036), ('geometric', 0.036), ('propose', 0.034), ('mapping', 0.034), ('ow', 0.034), ('ot', 0.034), ('yellow', 0.034), ('deformations', 0.032), ('registration', 0.032), ('concatenation', 0.032), ('characterize', 0.031), ('properties', 0.031), ('synthesis', 0.031), ('volume', 0.031), ('embedded', 0.03), ('picking', 0.03), ('invariant', 0.029), ('smoothness', 0.029), ('ns', 0.029), ('align', 0.029), ('siggraph', 0.029), ('markers', 0.028), ('cast', 0.028), ('vision', 0.028), ('priori', 0.027), ('integral', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

2 0.16516204 95 nips-2006-Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

Author: Christian Walder, Olivier Chapelle, Bernhard Schölkopf

Abstract: We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously only associated with fully supported bases, and show equivalence to a Gaussian process with modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data. 1

3 0.10498143 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

Author: Andrea Frome, Yoram Singer, Jitendra Malik

Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al. 1

4 0.09931957 185 nips-2006-Subordinate class recognition using relational object models

Author: Aharon B. Hillel, Daphna Weinshall

Abstract: We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like ’jagged wheels’). Accordingly, we suggest a two-stage algorithm: First, a relational part based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model’s part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two stage algorithm where classification is based on a model of the sub-ordinate class. 1

5 0.098940931 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel. 1

6 0.097109228 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

7 0.094266444 33 nips-2006-Analysis of Representations for Domain Adaptation

8 0.094029598 65 nips-2006-Denoising and Dimension Reduction in Feature Space

9 0.087950848 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

10 0.086171679 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization

11 0.083440632 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

12 0.076899931 39 nips-2006-Balanced Graph Matching

13 0.0750321 97 nips-2006-Inducing Metric Violations in Human Similarity Judgements

14 0.070347168 66 nips-2006-Detecting Humans via Their Pose

15 0.069904864 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

16 0.069664612 130 nips-2006-Max-margin classification of incomplete data

17 0.06648694 103 nips-2006-Kernels on Structured Objects Through Nested Histograms

18 0.065548375 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

19 0.06517294 170 nips-2006-Robotic Grasping of Novel Objects

20 0.06398309 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.221), (1, 0.046), (2, 0.117), (3, 0.001), (4, 0.039), (5, -0.046), (6, -0.111), (7, -0.066), (8, 0.033), (9, -0.056), (10, 0.028), (11, 0.017), (12, -0.003), (13, -0.082), (14, 0.133), (15, -0.057), (16, -0.056), (17, -0.102), (18, -0.099), (19, 0.034), (20, 0.026), (21, 0.162), (22, 0.038), (23, -0.033), (24, -0.065), (25, -0.038), (26, 0.18), (27, -0.008), (28, 0.059), (29, -0.115), (30, 0.093), (31, -0.11), (32, -0.085), (33, 0.041), (34, -0.063), (35, -0.032), (36, -0.025), (37, -0.044), (38, 0.005), (39, 0.166), (40, 0.086), (41, 0.011), (42, -0.013), (43, -0.048), (44, -0.118), (45, -0.046), (46, -0.045), (47, -0.195), (48, 0.015), (49, -0.146)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95637417 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

2 0.71045989 95 nips-2006-Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

Author: Christian Walder, Olivier Chapelle, Bernhard Schölkopf

Abstract: We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously only associated with fully supported bases, and show equivalence to a Gaussian process with modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data. 1

3 0.60697019 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

Author: Graham Mcneill, Sethu Vijayakumar

Abstract: Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding ‘perceptually valid’ part structures. Shapes are represented by unlabeled point sets of arbitrary size and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

4 0.49419954 185 nips-2006-Subordinate class recognition using relational object models

Author: Aharon B. Hillel, Daphna Weinshall

Abstract: We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like ’jagged wheels’). Accordingly, we suggest a two-stage algorithm: First, a relational part based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model’s part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two stage algorithm where classification is based on a model of the sub-ordinate class. 1

5 0.47269553 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

Author: Andrea Frome, Yoram Singer, Jitendra Malik

Abstract: In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang, et al. 1

6 0.47110143 34 nips-2006-Approximate Correspondences in High Dimensions

7 0.46653596 170 nips-2006-Robotic Grasping of Novel Objects

8 0.45438588 97 nips-2006-Inducing Metric Violations in Human Similarity Judgements

9 0.44911161 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

10 0.44018412 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

11 0.42991775 4 nips-2006-A Humanlike Predictor of Facial Attractiveness

12 0.42656803 52 nips-2006-Clustering appearance and shape by learning jigsaws

13 0.41105032 33 nips-2006-Analysis of Representations for Domain Adaptation

14 0.39551368 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization

15 0.38801813 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing

16 0.3755407 180 nips-2006-Speakers optimize information density through syntactic reduction

17 0.36579853 122 nips-2006-Learning to parse images of articulated bodies

18 0.36482561 174 nips-2006-Similarity by Composition

19 0.36204848 6 nips-2006-A Kernel Subspace Method by Stochastic Realization for Learning Nonlinear Dynamical Systems

20 0.35640121 73 nips-2006-Efficient Methods for Privacy Preserving Face Detection


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.083), (3, 0.038), (5, 0.21), (7, 0.104), (8, 0.012), (9, 0.057), (12, 0.038), (22, 0.059), (44, 0.04), (57, 0.131), (65, 0.066), (69, 0.05), (90, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84447312 110 nips-2006-Learning Dense 3D Correspondence

Author: Florian Steinke, Volker Blanz, Bernhard Schölkopf

Abstract: Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads. 1

2 0.69442976 34 nips-2006-Approximate Correspondences in High Dimensions

Author: Kristen Grauman, Trevor Darrell

Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel. 1

3 0.68673825 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system. 1

4 0.68567669 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

Author: Christopher J. Burges, Robert Ragno, Quoc V. Le

Abstract: The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of simple, flexible algorithms, called LambdaRank, which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to any differentiable function class. We give necessary and sufficient conditions for the resulting implicit cost function to be convex, and we show that the general method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We also show that LambdaRank provides a method for significantly speeding up the training phase of that ranking algorithm. Although this paper is directed towards ranking, the proposed method can be extended to any non-smooth and multivariate cost functions. 1

5 0.68408805 80 nips-2006-Fundamental Limitations of Spectral Clustering

Author: Boaz Nadler, Meirav Galun

Abstract: Spectral clustering methods are common graph-based approaches to clustering of data. Spectral clustering algorithms typically start from local information encoded in a weighted graph on the data and cluster according to the global eigenvectors of the corresponding (normalized) similarity matrix. One contribution of this paper is to present fundamental limitations of this general local to global approach. We show that based only on local information, the normalized cut functional is not a suitable measure for the quality of clustering. Further, even with a suitable similarity measure, we show that the first few eigenvectors of such adjacency matrices cannot successfully cluster datasets that contain structures at different scales of size and density. Based on these findings, a second contribution of this paper is a novel diffusion based measure to evaluate the coherence of individual clusters. Our measure can be used in conjunction with any bottom-up graph-based clustering method, it is scale-free and can determine coherent clusters at all scales. We present both synthetic examples and real image segmentation problems where various spectral clustering algorithms fail. In contrast, using this coherence measure finds the expected clusters at all scales. Keywords: Clustering, kernels, learning theory. 1

6 0.67688298 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

7 0.67269546 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

8 0.67268747 122 nips-2006-Learning to parse images of articulated bodies

9 0.67226547 185 nips-2006-Subordinate class recognition using relational object models

10 0.67011577 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

11 0.66944277 158 nips-2006-PG-means: learning the number of clusters in data

12 0.66928852 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

13 0.66880065 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

14 0.66847992 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis

15 0.66787636 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

16 0.66686219 195 nips-2006-Training Conditional Random Fields for Maximum Labelwise Accuracy

17 0.6666379 42 nips-2006-Bayesian Image Super-resolution, Continued

18 0.66647166 47 nips-2006-Boosting Structured Prediction for Imitation Learning

19 0.66184223 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

20 0.66182071 39 nips-2006-Balanced Graph Matching