nips nips2003 nips2003-53 knowledge-graph by maker-knowledge-mining

53 nips-2003-Discriminating Deformable Shape Classes


Source: pdf

Author: Salvador Ruiz-correa, Linda G. Shapiro, Marina Meila, Gabriel Berson

Abstract: We present and empirically test a novel approach for categorizing 3-D free-form object shapes represented by range data. In contrast to traditional surface-signature-based systems that use alignment to match specific objects, we adapted the newly introduced symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. We used our classification engine in a series of large-scale discrimination experiments on two well-defined classes that share many common distinctive features. The experimental results suggest that our method outperforms traditional numeric signature-based methodologies.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. [sent-8, score-0.898]

2 1 Introduction: Categorizing objects from their shape is an unsolved problem in computer vision that entails the ability of a computer system to represent and generalize shape information on the basis of a finite amount of prior data. [sent-11, score-0.587]

3 As pointed out in [10], how to construct a quantitative description of shape that accounts for the complexities in the categorization process is currently unknown. [sent-13, score-0.296]

4 However, only a handful of studies have addressed the problem of categorizing shape classes containing a significant amount of shape variation and missing information frequently found in real range scenes. [sent-19, score-0.481]

5 [9] developed a shape representation to match similar objects. [sent-21, score-0.263]

6 The so-called shape distribution encodes the shape information of a complete 3-D object as a probability distribution sampled from a shape function. [sent-22, score-0.864]

7 [1] extended the work on shape distribution by developing a representation of shape for object retrieval. [sent-25, score-0.601]

8 The representation is based on a spherical harmonics expansion of the points of a polygonal surface mesh rasterized into a voxel grid. [sent-30, score-0.755]

9 developed a physical model for studying neuropathological shape deformations using Principal Component Analysis and a Gaussian quadratic classifier. [sent-33, score-0.263]

10 In [10], we developed a shape novelty detector for recognizing classes of 3-D object shapes in cluttered scenes. [sent-36, score-0.71]

11 The detector learns the components of a shape class and their corresponding geometric configuration from a set of surface signatures embedded in a Hilbert space. [sent-37, score-1.108]

12 The numeric signatures encode characteristic surface features of the components, while the symbolic signatures describe their corresponding spatial arrangement. [sent-38, score-1.7]

13 The encouraging results obtained with our novelty detector motivated us to take a step further and extend our algorithm to accommodate classification by developing a 3-D shape classifier to be described in the next section. [sent-39, score-0.458]

14 We were also motivated by applications in medical diagnosis and human interface design where 3-D shape information plays a significant role. [sent-41, score-0.364]

15 2 Our Approach: We develop our shape classifier in this section. [sent-47, score-0.263]

16 The first module processes numeric surface signatures and the second, symbolic ones. [sent-52, score-1.34]

17 These shape descriptors characterize our classes at two different levels of abstraction. [sent-53, score-0.414]

18 Surface signatures: The surface signatures developed by Johnson and Hebert [5] are used to encode the surface shape of free-form objects. [sent-55, score-1.872]

19 In contrast to the shape distributions and harmonic descriptors, their spatial scale can be enlarged to take into account local and non-local effects, which makes them robust against the clutter and occlusion generally present in range data. [sent-56, score-0.375]

20 Experimental evidence has shown that the spin image and some of its variants are the preferred choice for encoding surface shape whenever the normal vectors of the surfaces of the objects can be accurately estimated [11]. [sent-57, score-0.825]

21 The symbolic signatures developed in [10] are used at the next level to describe the spatial configuration of labeled surface regions. [sent-58, score-1.067]

22 A spin-image [5] is a two-dimensional histogram computed at an oriented point P of the surface mesh of an object (see Figure 1). [sent-60, score-0.82]

23 Contributing points are those that are within a specified distance of P and for which the surface normal forms an angle smaller than a specified threshold with the surface normal N of P. [sent-62, score-0.904]

24 We use spin images as the numeric signatures in this work. [sent-66, score-0.768]
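
To make the spin-image construction above concrete, here is a minimal Python sketch (not the authors' implementation): the bin size and support angle are assumed values, the `vertices`/`normals` arrays are assumed inputs, and only the 70-pixel image width is taken from sentence 41 further down.

```python
import numpy as np

def spin_image(p, n, vertices, normals, bin_size=2.0, image_width=70,
               max_angle_deg=60.0):
    """Hedged sketch of a Johnson-Hebert-style spin image at oriented point (p, n).

    alpha: distance of a contributing point from the axis through p along n.
    beta:  signed distance of the point along n.
    """
    n = n / np.linalg.norm(n)
    d = vertices - p                                    # vectors from p to every vertex
    beta = d @ n                                        # height along the normal
    alpha = np.sqrt(np.maximum((d ** 2).sum(axis=1) - beta ** 2, 0.0))

    # "contributing" points: close enough to p and with a compatible normal direction
    support = bin_size * image_width
    cos_max = np.cos(np.deg2rad(max_angle_deg))
    keep = (np.linalg.norm(d, axis=1) < support) & (normals @ n > cos_max)

    img = np.zeros((image_width, image_width))
    i = (beta[keep] / bin_size + image_width / 2).astype(int)   # rows: beta (centered)
    j = (alpha[keep] / bin_size).astype(int)                    # cols: alpha
    ok = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[ok], j[ok]), 1.0)                         # accumulate the 2-D histogram
    return img
```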

25 Symbolic surface signatures (Figure 2) are somewhat related to numeric surface signatures in that they also start with a point P on the surface mesh and consider a set of contributing points Q, which are still defined in terms of the distance from P and the support angle. [sent-68, score-1.86]

26 The main difference is that they are derived from a labeled surface mesh (shown in Figure 2a); each vertex of the mesh has an associated symbolic label referencing a surface region or component in which it lies. [sent-69, score-1.817]

27 For symbolic surface signature construction, the vector P Q in Figure 2b is projected to the tangent plane at P where a set of orthogonal axes γ and δ have been defined. [sent-72, score-0.599]

28 The resultant array is the symbolic surface signature at point P . [sent-79, score-0.646]

29 Figure 2: The symbolic surface signature for point P on a labeled surface mesh model of a human head. [sent-82, score-1.436]
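
A minimal sketch of the symbolic-signature construction just described, assuming the tangent-plane axes γ and δ and a labeled vertex array are given; the discretization and collision handling below are guesses, not the procedure of [10].

```python
import numpy as np

def symbolic_signature(p, n, gamma, delta, vertices, labels,
                       bin_size=2.0, image_width=40, support=80.0):
    """Hedged sketch of a symbolic surface signature at point p of a labeled mesh.

    n is the (unit) surface normal at p; gamma and delta are the orthogonal
    tangent-plane axes mentioned in the text (how they are chosen is not
    reproduced here).  Contributing points are projected onto the tangent plane
    and their component labels written into a 2-D array.
    """
    d = vertices - p
    keep = np.linalg.norm(d, axis=1) < support
    d, lab = d[keep], labels[keep]

    d_t = d - np.outer(d @ n, n)                           # project P->Q onto the tangent plane
    u = (d_t @ gamma / bin_size + image_width / 2).astype(int)
    v = (d_t @ delta / bin_size + image_width / 2).astype(int)

    sig = np.zeros((image_width, image_width), dtype=int)  # 0 means "no label"
    ok = (u >= 0) & (u < image_width) & (v >= 0) & (v < image_width)
    sig[v[ok], u[ok]] = lab[ok]                            # last write wins (simplification)
    return sig
```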

30 Classifying shape classes: We consider the classification task for which we are given a set of l surface meshes C = {C1, ..., Cl} representing two classes of object shapes. [sent-85, score-0.958]

31 The problem is to use the given meshes and the labels to construct an algorithm that predicts the label y of a new surface mesh C. [sent-87, score-0.896]

32 We let C+1 (C−1 ) denote the shape class labeled with y = +1 (y = −1, respectively). [sent-88, score-0.408]

33 This can be achieved by using a morphable surface models technique such as the one described in [10]. [sent-90, score-0.334]

34 Finding shape class components: Before shape class learning can take place, the salient feature components associated with C+1 and C−1 must be specified. [sent-91, score-0.806]

35 Each component of a class is identified by a particular region located on the surface of the class members. [sent-92, score-0.616]

36 For each class C+1 and C−1 the components are constructed one at a time using a region growing algorithm. [sent-93, score-0.271]

37 This algorithm iteratively constructs a classification function (novelty detector), which captures regions in the space of numeric signatures S that approximately correspond to the support of an assumed probability distribution function FS associated with the class component under consideration. [sent-94, score-0.874]

38 In this context, a shape class component is defined as the set of all mesh points of the surface meshes in a shape class whose numeric signatures lie inside of the support region estimated by the classification function. [sent-95, score-2.44]
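
Sentences 37-38 amount to estimating the support of the signature distribution with a one-class classifier. Below is a minimal sketch using scikit-learn's OneClassSVM as a stand-in for the ν-SVM novelty detector of [12]; the data and the ν/kernel settings are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# T_p_y: numeric signatures (spin images flattened to vectors) collected at the
# points corresponding to one critical point p of class Cy.  Placeholder data.
T_p_y = np.random.rand(200, 70 * 70)

component_detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(T_p_y)

# Mesh points whose signatures fall inside the estimated support region
# (predict == +1) are taken to belong to the shape-class component.
new_signatures = np.random.rand(5, 70 * 70)
in_component = component_detector.predict(new_signatures) == 1
```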

39 Figure 3: The component R was grown around the critical point p using the algorithm described in the text. [sent-97, score-0.294]

40 The numeric signatures for the critical point p of five of the models are also shown. [sent-99, score-0.816]

41 Their image width is 70 pixels, and their region of influence covers about three quarters of the surface mesh models. [sent-100, score-0.775]

42 The input of this phase is a set of surface meshes that are samples of an object class Cy . [sent-102, score-0.672]

43 Select a set of critical points on a training object for class Cy . [sent-104, score-0.341]

44 Note that the critical points chosen for class C+ can differ from the critical points chosen for class C− . [sent-107, score-0.47]

45 For each critical point p of a class Cy , compute the numeric signatures at the corresponding points of every training instance of Cy ; this set of signatures is the training set Tp,y for critical point p of class Cy . [sent-111, score-1.664]
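
A hedged sketch of how such a training set Tp,y could be assembled, assuming point correspondences across the training meshes are available (which the morphable models of [10], sentence 33, provide); all names are illustrative, not the authors' API.

```python
import numpy as np

def build_training_set(p, meshes_y, correspondence, signature_fn):
    """Hedged sketch: assemble the training set T_{p,y} for one critical point p.

    meshes_y             -- training surface meshes of class Cy (objects with
                            `vertices` and `normals` arrays; assumed interface)
    correspondence[m][p] -- vertex of mesh m corresponding to critical point p
    signature_fn         -- a numeric-signature function such as the spin-image
                            sketch above
    """
    T_p_y = []
    for m, mesh in enumerate(meshes_y):
        q = correspondence[m][p]                        # corresponding vertex index
        sig = signature_fn(mesh.vertices[q], mesh.normals[q],
                           mesh.vertices, mesh.normals)
        T_p_y.append(sig.ravel())                       # flatten the 2-D histogram
    return np.asarray(T_p_y)
```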

46 For each critical point p of class Cy , train a component detector (implemented as a ν-SVM novelty detector [12]) to learn a component about p, using the training set T p,y . [sent-113, score-0.708]

47 The component detector will actually grow a region about p using the shape information of the numeric signatures in the training sample. [sent-114, score-1.208]

48 Using the classifier for point p, perform an iterative component growing operation to expand the component about p. [sent-118, score-0.273]

49 Figure 3 shows an example of a component grown by this technique about critical point p on a training set of 200 human faces from the University of South Florida database. [sent-128, score-0.397]

50 At the end of step I, there are my component detectors, each of which can identify the component of a particular critical point of the object shape class Cy . [sent-129, score-0.731]

51 That is, when applied to a surface mesh, each component detector will determine which vertices it thinks belong to its learned component (positive surface points), and which vertices do not. [sent-130, score-0.971]

52 The input of this step is the training set of numeric signatures and their corresponding labels for each of the m = m+1 + m−1 components. [sent-132, score-0.714]

53 The output is a component classifier (multi-class νSVM) that, when given a positive surface point of a surface mesh previously processed with the bank of component detectors, will determine the particular component of the m components to which this point belongs. [sent-134, score-1.465]
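
A minimal sketch of such a multi-class component classifier, using scikit-learn's NuSVC as a stand-in for the multi-class ν-SVM; the data sizes, ν value, and the choice of m = 8 components (as in Figure 4) are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import NuSVC

# X: numeric signatures of the "positive" surface points returned by the bank of
# component detectors; y: the index (1..m) of the component each signature came
# from.  Placeholder data.
X = np.random.rand(800, 70 * 70)
y = np.random.randint(1, 9, size=800)

component_classifier = NuSVC(kernel="rbf", nu=0.1, gamma="scale").fit(X, y)

# Assign a component label to a point already flagged as positive by a detector.
label = component_classifier.predict(np.random.rand(1, 70 * 70))
```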

54 Learning spatial relationships: The ensemble of component detectors and the component classifier described above define our classification module mentioned at the beginning of the section. [sent-135, score-0.478]

55 A central feature of this module is that it can be used for learning the spatial configuration of the labeled components just by providing as input the set C of training surface meshes with each vertex labeled with the label of its component or zero if it does not belong to a component. [sent-136, score-1.079]

56 The algorithm proceeds in the same fashion as described above except that the classifiers operate on the symbolic surface signatures of the labeled mesh. [sent-137, score-1.039]

57 The signatures are embedded in a Hilbert space by means of a Mercer kernel that is constructed as follows. [sent-138, score-0.482]

58 Since symbolic surface signatures are defined up to a rotation, we use the virtual SV method for training all the classifiers involved. [sent-144, score-1.018]

59 The method consists of training a component detector on the signatures to calculate the support vectors. [sent-145, score-0.699]

60 Finally, the novelty detector used by the algorithm is trained with the enlarged data set consisting of the original training data and the set of virtual support vectors. [sent-148, score-0.325]
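
Sentences 58-60 describe the virtual support vector trick. A hedged sketch follows: the rotation set, nearest-neighbour interpolation, and the scikit-learn/scipy stand-ins are assumptions, since the paper does not spell out these details here.

```python
import numpy as np
from scipy.ndimage import rotate
from sklearn.svm import OneClassSVM

def train_with_virtual_svs(signatures, angles=(90, 180, 270), nu=0.1):
    """Hedged sketch of the virtual-SV trick for rotation invariance.

    `signatures` is an (N, W, W) stack of (symbolic) signature arrays.
    """
    X = signatures.reshape(len(signatures), -1)
    detector = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X)

    # rotate only the support vectors to create "virtual" training examples
    sv = signatures[detector.support_]
    virtual = [rotate(s, a, reshape=False, order=0) for s in sv for a in angles]
    X_aug = np.vstack([X, np.asarray(virtual).reshape(len(virtual), -1)])

    # retrain the novelty detector on the enlarged set (original data + virtual SVs)
    return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X_aug)
```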

61 The worst-case complexity of the classification module is O(n c^2 s), where n is the number of vertices of the input mesh, s is the size of the input signatures (either numeric or symbolic), and c is the number of novelty detectors. [sent-149, score-0.94]
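
As an illustrative back-of-the-envelope check (the numbers are not from the paper): for a mesh with n = 60,000 vertices, c = 10 novelty detectors, and spin images with s = 70 x 70 = 4,900 entries, the bound n c^2 s is on the order of 3 x 10^10 elementary operations per mesh.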

62 A classification example: An architecture capable of discriminating two shape classes consists of a cascade of two classification modules. [sent-151, score-0.37]

63 The first module identifies the components of each shape class, while the second verifies the geometric consistency (spatial relationships) of the components. [sent-152, score-0.512]
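
A minimal sketch of the cascade as a pipeline of two callables; both are assumed stand-ins for the trained ensembles described above, not the authors' interfaces.

```python
def classify_mesh(mesh, numeric_module, symbolic_module):
    """Hedged sketch of the two-module cascade: the first callable labels
    vertices with component indices from their numeric signatures, the second
    checks the spatial arrangement of those components via symbolic signatures."""
    labeled_mesh = numeric_module(mesh)            # per-vertex component labels (0 = none)
    verdict = symbolic_module(labeled_mesh)        # per-vertex +1 / -1 class decision
    return verdict
```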

64 Figure 4 illustrates the classification procedure on two sample surface meshes from a test set of 200 human heads. [sent-153, score-0.541]

65 The first mesh (Figure 4 a) belongs to the class of healthy individuals, while the second (Figure 4 e) belongs to the class of individuals with a congenital syndrome that produces a pathological craniofacial deformation. [sent-154, score-0.713]

66 The input classification module was trained with a set of 400 surface meshes and 4 critical points per class to recognize the eight components shown in Figure 4 b and f. [sent-155, score-0.981]

67 Each of the test surface meshes was individually processed as follows. [sent-157, score-0.502]

68 Given an input surface mesh to the first classification module, the classifier ensemble (component detectors and components classifier) is applied to the numeric surface signatures of its points (Figure 4 a and e). [sent-158, score-1.917]

69 A connected components algorithm is then applied to the result and components of size below a threshold (10 mesh points) are discarded. [sent-159, score-0.506]
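
Sentence 69's size filter can be sketched as a label-aware connected-components pass over the mesh graph; the adjacency structure below is an assumption-based illustration, not the authors' code, and only the 10-point threshold comes from the text.

```python
from collections import deque

def filter_small_components(labels, adjacency, min_size=10):
    """Hedged sketch of the size filter: connected groups of identically labeled
    vertices with fewer than `min_size` mesh points are relabeled to 0 ("no
    component").  `adjacency[v]` lists the mesh neighbours of vertex v."""
    labels = list(labels)
    seen = [False] * len(labels)
    for start in range(len(labels)):
        if seen[start] or labels[start] == 0:
            continue
        group, queue = [start], deque([start])     # breadth-first search over
        seen[start] = True                         # same-labeled neighbours
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if not seen[w] and labels[w] == labels[start]:
                    seen[w] = True
                    group.append(w)
                    queue.append(w)
        if len(group) < min_size:
            for v in group:
                labels[v] = 0                      # discard the small component
    return labels
```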

70 After this process the resulting labeled mesh is fed to the second classification module that was trained with 400 labeled meshes and two critical points to recognize two new components. [sent-160, score-1.023]

71 Consequently, the points of the output mesh of the second module will be set to “+1” if they belong to learned symbolic signatures associated with the healthy heads (Figure 4 c), and “-1” otherwise (Figure 4 g). [sent-164, score-1.365]

72 Figure 4 c (g) shows the region found by our algorithm that corresponds to the shape class model of a normal (respectively, abnormal) head. [sent-166, score-0.458]

73 For classification with human heads the data consisted of 600 surface mesh models (400 training samples and 200 testing samples). [sent-177, score-0.902]

74 For the faces, the data sets consisted of 300 surface meshes (200 training samples and 100 testing samples). [sent-179, score-0.582]

75 The corresponding mesh resolution was set to about 0. [sent-180, score-0.366]

76 All the surface models considered here were obtained from range data scanners and all the deformable models were constructed using the methods described in [10]. [sent-182, score-0.434]

77 We tested the stability in the formation of shape class components using the faces data set. [sent-183, score-0.436]

78 This set contains a significant amount of shape variability. [sent-184, score-0.287]

79 The first module of our classifier must generate stable components to allow the second module to discriminate their corresponding geometric configurations. [sent-188, score-0.445]

80 We trained the first classification module with a set of 200 faces using critical points arbitrarily located on the cheek, chin, forehead and philtrum of the surface models. [sent-189, score-0.783]

81 This rate is reasonably high considering the amount of shape variability in the data set (Fig. [sent-192, score-0.287]

82 We performed classification of normal versus abnormal human heads, a task that often occurs in medical settings. [sent-195, score-0.336]

83 The surface meshes of each of these examples were convex combinations of normal and abnormal heads. [sent-202, score-0.761]

84 Our shape classifier was able to discriminate with high accuracy between normal and abnormal models. [sent-207, score-0.567]

85 It was also able to discriminate classes that share a significant amount of common shape features (see II-B∗ in Table 1). [sent-208, score-0.391]

86 The methods cited in the introduction were not considered for direct comparison, because they use global shape representations that were designed for classifying complete 3-D models. [sent-211, score-0.263]

87 Our approach using symbolic signatures can operate on single-view data sets containing partial model information, as shown by the experimental results performed on several shape classes [10]. [sent-212, score-0.952]

88 4 Discussion and Conclusion: We presented a supervised approach to classification of 3-D shapes represented by range data that learns class components and their geometrical relationships from surface descriptors. [sent-214, score-0.59]

89 We performed discrimination experiments on two shape classes (normal vs. abnormal) and studied the stability in the formation of class components using a collection of real face models containing a large amount of shape variability. [sent-216, score-0.427]

90 The numeric and symbolic shape descriptors considered here are important. [sent-220, score-0.752]

91 For example, the spin image defined on the forehead (point P) in Figure 3 encodes information about the shape of most of the face (including the chin). [sent-222, score-0.422]

92 Spin images and some variants [11] are reliable for encoding surface shape in the present context. [sent-224, score-0.597]

93 Other descriptors such as curvature-based or harmonic signatures are not descriptive enough or lack robustness to scene clutter and occlusion. [sent-225, score-0.584]

94 Nevertheless, the shape descriptors captured enough global information to allow a classifier to discriminate between the distinctive features of normal and abnormal heads. [sent-227, score-0.502]

95 Footnote 2: Test samples were obtained from models with craniofacial features based upon either the Greig cephalopolysyndactyly (A) or the trisomy 9 mosaic (B) syndromes [6]. [sent-228, score-0.34]

96 The structure of the classification module (bank of novelty detectors and multi-class classifier) is important. [sent-229, score-0.332]

97 The experimental results showed us that the output of the novelty detectors is not always reliable and the multi-class classifier becomes critical for constructing stable and consistent class components. [sent-230, score-0.361]

98 The essential point consists of generating groups of neighboring surface points whose shape descriptors are similar but distinctive enough from the signatures of other components. [sent-234, score-1.261]

99 1) Our method is able to model shape classes containing significant shape variance and can absorb about 20% of scale changes. [sent-236, score-0.585]

100 However, larger sets are required in order to capture the shape variability of the abnormal craniofacial features due to race, age and gender. [sent-239, score-0.52]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('signatures', 0.458), ('mesh', 0.366), ('surface', 0.334), ('shape', 0.263), ('numeric', 0.225), ('abnormal', 0.181), ('symbolic', 0.172), ('cy', 0.168), ('meshes', 0.168), ('module', 0.151), ('classi', 0.14), ('critical', 0.11), ('novelty', 0.106), ('component', 0.095), ('signature', 0.093), ('descriptors', 0.092), ('detector', 0.089), ('spin', 0.085), ('heads', 0.083), ('normal', 0.078), ('craniofacial', 0.076), ('object', 0.075), ('detectors', 0.075), ('labeled', 0.075), ('components', 0.07), ('class', 0.07), ('er', 0.068), ('grown', 0.066), ('growing', 0.06), ('shapes', 0.059), ('classes', 0.059), ('discrimination', 0.058), ('healthy', 0.056), ('points', 0.055), ('deformable', 0.053), ('categorizing', 0.053), ('cation', 0.049), ('discriminating', 0.048), ('region', 0.047), ('congenital', 0.046), ('forehead', 0.046), ('funkhouser', 0.046), ('syndromes', 0.046), ('discriminate', 0.045), ('chin', 0.04), ('meil', 0.04), ('human', 0.039), ('contributing', 0.039), ('medical', 0.038), ('objects', 0.037), ('shapiro', 0.036), ('distinctive', 0.036), ('recognizing', 0.036), ('guration', 0.035), ('clutter', 0.034), ('relationships', 0.034), ('categorization', 0.033), ('faces', 0.033), ('training', 0.031), ('cheek', 0.031), ('dobkin', 0.031), ('golland', 0.031), ('malformed', 0.031), ('osada', 0.031), ('philtrum', 0.031), ('svp', 0.031), ('bank', 0.03), ('ab', 0.03), ('individuals', 0.029), ('label', 0.028), ('spatial', 0.028), ('geometric', 0.028), ('image', 0.028), ('enlarged', 0.027), ('race', 0.027), ('abnormalities', 0.027), ('heisele', 0.027), ('tp', 0.027), ('support', 0.026), ('ij', 0.026), ('facial', 0.026), ('angle', 0.025), ('encode', 0.025), ('samples', 0.025), ('constructed', 0.024), ('ers', 0.024), ('diagnosis', 0.024), ('johnson', 0.024), ('testing', 0.024), ('amount', 0.024), ('array', 0.024), ('belong', 0.024), ('range', 0.023), ('point', 0.023), ('trained', 0.023), ('virtual', 0.023), ('correspondences', 0.023), ('cluttered', 0.023), ('hebert', 0.023), ('histogram', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 53 nips-2003-Discriminating Deformable Shape Classes

Author: Salvador Ruiz-correa, Linda G. Shapiro, Marina Meila, Gabriel Berson

Abstract: We present and empirically test a novel approach for categorizing 3-D free form object shapes represented by range data . In contrast to traditional surface-signature based systems that use alignment to match specific objects, we adapted the newly introduced symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. We used our classification engine in a series of large scale discrimination experiments on two well-defined classes that share many common distinctive features. The experimental results suggest that our method outperforms traditional numeric signature-based methodologies. 1 1

2 0.19157751 81 nips-2003-Geometric Analysis of Constrained Curves

Author: Anuj Srivastava, Washington Mio, Xiuwen Liu, Eric Klassen

Abstract: We present a geometric approach to statistical shape analysis of closed curves in images. The basic idea is to specify a space of closed curves satisfying given constraints, and exploit the differential geometry of this space to solve optimization and inference problems. We demonstrate this approach by: (i) defining and computing statistics of observed shapes, (ii) defining and learning a parametric probability model on shape space, and (iii) designing a binary hypothesis test on this space. 1

3 0.10834397 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

Author: Kevin P. Murphy, Antonio Torralba, William T. Freeman

Abstract: Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification. 1

4 0.10578137 106 nips-2003-Learning Non-Rigid 3D Shape from 2D Motion

Author: Lorenzo Torresani, Aaron Hertzmann, Christoph Bregler

Abstract: This paper presents an algorithm for learning the time-varying shape of a non-rigid 3D object from uncalibrated 2D tracking data. We model shape motion as a rigid component (rotation and translation) combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed. We constrain the problem by assuming that the object shape at each time instant is drawn from a Gaussian distribution. Based on this assumption, the algorithm simultaneously estimates 3D shape and motion for each time frame, learns the parameters of the Gaussian, and robustly fills-in missing data points. We then extend the algorithm to model temporal smoothness in object shape, thus allowing it to handle severe cases of missing data. 1

5 0.090446264 113 nips-2003-Learning with Local and Global Consistency

Author: Dengyong Zhou, Olivier Bousquet, Thomas N. Lal, Jason Weston, Bernhard Schölkopf

Abstract: We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data. 1

6 0.08549393 95 nips-2003-Insights from Machine Learning Applied to Human Visual Classification

7 0.084375612 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

8 0.077132195 186 nips-2003-Towards Social Robots: Automatic Evaluation of Human-Robot Interaction by Facial Expression Classification

9 0.066785276 133 nips-2003-Mutual Boosting for Contextual Inference

10 0.065103747 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

11 0.061528936 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

12 0.061416779 188 nips-2003-Training fMRI Classifiers to Detect Cognitive States across Multiple Human Subjects

13 0.055307645 8 nips-2003-A Holistic Approach to Compositional Semantics: a connectionist model and robot experiments

14 0.053747345 90 nips-2003-Increase Information Transfer Rates in BCI by CSP Extension to Multi-class

15 0.050275605 43 nips-2003-Bounded Invariance and the Formation of Place Fields

16 0.049982268 35 nips-2003-Attractive People: Assembling Loose-Limbed Models using Non-parametric Belief Propagation

17 0.049570281 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms

18 0.04788325 124 nips-2003-Max-Margin Markov Networks

19 0.047330901 153 nips-2003-Parameterized Novelty Detectors for Environmental Sensor Monitoring

20 0.043599896 12 nips-2003-A Model for Learning the Semantics of Pictures


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.165), (1, -0.056), (2, 0.041), (3, -0.132), (4, -0.146), (5, -0.095), (6, -0.013), (7, -0.028), (8, 0.021), (9, -0.001), (10, -0.006), (11, 0.031), (12, 0.066), (13, 0.031), (14, 0.065), (15, -0.028), (16, 0.038), (17, 0.037), (18, 0.123), (19, -0.122), (20, 0.179), (21, -0.031), (22, 0.019), (23, -0.014), (24, -0.056), (25, 0.057), (26, 0.153), (27, 0.039), (28, -0.098), (29, 0.191), (30, 0.009), (31, -0.182), (32, 0.095), (33, -0.01), (34, 0.012), (35, 0.139), (36, 0.088), (37, 0.031), (38, -0.047), (39, -0.101), (40, -0.029), (41, 0.05), (42, 0.117), (43, 0.177), (44, -0.006), (45, 0.116), (46, 0.042), (47, -0.022), (48, 0.044), (49, -0.002)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94928586 53 nips-2003-Discriminating Deformable Shape Classes

Author: Salvador Ruiz-correa, Linda G. Shapiro, Marina Meila, Gabriel Berson

Abstract: We present and empirically test a novel approach for categorizing 3-D free form object shapes represented by range data . In contrast to traditional surface-signature based systems that use alignment to match specific objects, we adapted the newly introduced symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. We used our classification engine in a series of large scale discrimination experiments on two well-defined classes that share many common distinctive features. The experimental results suggest that our method outperforms traditional numeric signature-based methodologies. 1 1

2 0.79455733 81 nips-2003-Geometric Analysis of Constrained Curves

Author: Anuj Srivastava, Washington Mio, Xiuwen Liu, Eric Klassen

Abstract: We present a geometric approach to statistical shape analysis of closed curves in images. The basic idea is to specify a space of closed curves satisfying given constraints, and exploit the differential geometry of this space to solve optimization and inference problems. We demonstrate this approach by: (i) defining and computing statistics of observed shapes, (ii) defining and learning a parametric probability model on shape space, and (iii) designing a binary hypothesis test on this space. 1

3 0.56797296 106 nips-2003-Learning Non-Rigid 3D Shape from 2D Motion

Author: Lorenzo Torresani, Aaron Hertzmann, Christoph Bregler

Abstract: This paper presents an algorithm for learning the time-varying shape of a non-rigid 3D object from uncalibrated 2D tracking data. We model shape motion as a rigid component (rotation and translation) combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed. We constrain the problem by assuming that the object shape at each time instant is drawn from a Gaussian distribution. Based on this assumption, the algorithm simultaneously estimates 3D shape and motion for each time frame, learns the parameters of the Gaussian, and robustly fills-in missing data points. We then extend the algorithm to model temporal smoothness in object shape, thus allowing it to handle severe cases of missing data. 1

4 0.53423584 28 nips-2003-Application of SVMs for Colour Classification and Collision Detection with AIBO Robots

Author: Michael J. Quinlan, Stephan K. Chalup, Richard H. Middleton

Abstract: This article addresses the issues of colour classification and collision detection as they occur in the legged league robot soccer environment of RoboCup. We show how the method of one-class classification with support vector machines (SVMs) can be applied to solve these tasks satisfactorily using the limited hardware capacity of the prescribed Sony AIBO quadruped robots. The experimental evaluation shows an improvement over our previous methods of ellipse fitting for colour classification and the statistical approach used for collision detection.

5 0.45958796 192 nips-2003-Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes

Author: Kevin P. Murphy, Antonio Torralba, William T. Freeman

Abstract: Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification. 1

6 0.44790843 95 nips-2003-Insights from Machine Learning Applied to Human Visual Classification

7 0.433065 85 nips-2003-Human and Ideal Observers for Detecting Image Curves

8 0.41110855 188 nips-2003-Training fMRI Classifiers to Detect Cognitive States across Multiple Human Subjects

9 0.40342116 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

10 0.39595374 90 nips-2003-Increase Information Transfer Rates in BCI by CSP Extension to Multi-class

11 0.3895897 153 nips-2003-Parameterized Novelty Detectors for Environmental Sensor Monitoring

12 0.37180752 181 nips-2003-Statistical Debugging of Sampled Programs

13 0.3679336 9 nips-2003-A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

14 0.36053672 186 nips-2003-Towards Social Robots: Automatic Evaluation of Human-Robot Interaction by Facial Expression Classification

15 0.35662982 113 nips-2003-Learning with Local and Global Consistency

16 0.35279438 3 nips-2003-AUC Optimization vs. Error Rate Minimization

17 0.30620128 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

18 0.2863394 147 nips-2003-Online Learning via Global Feedback for Phrase Recognition

19 0.27916753 39 nips-2003-Bayesian Color Constancy with Non-Gaussian Models

20 0.27472615 133 nips-2003-Mutual Boosting for Contextual Inference


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.486), (11, 0.022), (30, 0.018), (35, 0.024), (53, 0.076), (71, 0.058), (76, 0.033), (85, 0.084), (91, 0.085)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.9449203 29 nips-2003-Applying Metric-Trees to Belief-Point POMDPs

Author: Joelle Pineau, Geoffrey J. Gordon, Sebastian Thrun

Abstract: Recent developments in grid-based and point-based approximation algorithms for POMDPs have greatly improved the tractability of POMDP planning. These approaches operate on sets of belief points by individually learning a value function for each point. In reality, belief points exist in a highly-structured metric simplex, but current POMDP algorithms do not exploit this property. This paper presents a new metric-tree algorithm which can be used in the context of POMDP planning to sort belief points spatially, and then perform fast value function updates over groups of points. We present results showing that this approach can reduce computation in point-based POMDP algorithms for a wide range of problems. 1

same-paper 2 0.94025415 53 nips-2003-Discriminating Deformable Shape Classes

Author: Salvador Ruiz-correa, Linda G. Shapiro, Marina Meila, Gabriel Berson

Abstract: We present and empirically test a novel approach for categorizing 3-D free form object shapes represented by range data . In contrast to traditional surface-signature based systems that use alignment to match specific objects, we adapted the newly introduced symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. We used our classification engine in a series of large scale discrimination experiments on two well-defined classes that share many common distinctive features. The experimental results suggest that our method outperforms traditional numeric signature-based methodologies. 1 1

3 0.93347752 171 nips-2003-Semi-Definite Programming by Perceptron Learning

Author: Thore Graepel, Ralf Herbrich, Andriy Kharechko, John S. Shawe-taylor

Abstract: We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasable region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods. 1

4 0.85826021 22 nips-2003-An Improved Scheme for Detection and Labelling in Johansson Displays

Author: Claudio Fanti, Marzia Polito, Pietro Perona

Abstract: Consider a number of moving points, where each point is attached to a joint of the human body and projected onto an image plane. Johannson showed that humans can effortlessly detect and recognize the presence of other humans from such displays. This is true even when some of the body points are missing (e.g. because of occlusion) and unrelated clutter points are added to the display. We are interested in replicating this ability in a machine. To this end, we present a labelling and detection scheme in a probabilistic framework. Our method is based on representing the joint probability density of positions and velocities of body points with a graphical model, and using Loopy Belief Propagation to calculate a likely interpretation of the scene. Furthermore, we introduce a global variable representing the body’s centroid. Experiments on one motion-captured sequence suggest that our scheme improves on the accuracy of a previous approach based on triangulated graphical models, especially when very few parts are visible. The improvement is due both to the more general graph structure we use and, more significantly, to the introduction of the centroid variable. 1

5 0.67143559 42 nips-2003-Bounded Finite State Controllers

Author: Pascal Poupart, Craig Boutilier

Abstract: We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted controller space) and policy iteration (less vulnerability to local optima).

6 0.61848646 33 nips-2003-Approximate Planning in POMDPs with Macro-Actions

7 0.56362224 113 nips-2003-Learning with Local and Global Consistency

8 0.56170702 116 nips-2003-Linear Program Approximations for Factored Continuous-State Markov Decision Processes

9 0.55358952 106 nips-2003-Learning Non-Rigid 3D Shape from 2D Motion

10 0.54789209 78 nips-2003-Gaussian Processes in Reinforcement Learning

11 0.53588504 81 nips-2003-Geometric Analysis of Constrained Curves

12 0.53262389 6 nips-2003-A Fast Multi-Resolution Method for Detection of Significant Spatial Disease Clusters

13 0.52952731 147 nips-2003-Online Learning via Global Feedback for Phrase Recognition

14 0.52751076 109 nips-2003-Learning a Rare Event Detection Cascade by Direct Feature Selection

15 0.52517474 172 nips-2003-Semi-Supervised Learning with Trees

16 0.52489233 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

17 0.51958519 48 nips-2003-Convex Methods for Transduction

18 0.51742136 132 nips-2003-Multiple Instance Learning via Disjunctive Programming Boosting

19 0.5140658 34 nips-2003-Approximate Policy Iteration with a Policy Language Bias

20 0.50904715 80 nips-2003-Generalised Propagation for Fast Fourier Transforms with Partial or Missing Data