nips nips2000 nips2000-53 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Frank Dellaert, Steven M. Seitz, Sebastian Thrun, Charles E. Thorpe
Abstract: When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead considers the distribution over all possible correspondences.
Reference: text
sentIndex sentText sentNum sentScore
1 When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. [sent-4, score-0.545]
2 Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. [sent-5, score-0.343]
3 In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead consider the distribution over all possible correspondences. [sent-6, score-0.147]
4 We treat both a fully Bayesian approach that yields a posterior distribution, and a MAP approach that makes use of EM to maximize this posterior. [sent-7, score-0.251]
5 We show how Markov chain Monte Carlo methods can be used to implement these techniques in practice, and present experimental results on real data. [sent-8, score-0.084]
6 1 Introduction Structure from motion (SFM) addresses the problem of simultaneously recovering camera pose and a three-dimensional model from a collection of images. [sent-9, score-0.306]
7 This problem has received considerable attention in the computer vision community [1, 2, 3]. [sent-10, score-0.169]
8 Methods that can robustly reconstruct the 3D structure of environments have a potentially large impact in many areas of societal importance, such as architecture, entertainment, space exploration and mobile robotics. [sent-11, score-0.157]
9 A particularly hard aspect is the data association problem, i.e., the question of determining correspondence between features observed in different images. [sent-14, score-0.406]
10 This problem has been referred to as the most difficult part of structure recovery [4], and is particularly challenging if the images have been taken from widely separated viewpoints. [sent-15, score-0.212]
11 Virtually all existing approaches assume that either the correspondence is known a priori, or that features can be tracked from frame to frame [1, 2]. [sent-16, score-0.616]
12 Methods based on the robust recovery of epipolar geometry [3, 4] can cope with larger inter-frame displacements, but still depend on the ability to identify a set of initial correspondences to seed the robust matching process. [sent-17, score-0.185]
13 In this paper, we are interested in cases where individual camera images are recorded from vastly different viewpoints, which renders existing SFM approaches inapplicable. [sent-18, score-0.295]
14 Traditional approaches for establishing correspondence between sets of 2D features [5, 6, 7] are of limited use in this domain, as the projected 3D structure can look very different in each image. [sent-19, score-0.538]
15 This paper proposes a Bayesian approach to data association. [sent-20, score-0.046]
16 Instead of considering a single correspondence only (which we conjecture to be brittle), our approach considers whole distributions over correspondences. [sent-21, score-0.437]
17 As a result, our approach is more robust, and from a Bayesian perspective it is also sound. [sent-22, score-0.046]
18 Unfortunately, no closed-form solution exists for calculating these distributions conditioned on the camera images. [sent-23, score-0.179]
19 Therefore, we propose to use the Metropolis-Hastings algorithm, a popular Markov chain Monte Carlo (MCMC) method, to sample from the posterior. [sent-24, score-0.209]
20 We propose two methods; the first, discussed in Section 2, is mathematically more powerful but computationally expensive. [sent-26, score-0.046]
21 It uses MCMC to sample from the joint distribution over both correspondences and three-dimensional scene structure. [sent-27, score-0.274]
22 While this approach is mathematically elegant from a Bayesian point of view, we have so far only been able to obtain results for simple, artificial domains. [sent-28, score-0.092]
23 Thus, to cope with large-scale data sets, we propose in Section 3 a maximum a posteriori (MAP) approach using the Expectation-Maximization (EM) algorithm to maximize the posterior. [sent-29, score-0.232]
24 Here we use MCMC sampling only for the data association problem. [sent-30, score-0.24]
25 Simulated annealing is used to reduce the danger of getting stuck in local minima. [sent-31, score-0.298]
26 Experimental results obtained in realistic domains and presented in Section 4 suggest that this approach works well in the general SFM case, and that it scales favorably to complex computer vision problems. [sent-32, score-0.211]
27 The idea of using MCMC for data association has been used before by [8] in the context of a traffic surveillance application. [sent-33, score-0.124]
28 However, their approach is not directly applicable to SFM, as the computer vision domain is characterized by a large number of local minima. [sent-34, score-0.22]
29 Our paper goes beyond theirs in two important aspects: first, we develop a framework for MCMC sampling over both the data association and the model, and second, we apply annealing to smooth the posterior so as to reduce the chance of getting stuck in local minima. [sent-35, score-0.624]
30 2 A Fully Bayesian Approach using MCMC Below we derive the general approach for MCMC sampling from the joint posterior over data association and models. [sent-37, score-0.478]
31 We only show results for a simple example from pose estimation, as this approach is computationally very demanding. [sent-38, score-0.123]
32 An EM approach based on the general principles described here, but applicable to larger-scale problems, will be described in the next section. [sent-39, score-0.088]
33 2.1 Structure from Motion The structure from motion problem is this: given a set of images of a scene, taken from different viewpoints, recover the 3D structure of the scene along with the camera parameters. [sent-41, score-0.6]
34 In the feature-based approach to SFM, we consider the situation in which a set of $N$ 3D features $x_j$ is viewed by a set of $m$ cameras with parameters $m_i$. [sent-42, score-0.161]
35 As input data we are given the set of 2D measurements $u_{ik}$ in the images, where $k \in \{1, \ldots, K_i\}$. [sent-43, score-0.149]
36 Here $K_i$ is the number of measurements in the i-th image. [sent-45, score-0.149]
37 To model correspondence information, we introduce for each measurement $u_{ik}$ the indicator variable $j_{ik}$, indicating that $u_{ik}$ is a measurement of the $j_{ik}$-th feature $x_{j_{ik}}$. [sent-46, score-0.597]
38 The choice of feature type and camera model determines the measurement function $h(m_i, x_j)$, predicting the measurement $u_{ik}$ given $m_i$ and $x_j$ (with $j = j_{ik}$): $u_{ik} = h(m_i, x_j) + n$, where $n$ is the measurement noise. [sent-47, score-0.547]
39 Without loss of generality, let us consider the case in which the features $x_j$ are 3D points and the measurements $u_{ik}$ are points in the 2D image. [sent-48, score-0.29]
40 In this case the measurement function can be written as a 3D rigid displacement followed by a projection, $h(m_i, x_j) = \Pi(R_i x_j + t_i)$ (1), where $R_i$ and $t_i$ are the rotation matrix and translation of the i-th camera, respectively, and $\Pi : \mathbb{R}^3 \to \mathbb{R}^2$ is the camera projection model. [sent-49, score-0.387]
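To make the measurement model concrete, here is a minimal sketch in Python/NumPy, assuming a pinhole projection with unit focal length; the function and variable names are ours, for illustration only:

```python
import numpy as np

def project(p):
    """Pinhole projection R^3 -> R^2 (unit focal length assumed)."""
    return p[:2] / p[2]

def h(R_i, t_i, x_j):
    """Measurement function (1): rigid displacement followed by projection."""
    return project(R_i @ x_j + t_i)

# Example: a camera rotated 10 degrees about the y-axis and shifted along z.
angle = np.deg2rad(10.0)
R = np.array([[np.cos(angle), 0.0, np.sin(angle)],
              [0.0, 1.0, 0.0],
              [-np.sin(angle), 0.0, np.cos(angle)]])
t = np.array([0.0, 0.0, 5.0])
x = np.array([0.2, -0.1, 1.0])
u = h(R, t, x)  # predicted 2D measurement u_ik (before adding noise n)
```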
41 2.2 Deriving the Posterior Whereas previous methods single out one "best" correspondence across images, in a Bayesian framework we are interested in characterizing our knowledge about the unknowns conditioned on the data only, averaging over all possible correspondences. [sent-51, score-0.429]
42 Thus, we are interested in the posterior distribution $P(\theta|U)$, where $\theta$ collects the unknown model parameters $m_i$ and $x_j$. [sent-52, score-0.3]
43 In the case of unknown correspondence, we need to sum over all possible assignments $J = \{j_{ik}\}$ to obtain $P(\theta|U) = \sum_J P(J, \theta|U) \propto P(\theta) \sum_J P(U|J, \theta)\, P(J|\theta)$ (2), where we have applied Bayes' law and the chain rule. [sent-53, score-0.299]
44 Let us assume for now that there are no occlusions or spurious measurements, so that $K_i = N$ and $J$ is a set of $m$ permutations $J_i$ of the indices $1, \ldots, N$. [sent-54, score-0.046]
45 Assuming isotropic, normally distributed noise on the measurements, each term in (2) can be calculated using $P(U|J, \theta) \propto \prod_{i=1}^{m} \prod_{k=1}^{K_i} \exp\left(-\frac{1}{2\sigma^2}\, \|u_{ik} - h(m_i, x_{j_{ik}})\|^2\right)$ (3) if each $J_i$ is a permutation, and 0 otherwise. [sent-60, score-0.082]
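Under the same assumptions, the likelihood term (3) for a given assignment can be evaluated as in the sketch below; here U is a list of per-image measurement arrays, J a list of per-image index lists, and h is the measurement function sketched after Eq. (1) (all names are ours, not the paper's):

```python
import numpy as np

def log_likelihood(U, J, cameras, points, sigma):
    """Log of (3): isotropic Gaussian likelihood of the measurements U
    under assignment J; returns -inf if some J_i is not a permutation."""
    N = len(points)
    ll = 0.0
    for i, (R, t) in enumerate(cameras):
        if sorted(J[i]) != list(range(N)):  # each J_i must be a permutation
            return -np.inf
        for k, u_ik in enumerate(U[i]):
            pred = h(R, t, points[J[i][k]])  # h from the sketch above
            ll -= np.sum((u_ik - pred) ** 2) / (2.0 * sigma ** 2)
    return ll
```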
46 2.3 Sampling from the Posterior using MCMC Unfortunately, direct computation of the total posterior distribution $P(\theta|U)$ in (2) is intractable in general, because the number of correspondence assignments $J$ is combinatorial in the number of features and images. [sent-66, score-0.761]
47 As a solution to this computational challenge we propose to instead sample from $P(\theta|U)$. [sent-67, score-0.125]
48 Sampling directly from $P(\theta|U)$ is equally difficult, but if we can obtain a sample $\{(\theta^{(r)}, J^{(r)})\}$ from the joint distribution $P(\theta, J|U)$, we can simply discard the correspondence part $J^{(r)}$ to obtain a sample $\{\theta^{(r)}\}$ from the marginal distribution $P(\theta|U)$. [sent-68, score-0.676]
49 To sample from the joint distribution $P(\theta, J|U)$ we propose to use MCMC sampling, in particular the Metropolis-Hastings algorithm [10]. [sent-69, score-0.232]
50 This method involves simulating a Markov chain whose equilibrium distribution is the desired posterior distribution $P(\theta, J|U)$. [sent-70, score-0.292]
51 Propose a new state $X'$ using a chosen proposal density $Q(X'; X^{(r)})$. [sent-74, score-0.068]
52 "5 ~ 6 Figure 1: Left: A 2D model shape, defined by the 6 feature points XJ' Right: Transformed shape (by a simple rotation) and 6 noisy measurements Uk of the transformed features. [sent-80, score-0.292]
53 The true rotation is 70 degrees; the noise is zero-mean Gaussian. [sent-81, score-0.145]
54 The sequence of tuples $(\theta^{(r)}, J^{(r)})$ thus generated will be a sample from $P(\theta, J|U)$, if the sampler is run sufficiently long. [sent-82, score-0.264]
55 To calculate the acceptance ratio $\alpha$, we assume that the noise on the feature measurements is normally distributed and isotropic. [sent-83, score-0.267]
56 With the target proportional to $\prod_{i=1}^{m} \prod_{k=1}^{K_i} N(u_{ik};\, h(m_i, x_{j_{ik}}),\, \sigma)$, and simplifying the notation by defining $h_{ik}^{(r)} \triangleq h(m_i^{(r)}, x_{j_{ik}}^{(r)})$, we obtain $\alpha = \frac{Q(X^{(r)}; X')}{Q(X'; X^{(r)})} \exp\left(\frac{1}{2\sigma^2} \sum_{ik} \left[\|u_{ik} - h_{ik}^{(r)}\|^2 - \|u_{ik} - h'_{ik}\|^2\right]\right)$ (5). The proposal density $Q(\cdot\,;\cdot)$ is specified by the proposal strategies described in Section 2.4. [sent-86, score-0.068]
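A compact sketch of the resulting sampler (our names throughout; log_target would combine the log-prior with the log of (3), and propose is illustrated in Section 2.4 below):

```python
import numpy as np

def metropolis_hastings(x0, log_target, propose, n_steps, rng):
    """Generic Metropolis-Hastings sampler over a state X = (theta, J).

    propose(x, rng) returns (x_new, log_q_ratio), where log_q_ratio is
    log Q(x; x_new) - log Q(x_new; x); it is 0 for symmetric proposals.
    """
    samples, x, lp = [], x0, log_target(x0)
    for _ in range(n_steps):
        x_new, log_q_ratio = propose(x, rng)
        lp_new = log_target(x_new)
        log_alpha = lp_new - lp + log_q_ratio  # log of the ratio in (5)
        if np.log(rng.uniform()) < log_alpha:  # accept with prob min(1, alpha)
            x, lp = x_new, lp_new
        samples.append(x)
    return samples
```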
57 2.4 Example: A 2D Pose Estimation Problem To illustrate this method, we present a simple example from pose estimation. [sent-90, score-0.077]
58 We observe an image of this shape which has undergone a rotation that is to be estimated. [sent-92, score-0.178]
59 This rotated shape is shown at right in the figure, along with 6 noisy measurements $u_k$ on the feature points. [sent-93, score-0.252]
60 In Figure 2 at left we show the posterior distribution over the rotation parameter, given the measurements from Figure 1 and with known correspondence. [sent-94, score-0.427]
61 In the case of unknown correspondence, the posterior conditioned on the data alone is shown at right in Figure 2 and is a mixture of $6! = 720$ functions of the form (3), with 6 equally likely modes induced by the symmetry of the model shape. [sent-96, score-0.215] [sent-97, score-0.053]
63 In order to perform MCMC sampling, we implement the proposal step by choosing randomly between two strategies. [sent-98, score-0.068]
64 (a) In a "small perturbation" we keep the correspondence assignment $J$ but add a small amount of noise to $\theta$. [sent-99, score-0.379]
65 This serves to explore the values of $\theta$ within a mode of the posterior probability. [sent-100, score-0.126]
66 (b) The second strategy alters the correspondence assignment $J$ itself, which provides a way to jump between probability modes. [sent-102, score-0.049]
67 Note that $Q(X^{(r)}; X')/Q(X'; X^{(r)}) = 1$ for this proposal density. [sent-103, score-0.068]
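In code, the two strategies might be combined into a single mixture proposal as sketched below; the swap move in (b) is one simple symmetric way to alter J, not necessarily the paper's exact move:

```python
import numpy as np

def propose(state, rng, step=0.05):
    """Mixture proposal for the 2D example: perturb theta or alter J."""
    theta, J = state
    if rng.uniform() < 0.5:
        # (a) small perturbation: keep J, jitter the rotation angle theta
        return (theta + step * rng.standard_normal(), list(J)), 0.0
    # (b) alter the assignment: swap two randomly chosen entries of J
    J_new = list(J)
    k, l = rng.choice(len(J_new), size=2, replace=False)
    J_new[k], J_new[l] = J_new[l], J_new[k]
    return (theta, J_new), 0.0  # swap is symmetric, so the Q-ratio is 1
```

Plugged into the Metropolis-Hastings sketch above, with a log_target built from (2)-(3), discarding the J component of each sample yields the histogram over the rotation angle.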
68 The result of the sampling procedure is shown as a histogram of the rotation parameter in Figure 3. [sent-104, score-0.306]
69 The histogram is a non-parametric approximation to the analytic posterior shown in Figure 2. [sent-105, score-0.172]
70 The figure shows the results of running a sampler for 100,000 steps, the first 1,000 of which were discarded as a burn-in transient. [sent-106, score-0.136]
71 Note that even for this simple example, there is still considerable correlation in the samples. [sent-107, score-0.102]
72 Figure 2: (Left) The posterior distribution over rotation $\theta$ with known correspondence, and (Right) with unknown correspondence, a mixture with 720 components. [sent-109, score-0.319]
73 3 Maximum a Posteriori Estimation using MCEM As illustrated above, sampling from the joint probability over assignments $J$ and parameters $\theta$ using MCMC can be very expensive. [sent-113, score-0.319]
74 However, if only a maximum a posteriori (MAP) estimate is needed, sampling over the joint space can be avoided by means of the EM algorithm. [sent-114, score-0.269]
75 This is intractable in general because of the combinatorial number of terms. [sent-116, score-0.051]
76 The EM algorithm provides a tractable alternative to maximizing $P(\theta|U)$, using the correspondence $J$ as a hidden variable [11]. [sent-117, score-0.345]
77 Although the images were originally taken as a sequence in time, the ordering of the images is irrelevant to our method. [sent-123, score-0.178]
78 Figure 5: Starting from random structure (t=0) we recover gross 3D structure in the very first iteration (t=1). [sent-129, score-0.288]
79 As the annealing parameter $\sigma$ is gradually decreased, successively finer details are resolved (iterations 1, 10, 20, and 100 are shown). [sent-130, score-0.368]
80 The sampling proceeds as in the previous section, using the Metropolis-Hastings algorithm, but now with the parameter fixed at $\theta = \theta^t$. [sent-132, score-0.149]
81 Note that at each iteration the estimate $\theta^t$ changes and we sample from a different posterior distribution $P(J|U, \theta^t)$. [sent-133, score-0.266]
82 In practice it is important to add annealing to this basic EM scheme, to avoid getting stuck in local minima. [sent-134, score-0.298]
83 In simulated annealing we artificially increase the noise parameter $\sigma$ for the early iterations, gradually decreasing it to its correct value. [sent-135, score-0.285]
84 First, the posterior distribution $P(J|U, \theta^t)$ is less peaked when $\sigma$ is high, allowing the MCMC sampler to explore the space of assignments $J$ more easily. [sent-137, score-0.44]
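In outline, the annealed MCEM loop might look as follows; the geometric schedule and step counts are illustrative, and sample_assignments and maximize_expected_log_posterior are placeholders for the MCMC E-step and the M-step maximization, not the paper's code:

```python
def mcem(theta0, U, n_iters=100, sigma_hi=1.0, sigma_lo=0.01):
    """Annealed Monte Carlo EM (sketch under the assumptions above)."""
    theta = theta0
    for t in range(n_iters):
        # Annealing schedule: decrease sigma geometrically from high to low,
        # flattening P(J | U, theta_t) early on so the sampler crosses modes.
        sigma = sigma_hi * (sigma_lo / sigma_hi) ** (t / (n_iters - 1))
        # Monte Carlo E-step: sample assignments J ~ P(J | U, theta_t)
        J_samples = sample_assignments(U, theta, sigma, n_steps=10000)
        # M-step: maximize the expected log-posterior over the sampled J's
        theta = maximize_expected_log_posterior(U, J_samples, sigma)
    return theta
```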
85 The input data in this experiment consisted of 55 manually selected measurements in each of 11 input images, three of which are shown in Figure 4. [sent-140, score-0.149]
86 Note that features are not tracked from frame to frame and the images can be presented in arbitrary order. [sent-141, score-0.321]
87 To initialize, the 11 cameras $m_i$ are all placed at the origin, looking towards the 55 model points $x_j$, which are themselves normally distributed at unit distance from the cameras. [sent-142, score-0.198]
88 The EM algorithm was run for 100 iterations, and the sampler for 10,000 steps per image. [sent-144, score-0.199]
89 The algorithm converges consistently and quickly to an estimate of the structure and motion for which the correct correspondence is the most probable one, and for which all assignments in the different images agree with each other. [sent-146, score-0.786]
90 A typical run of the algorithm is illustrated in Figure 5, which shows a wireframe model of the recovered structure at several points during the run. [sent-147, score-0.186]
91 There are two important points to note: (a) the gross structure is recovered in the very first iteration, starting from random initial structure, and (b) finer details of the structure are gradually resolved as the annealing parameter $\sigma$ is decreased. [sent-148, score-0.628]
92 The estimate for the structure after convergence is almost identical to the one found by the factorization method [1] when the latter is provided with the correct correspondence. [sent-149, score-0.151]
93 5 Conclusions and Future Directions In this paper we have presented a theoretically sound method to deal with ambiguous feature correspondence, and have shown how Markov chain Monte Carlo sampling can be used to obtain practical algorithms. [sent-150, score-0.236]
94 We have detailed this for two cases: (1) obtaining a posterior distribution over the parameters $\theta$, and (2) obtaining a MAP estimate by means of EM. [sent-151, score-0.275]
95 In future work, we would like to apply these methods in other domains where data association plays a central role. [sent-152, score-0.157]
96 In particular, in the highly active area of mobile robot mapping, the data association problem is currently a major obstacle to building large-scale maps [12, 13]. [sent-153, score-0.235]
97 We conjecture that our approach is equally applicable to the robotic mapping problem, and can lead to qualitatively new solutions in that domain. [sent-154, score-0.187]
98 Shape and motion from image streams under orthography: a factorization method. [sent-158, score-0.132]
99 Maintaining multiple motion model hypotheses over many views to recover matching and structure. [sent-184, score-0.217]
100 An algorithm for associating the features of two images. [sent-193, score-0.061]
wordName wordTfidf (topN-words)
[('correspondence', 0.345), ('mcmc', 0.296), ('iu', 0.251), ('sfm', 0.221), ('uik', 0.221), ('annealing', 0.163), ('oiu', 0.158), ('measurements', 0.149), ('assignments', 0.137), ('sampler', 0.136), ('camera', 0.131), ('qt', 0.128), ('jiu', 0.126), ('posterior', 0.126), ('association', 0.124), ('sampling', 0.116), ('rotation', 0.111), ('measurement', 0.108), ('em', 0.107), ('carlo', 0.098), ('monte', 0.098), ('motion', 0.098), ('stuck', 0.095), ('images', 0.089), ('vision', 0.089), ('ot', 0.086), ('chain', 0.084), ('structure', 0.083), ('pose', 0.077), ('mobile', 0.074), ('proposal', 0.068), ('recover', 0.068), ('xj', 0.068), ('shape', 0.067), ('joint', 0.066), ('sample', 0.065), ('run', 0.063), ('dellaert', 0.063), ('finer', 0.063), ('jik', 0.063), ('mcem', 0.063), ('meme', 0.063), ('oth', 0.063), ('thorpe', 0.063), ('torr', 0.063), ('tracked', 0.061), ('features', 0.061), ('map', 0.06), ('propose', 0.06), ('mi', 0.056), ('gradually', 0.055), ('frame', 0.055), ('resolved', 0.054), ('cameras', 0.054), ('correspondences', 0.054), ('gross', 0.054), ('seitz', 0.054), ('equally', 0.053), ('posteriori', 0.053), ('combinatorial', 0.051), ('matching', 0.051), ('bayesian', 0.051), ('establishing', 0.049), ('jump', 0.049), ('viewpoints', 0.049), ('scene', 0.048), ('conditioned', 0.048), ('normally', 0.048), ('histogram', 0.046), ('markov', 0.046), ('approach', 0.046), ('permutations', 0.046), ('mathematically', 0.046), ('conjecture', 0.046), ('logp', 0.043), ('computer', 0.043), ('applicable', 0.042), ('unknown', 0.041), ('distribution', 0.041), ('cope', 0.04), ('getting', 0.04), ('recovery', 0.04), ('ki', 0.04), ('points', 0.04), ('existing', 0.039), ('june', 0.039), ('iterations', 0.038), ('projection', 0.037), ('obtaining', 0.037), ('law', 0.037), ('considerable', 0.037), ('robot', 0.037), ('feature', 0.036), ('interested', 0.036), ('noise', 0.034), ('estimate', 0.034), ('factorization', 0.034), ('parameter', 0.033), ('maximize', 0.033), ('domains', 0.033)]
simIndex simValue paperId paperTitle
same-paper 1 1.000001 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
Author: Frank Dellaert, Steven M. Seitz, Sebastian Thrun, Charles E. Thorpe
Abstract: When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead considers the distribution over all possible correspondences.
2 0.12179638 82 nips-2000-Learning and Tracking Cyclic Human Motion
Author: Dirk Ormoneit, Hedvig Sidenbladh, Michael J. Black, Trevor Hastie
Abstract: We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into
3 0.11747338 117 nips-2000-Shape Context: A New Descriptor for Shape Matching and Object Recognition
Author: Serge Belongie, Jitendra Malik, Jan Puzicha
Abstract: We develop an approach to object recognition based on matching shapes and using a resulting measure of similarity in a nearest neighbor classifier. The key algorithmic problem here is that of finding pointwise correspondences between an image shape and a stored prototype shape. We introduce a new shape descriptor, the shape context, which makes this possible, using a simple and robust algorithm. The shape context at a point captures the distribution over relative positions of other shape points and thus summarizes global shape in a rich, local descriptor. We demonstrate that shape contexts greatly simplify recovery of correspondences between points of two given shapes. Once shapes are aligned, shape contexts are used to define a robust score for measuring shape similarity. We have used this score in a nearest-neighbor classifier for recognition of hand written digits as well as 3D objects, using exactly the same distance function. On the benchmark MNIST dataset of handwritten digits, this yields an error rate of 0.63%, outperforming other published techniques. 1
4 0.11065458 133 nips-2000-The Kernel Gibbs Sampler
Author: Thore Graepel, Ralf Herbrich
Abstract: We present an algorithm that samples the hypothesis space of kernel classifiers. Given a uniform prior over normalised weight vectors and a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise. 1
5 0.095446043 137 nips-2000-The Unscented Particle Filter
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
6 0.091097362 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
7 0.078890413 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
8 0.078695774 135 nips-2000-The Manhattan World Assumption: Regularities in Scene Statistics which Enable Bayesian Inference
9 0.078061558 41 nips-2000-Discovering Hidden Variables: A Structure-Based Approach
10 0.077680916 75 nips-2000-Large Scale Bayes Point Machines
11 0.074696802 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
12 0.071301185 38 nips-2000-Data Clustering by Markovian Relaxation and the Information Bottleneck Method
13 0.069809221 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks
14 0.063105188 83 nips-2000-Machine Learning for Video-Based Rendering
15 0.062412959 122 nips-2000-Sparse Representation for Gaussian Process Models
16 0.062270533 80 nips-2000-Learning Switching Linear Models of Human Motion
17 0.061128706 78 nips-2000-Learning Joint Statistical Models for Audio-Visual Fusion and Segregation
18 0.060755491 2 nips-2000-A Comparison of Image Processing Techniques for Visual Speech Recognition Applications
19 0.059801936 31 nips-2000-Beyond Maximum Likelihood and Density Estimation: A Sample-Based Criterion for Unsupervised Learning of Complex Models
20 0.058125213 65 nips-2000-Higher-Order Statistical Properties Arising from the Non-Stationarity of Natural Signals
topicId topicWeight
[(0, 0.22), (1, -0.03), (2, 0.117), (3, 0.063), (4, -0.006), (5, 0.029), (6, 0.101), (7, 0.051), (8, -0.014), (9, -0.021), (10, 0.118), (11, 0.08), (12, 0.121), (13, -0.074), (14, -0.149), (15, -0.123), (16, -0.013), (17, -0.032), (18, -0.039), (19, 0.014), (20, -0.12), (21, -0.037), (22, -0.032), (23, 0.015), (24, -0.059), (25, 0.081), (26, -0.026), (27, 0.071), (28, -0.059), (29, 0.126), (30, 0.071), (31, -0.098), (32, 0.035), (33, 0.01), (34, 0.023), (35, -0.036), (36, -0.083), (37, 0.148), (38, -0.01), (39, 0.076), (40, 0.247), (41, -0.009), (42, 0.024), (43, -0.174), (44, -0.067), (45, 0.036), (46, 0.037), (47, -0.161), (48, 0.074), (49, -0.107)]
simIndex simValue paperId paperTitle
same-paper 1 0.94652092 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
Author: Frank Dellaert, Steven M. Seitz, Sebastian Thrun, Charles E. Thorpe
Abstract: When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead considers the distribution over all possible correspondences.
2 0.63407618 117 nips-2000-Shape Context: A New Descriptor for Shape Matching and Object Recognition
Author: Serge Belongie, Jitendra Malik, Jan Puzicha
Abstract: We develop an approach to object recognition based on matching shapes and using a resulting measure of similarity in a nearest neighbor classifier. The key algorithmic problem here is that of finding pointwise correspondences between an image shape and a stored prototype shape. We introduce a new shape descriptor, the shape context, which makes this possible, using a simple and robust algorithm. The shape context at a point captures the distribution over relative positions of other shape points and thus summarizes global shape in a rich, local descriptor. We demonstrate that shape contexts greatly simplify recovery of correspondences between points of two given shapes. Once shapes are aligned, shape contexts are used to define a robust score for measuring shape similarity. We have used this score in a nearest-neighbor classifier for recognition of hand written digits as well as 3D objects, using exactly the same distance function. On the benchmark MNIST dataset of handwritten digits, this yields an error rate of 0.63%, outperforming other published techniques. 1
3 0.52530015 135 nips-2000-The Manhattan World Assumption: Regularities in Scene Statistics which Enable Bayesian Inference
Author: James M. Coughlan, Alan L. Yuille
Abstract: Preliminary work by the authors made use of the so-called
4 0.49943057 82 nips-2000-Learning and Tracking Cyclic Human Motion
Author: Dirk Ormoneit, Hedvig Sidenbladh, Michael J. Black, Trevor Hastie
Abstract: We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into
5 0.400424 133 nips-2000-The Kernel Gibbs Sampler
Author: Thore Graepel, Ralf Herbrich
Abstract: We present an algorithm that samples the hypothesis space of kernel classifiers. Given a uniform prior over normalised weight vectors and a likelihood based on a model of label noise leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise. 1
6 0.39742544 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
7 0.33984974 65 nips-2000-Higher-Order Statistical Properties Arising from the Non-Stationarity of Natural Signals
8 0.3237229 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks
9 0.32030439 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
10 0.31869027 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
11 0.31575209 137 nips-2000-The Unscented Particle Filter
12 0.3156741 80 nips-2000-Learning Switching Linear Models of Human Motion
13 0.31541118 29 nips-2000-Bayes Networks on Ice: Robotic Search for Antarctic Meteorites
14 0.31456795 75 nips-2000-Large Scale Bayes Point Machines
15 0.31105864 60 nips-2000-Gaussianization
16 0.31038669 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice
17 0.30295548 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation
18 0.30113044 83 nips-2000-Machine Learning for Video-Based Rendering
19 0.29707247 57 nips-2000-Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm
20 0.29035801 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks
topicId topicWeight
[(10, 0.024), (17, 0.11), (32, 0.455), (33, 0.038), (55, 0.03), (62, 0.036), (65, 0.022), (67, 0.047), (76, 0.021), (79, 0.029), (81, 0.025), (90, 0.028), (97, 0.029)]
simIndex simValue paperId paperTitle
same-paper 1 0.90127307 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
Author: Frank Dellaert, Steven M. Seitz, Sebastian Thrun, Charles E. Thorpe
Abstract: When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead considers the distribution over all possible correspondences.
2 0.81345439 145 nips-2000-Weak Learners and Improved Rates of Convergence in Boosting
Author: Shie Mannor, Ron Meir
Abstract: The problem of constructing weak classifiers for boosting algorithms is studied. We present an algorithm that produces a linear classifier that is guaranteed to achieve an error better than random guessing for any distribution on the data. While this weak learner is not useful for learning in general, we show that under reasonable conditions on the distribution it yields an effective weak learner for one-dimensional problems. Preliminary simulations suggest that similar behavior can be expected in higher dimensions, a result which is corroborated by some recent theoretical bounds. Additionally, we provide improved convergence rate bounds for the generalization error in situations where the empirical error can be made small, which is exactly the situation that occurs if weak learners with guaranteed performance that is better than random guessing can be established. 1
3 0.42052901 60 nips-2000-Gaussianization
Author: Scott Saobing Chen, Ramesh A. Gopinath
Abstract: High dimensional data modeling is difficult mainly because the so-called
4 0.40944049 79 nips-2000-Learning Segmentation by Random Walks
Author: Marina Meila, Jianbo Shi
Abstract: We present a new view of image segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This interpretation shows that spectral methods for clustering and segmentation have a probabilistic foundation. In particular, we prove that the Normalized Cut method arises naturally from our framework. Finally, the framework provides a principled method for learning the similarity function as a combination of features. 1
5 0.39664268 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
Author: Zoubin Ghahramani, Matthew J. Beal
Abstract: Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set. 1
6 0.39576018 133 nips-2000-The Kernel Gibbs Sampler
7 0.37780887 122 nips-2000-Sparse Representation for Gaussian Process Models
8 0.37330788 74 nips-2000-Kernel Expansions with Unlabeled Examples
9 0.37325001 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
10 0.37285519 37 nips-2000-Convergence of Large Margin Separable Linear Classification
11 0.37196597 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
12 0.36563799 75 nips-2000-Large Scale Bayes Point Machines
13 0.3652606 119 nips-2000-Some New Bounds on the Generalization Error of Combined Classifiers
14 0.36328995 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks
15 0.36076057 22 nips-2000-Algorithms for Non-negative Matrix Factorization
16 0.35926601 94 nips-2000-On Reversing Jensen's Inequality
17 0.35911113 127 nips-2000-Structure Learning in Human Causal Induction
18 0.35856849 83 nips-2000-Machine Learning for Video-Based Rendering
19 0.35757321 80 nips-2000-Learning Switching Linear Models of Human Motion
20 0.35465088 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition