nips nips2007 nips2007-50 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Leonid Sigal, Alexandru Balan, Michael J. Black
Abstract: Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. These predicted pose and shape parameters are used to initialize a generative model for more detailed pose and shape estimation. The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments.
Reference: text
sentIndex sentText sentNum sentScore
1 Combined discriminative and generative articulated pose and non-rigid shape estimation. Leonid Sigal, Alexandru Balan, Michael J. Black. [sent-1, score-1.409]
2 Abstract: Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. [sent-4, score-0.837]
3 Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. [sent-5, score-0.308]
4 Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. [sent-6, score-0.697]
5 In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. [sent-7, score-1.332]
6 Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. [sent-8, score-0.594]
7 We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. [sent-9, score-0.428]
8 These predicted pose and shape parameters are used to initialize a generative model for more detailed pose and shape estimation. [sent-10, score-1.809]
9 The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. [sent-11, score-0.998]
10 Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments. [sent-12, score-0.769]
11 1 Introduction We address the problem of marker-less articulated pose and shape estimation of the human body from images using a detailed parametric body model [3]. [sent-16, score-1.83]
12 Most prior work on marker-less pose estimation and tracking has concentrated on the use of generative Bayesian methods [8, 15] that exploit crude models of body shape (e.g. collections of cylinders). [sent-17, score-1.301]
13 We argue that a richer representation of shape is needed to make future strides in building better generative models. [sent-20, score-0.503]
14 Discriminative methods [1, 2, 10, 13, 16, 17], introduced more recently for the pose estimation task, do not address estimation of the body shape; in fact, they are specifically designed to be invariant to body shape variations. [sent-21, score-1.582]
15 Any real-world system must be able to estimate both body shape and pose simultaneously. [sent-22, score-1.117]
16 Discriminative approaches to pose estimation attempt to learn a direct mapping from image features to 3D pose from either a single image [1, 14, 17] or multiple approximately calibrated views [9]. [sent-23, score-1.255]
17 These approaches tend to use silhouettes [1, 9, 14] and sometimes edges [16, 17] as image features and learn a probabilistic mapping in the form of Nearest Neighbor (NN) search, regression [1], a mixture of regressors [2], a mixture of Bayesian experts [17], or specialized mappings [14]. [sent-24, score-0.32]
18 More importantly, they currently do not address estimation of the body shape itself. [sent-26, score-0.731]
19 Body shape estimation (independent of the pose) has many applications in biometric authentication and consumer application domains. [sent-27, score-0.591]
20 Simplified models of body shape have a long history in computer vision and provide a relatively low dimensional description of the human form. [sent-28, score-0.748]
21 More detailed triangulated mesh models obtained from laser range scans have been viewed as too high dimensional for vision applications. [sent-29, score-0.323]
22 The SCAPE model captures correlated deformations of the body shape due to the identity of the person, as well as non-rigid muscle deformations due to articulation. [sent-32, score-1.052]
23 This model has been shown to allow tractable estimation of parameters from multi-view silhouette image features [5, 11] and from monocular images in scenes with point lights and cast shadows [4]. [sent-33, score-0.67]
24 In [5] the SCAPE model is projected into multiple calibrated images and an iterative importance sampling method is used for inference of the pose and shape that best explain the observed silhouettes. [sent-34, score-0.97]
25 Alternatively, in [11] visual hulls are constructed from many silhouette images and the Iterative Closest Point (ICP) algorithm is used to extract the pose by registering the volumetric features with SCAPE. [sent-35, score-0.807]
26 In this paper we substitute discriminative articulated pose and shape estimation in place of manual initialization. [sent-37, score-1.346]
27 In doing so, we extend the current models for discriminative pose estimation to deal with the estimation of shape, and couple the discriminative and generative methods for more robust combined estimation. [sent-38, score-1.096]
28 The few combined discriminative and generative pose estimation methods that exist [16] typically require temporal image data and do not address the shape estimation problem. [sent-39, score-1.343]
29 For discriminative pose and shape estimation we use a Mixture of Experts model, with kernel linear regression as experts, to learn a direct probabilistic mapping between monocular silhouette contour features and the SCAPE parameters. [sent-40, score-1.639]
30 To our knowledge this is the first work that has attempted to recover the 3D shape of the human body directly from a monocular image. [sent-41, score-0.926]
31 For generative optimization we make use of the method proposed in [5] where the silhouettes are predicted in multiple views given the pose and shape parameters of the SCAPE model and are compared to the observed silhouettes using a Chamfer distance measure. [sent-43, score-1.133]
32 For training data we use the SCAPE model to generate pairs of 3D body shapes and projected image silhouettes. [sent-44, score-0.363]
33 We are able to predict pose, shape, and simple biometric measurements for the subjects from images captured by 4 synchronized cameras. [sent-46, score-0.3]
34 We also show results for 3D shape estimation from monocular images. [sent-47, score-0.612]
35 The contributions of this paper are twofold: (1) we formulate a discriminative model for estimating the pose and shape directly from monocular image features, and (2) we couple this discriminative method with a generative stochastic optimization for detailed estimation of pose and shape. [sent-48, score-2.101]
36 2 SCAPE Body Model In this section we briefly introduce the SCAPE body model; for details the reader is referred to [3]. [sent-49, score-0.279]
37 The SCAPE model is defined by a set of parameterized deformations that are applied to a reference mesh that consists of T triangles {∆x_t | t ∈ [1, ..., T]}. [sent-51, score-0.295]
38 Each of the triangles in the reference mesh is defined by three vertices in 3D space, (v_{t,1}, v_{t,2}, v_{t,3}), and has an associated body part index p_t ∈ [1, ..., P]. [sent-55, score-0.479]
39 We work with a model that has P = 15 body parts, corresponding to the torso, pelvis, head, and 3 segments for each of the upper and lower extremities. [sent-58, score-0.279]
40 Estimating the shape and articulated pose of the body amounts to estimating the parameters, Y, of the deformations required to produce the mesh {∆y_t | t ∈ [1, ..., T]}. [sent-60, score-1.56]
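The deformation is applied triangle by triangle. As a schematic illustration only (the full formulation, including how the deformation matrices are learned and predicted, is in [3]), each reference triangle's edge vectors can be transformed by a rigid rotation of its body part composed with shape- and pose-dependent 3x3 deformation matrices; the array shapes, names, and composition order below are our assumptions:

```python
import numpy as np

def deform_triangles(dx, part_idx, R_part, S, Q):
    """Schematic SCAPE-style deformation of reference triangle edges.

    dx       : (T, 3, 2) two edge vectors (columns) per reference triangle
    part_idx : (T,) body part index p_t for each triangle
    R_part   : (P, 3, 3) rigid rotation per body part (articulated pose)
    S, Q     : (T, 3, 3) per-triangle shape and pose-dependent non-rigid
               deformation matrices (functions of the parameters Y)
    Returns the deformed edges {dy_t} with shape (T, 3, 2).
    """
    dy = np.empty_like(dx)
    for t in range(dx.shape[0]):
        # deform the edges, then rotate rigidly with the triangle's part
        dy[t] = R_part[part_idx[t]] @ S[t] @ Q[t] @ dx[t]
    return dy
```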
41 Figure 1 caption: Radial Distance Function (RDF) encoding of the silhouette contour is illustrated in (a); Shape Context (SC) encoding of a contour sample point in (b). [sent-66, score-0.432]
42 3 Features In this work we make use of silhouette features for both discriminative and generative estimation of pose and shape. [sent-73, score-1.116]
43 Silhouettes are commonly used for human pose estimation [1, 2, 13, 15, 17]; while limited in their representational power, they are easy to estimate from images and fast to synthesize from a mesh model. [sent-74, score-0.835]
44 The use of such richer feature representations will likely improve both discriminative and generative estimation. [sent-78, score-0.314]
45 At every sampled boundary point the shape context descriptor is parameterized by the number of orientation bins, φ, the number of radial-distance bins, r, and the minimum and maximum radial distances, denoted by r_in and r_out respectively. [sent-81, score-0.559]
46 As in [1] we achieve scale invariance by making r_out a function of the overall silhouette height and normalizing each shape context histogram by the sum over all histogram bins. [sent-82, score-0.722]
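As a concrete illustration, a minimal log-polar shape context histogram for one contour sample point is sketched below. The log spacing of the radial bins and the hard binning are standard shape-context choices assumed here; per the text, r_out should be set from the silhouette height.

```python
import numpy as np

def shape_context(points, center, phi=12, r_bins=5, r_in=3.0, r_out=100.0):
    """Normalized log-polar histogram of contour `points` (N, 2) relative
    to one sample point `center` (2,). Parameter defaults follow the
    values quoted in the text; r_out = kappa * h in practice."""
    d = points - center
    dist = np.hypot(d[:, 0], d[:, 1])
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    keep = (dist >= r_in) & (dist < r_out)      # also drops the point itself
    # log-spaced radial bin edges between r_in and r_out (an assumption)
    r_edges = np.logspace(np.log10(r_in), np.log10(r_out), r_bins + 1)
    ri = np.digitize(dist[keep], r_edges) - 1
    ai = np.minimum((ang[keep] / (2 * np.pi) * phi).astype(int), phi - 1)
    hist = np.zeros((r_bins, phi))
    np.add.at(hist, (ri, ai), 1.0)
    return hist.ravel() / max(hist.sum(), 1.0)  # normalize over all bins
```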
47 In this bag-of-words style model, the shape context space is vector quantized into a set of K clusters (a.k.a. the codebook). [sent-86, score-0.393]
48 The K = 100 center codebook is learned by running k-means clustering on the combined set of shape context vectors obtained from the large set of training silhouettes. [sent-90, score-0.438]
49 The resulting codebook shape context representation is translation and scale invariant by definition. [sent-94, score-0.411]
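A minimal sketch of the codebook construction and the resulting bag-of-words silhouette descriptor, using scikit-learn's k-means; hard assignment of each SC vector to its nearest codeword is an assumption (the text does not specify hard versus soft voting):

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(sc_vectors, K=100):
    """k-means codebook over shape context vectors pooled from the
    training silhouettes; K = 100 centers as in the text."""
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit(sc_vectors)

def bag_of_words(sc_vectors, codebook):
    """Quantize each SC vector of one silhouette to its nearest codeword
    and return the normalized K-bin occurrence histogram."""
    labels = codebook.predict(sc_vectors)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```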
50 Following the prior work [1, 13] we let φ = 12, r = 5, r_in = 3, and r_out = κh, where h is the height of the silhouette and κ is typically 1, ensuring integration of contour points over regions roughly similar to the limb size [1]. [sent-95, score-0.478]
51 For shape estimation, we found it beneficial to combine features across multiple spatial scales (e.g. multiple values of κ). [sent-96, score-0.411]
52 The RDF descriptor is the vector {||p_1 − p_c||, ..., ||p_N − p_c||}, where p_c ∈ R^2 is the centroid of the image silhouette and p_i is a point on the silhouette outer contour; hence ||p_i − p_c|| ∈ R measures the maximal object extent in the direction denoted by i from the centroid. [sent-106, score-0.472]
53 We explicitly ensure that the dimensionality of the RDF descriptor is comparable to that of the shape context descriptor introduced above. [sent-108, score-0.43]
54 Unlike the shape context descriptor, the RDF feature vector is neither scale nor translation invariant. [sent-109, score-0.37]
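A sketch of the RDF computation consistent with the description above. Sampling the N directions by angular binning around the centroid, and approximating the silhouette centroid by the contour mean, are our assumptions:

```python
import numpy as np

def rdf_descriptor(contour, N=100):
    """Radial Distance Function features {||p_i - p_c||, i = 1..N}.

    contour : (M, 2) points on the silhouette outer contour.
    The descriptor is deliberately neither scale nor translation
    invariant (absolute distances from the centroid are kept).
    """
    pc = contour.mean(axis=0)                 # stand-in for the centroid
    d = contour - pc
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    dist = np.hypot(d[:, 0], d[:, 1])
    feat = np.zeros(N)
    bins = (ang / (2 * np.pi) * N).astype(int).clip(0, N - 1)
    np.maximum.at(feat, bins, dist)           # maximal extent per direction
    return feat
```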
55 4 Discriminative estimation of pose and shape To produce initial estimates for the body pose and/or shape in 3D from image features, we need to model the conditional distribution p(Y|X) of the 3D body state Y given the set of 2D features X. [sent-111, score-2.415]
56 For articulated pose and shape we experimented with both RDF and SC features (estimating the global position requires RDF features, since SC is location and scale invariant). [sent-145, score-1.126]
57 SC features tend to work better for pose estimation, whereas RDF features perform better for shape estimation. [sent-146, score-1.002]
58 In cases where calibration is unavailable, we estimate the shape using p(ν|X_sc), which tends to produce reasonable results but cannot estimate the overall height. [sent-148, score-0.432]
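The prediction side of such a Mixture-of-Experts model is easy to sketch: gates weigh the outputs of linear experts applied to the (kernel-mapped) features. The softmax gate form and the omission of the EM-style training are our simplifications:

```python
import numpy as np

def softmax_gates(V, x):
    """Gating network with g_k(x) proportional to exp(v_k . x); V is (K, D)."""
    a = V @ x
    a -= a.max()                               # numerical stability
    e = np.exp(a)
    return e / e.sum()

def moe_mean(x, V, experts):
    """Conditional mean E[Y|X=x] = sum_k g_k(x) (W_k x + b_k) of a
    mixture of linear-regression experts; experts is a list of (W, b)."""
    g = softmax_gates(V, x)
    preds = np.stack([W @ x + b for W, b in experts])   # (K, dim(Y))
    return g @ preds
```

For the generative stage below, one would sample from the mixture (pick an expert k with probability g_k(x), then sample around its prediction) rather than take the conditional mean.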
59 5 Generative stochastic optimization of pose and shape Generative stochastic state estimation, as in [5], is handled within an iterative importance sampling framework [8]. [sent-150, score-0.865]
60 Here we make use of the discriminative pose and shape estimate from Section 4 to give us the initial distribution for the posterior. [sent-159, score-1.019]
61 In particular, given the discriminative model for the shape, p(ν|X), position, p(τ|X), and articulated pose, p(θ|X), of the body, we can let (with slight abuse of notation) y_i^(0) ∼ [p(τ|X), p(θ|X), p(ν|X)] and π_i^(0) = 1/N for i ∈ [1, ..., N]. [sent-160, score-0.412]
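In code, the initialization of the particle set from the discriminative posteriors could look like this minimal sketch; the sampler callables and the particle count are placeholders:

```python
import numpy as np

def init_particles(sample_tau, sample_theta, sample_nu, N):
    """Draw y_i^(0) ~ [p(tau|X), p(theta|X), p(nu|X)] with uniform
    weights pi_i^(0) = 1/N. Each sample_* callable draws one sample
    from the corresponding MoE posterior (pick a gate, then sample
    the chosen expert's prediction)."""
    Y = np.stack([np.concatenate([sample_tau(), sample_theta(), sample_nu()])
                  for _ in range(N)])
    return Y, np.full(N, 1.0 / N)
```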
62 For a given camera view, a foreground silhouette is computed using a shadow-suppressing background subtraction procedure and is compared to the silhouette obtained by projecting the SCAPE model subject to the hypothesized state into the image plane (given calibration parameters of the camera). [sent-166, score-0.673]
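The Chamfer distance measure mentioned earlier is commonly implemented with a distance transform, as sketched below; making the cost symmetric and turning it into a likelihood via exp(-cost/T) are assumptions, not details given in the text:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(observed, predicted):
    """Symmetric silhouette Chamfer cost between two boolean masks:
    mean distance from each foreground pixel of one mask to the
    nearest foreground pixel of the other, averaged both ways."""
    def one_way(a, b):
        dt = distance_transform_edt(~b)   # distance to nearest pixel of b
        return dt[a].mean() if a.any() else np.inf
    return 0.5 * (one_way(observed, predicted) + one_way(predicted, observed))

def silhouette_likelihood(observed, predicted, temperature=1.0):
    return np.exp(-chamfer_cost(observed, predicted) / temperature)
```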
63 The training dataset, used to learn the discriminative MoE models and the SC codeword dictionary, was generated by synthesizing 3000 silhouette images obtained by projecting the corresponding SCAPE body models into the image plane using the calibration parameters of the camera. [sent-171, score-0.878]
64 The SCAPE body models, in turn, were generated by randomly sampling the pose from a database of motion capture data (consisting of generally non-cyclic random motions) and the body shape coefficients from a uniform distribution centered at the mean shape. [sent-172, score-1.444]
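Schematically, the training set generation might look as follows; scape_mesh, project_silhouette, and encode_features are hypothetical stand-ins for the SCAPE model, the camera projection, and the Section 3 descriptors, and the shape coefficient range and count are illustrative:

```python
import numpy as np

def make_training_set(mocap_poses, scape_mesh, project_silhouette,
                      encode_features, n=3000, n_shape=10, rng=None):
    """Synthesize n (silhouette-feature, SCAPE-parameter) pairs by
    sampling poses from motion capture data and shape coefficients
    from a uniform distribution centered at the mean shape."""
    rng = rng or np.random.default_rng(0)
    X, Y = [], []
    for _ in range(n):
        theta = mocap_poses[rng.integers(len(mocap_poses))]
        nu = rng.uniform(-1.0, 1.0, size=n_shape)   # width is illustrative
        sil = project_silhouette(scape_mesh(theta, nu))
        X.append(encode_features(sil))              # SC codebook or RDF
        Y.append(np.concatenate([theta, nu]))
    return np.stack(X), np.stack(Y)
```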
65 A similar synthetic test dataset was constructed, consisting of 597 silhouette-SCAPE body model pairs. Figure 2 (a)-(c): Discriminative estimation of weight loss. [sent-173, score-0.401]
66 The estimated shape and pose obtained by our discriminative estimation procedure are shown in (c). [sent-176, score-1.101]
67 In the bottom row, we manually rotated the model 90 degrees for better visibility of the shape variation. [sent-177, score-0.37]
68 Our method estimated that the person illustrated in the top row lost 22 lb and that the one illustrated in the bottom row lost 32 lb; the web-reported weight loss for the two subjects was 24 lb and 64 lb, respectively. [sent-179, score-0.293]
69 Also, the bottom example pushes the limits of our current shape model, which was trained using only 10 scans of people, none of them close to the desired body shape. [sent-181, score-0.7]
70 Results of using an MoE model similar to the one introduced here for pose estimation have previously been reported in [2] and [17]. [sent-186, score-0.572]
71 Our experience with the articulated pose estimation was similar and we omit supporting experiments due to lack of space. [sent-187, score-0.756]
72 For discriminative estimation of shape we quantitatively compared SC and RDF features by training two MoE models, p(ν|X_sc) and p(ν|X_rdf), and found the latter to perform better when camera calibration is available (on average we achieve roughly a 19% improvement). [sent-188, score-0.764]
73 We attribute the superior performance of RDF features to their sensitivity to silhouette position and scale, which allows for better estimation of the overall height of the body. [sent-190, score-0.429]
74 Given the shape, we can also estimate the volume of the body and, assuming the constant density of water, compute the weight of the person. [sent-191, score-0.689]
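Concretely, the weight estimate follows from the volume of the recovered closed mesh (signed tetrahedra via the divergence theorem) multiplied by the density of water; a minimal sketch:

```python
import numpy as np

def weight_from_mesh(vertices, faces, density=1000.0):
    """Body weight in kg from a closed, consistently oriented mesh.
    vertices: (V, 3) in meters; faces: (T, 3) vertex indices;
    density defaults to that of water, ~1000 kg/m^3."""
    v = vertices[faces]                                   # (T, 3, 3)
    # sum of signed volumes of tetrahedra (origin, v0, v1, v2)
    vol = np.einsum('ij,ij->i',
                    v[:, 0], np.cross(v[:, 1], v[:, 2])).sum() / 6.0
    return abs(vol) * density                             # * 2.2046 for lb
```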
75 To illustrate this, we estimate the approximate weight loss of a person from monocular uncalibrated images (see Figure 2). [sent-192, score-0.309]
76 In principle, the SCAPE model is not ideal for weight calculations, since non-rigid deformations caused by articulations of the body will result in (unnatural) variations in weight. [sent-194, score-0.386]
77 The weight calculations are, on the other hand, very sensitive to the body shape. [sent-196, score-0.319]
78 Lastly we tested the performance of the combined discriminative and generative framework by estimating articulated pose, shape and biometric measurements for people in our real dataset. [sent-198, score-1.073]
79 Occasionally, our system does produce poor pose and/or shape estimates. [sent-201, score-0.838]
80 Typically these cases can be classified into two categories: (1) minor errors that only affect the pose and are artifacts of local optima, or (2) more significant errors that affect the shape and result from a poor initial distribution over the state produced by the discriminative method. [sent-202, score-1.05]
81 The latter arise as a result of 180-degree view ambiguity and/or pose configuration ambiguities in the silhouettes due to symmetry. [sent-203, score-0.468]
82 Estimates obtained using discriminative-only estimation and discriminative estimation followed by generative shape estimation are reported in the next two columns. [sent-247, score-0.916]
83 The last column reports estimates obtained using the ground truth pose and the mean shape as initialization for the generative fit (this is the algorithm proposed in [5]). [sent-249, score-0.988]
84 Notice that generative estimation significantly refines the discriminative estimates. [sent-250, score-0.365]
85 7 Discussion and Conclusions We have presented a method for automatic estimation of articulated pose and shape of people from images. [sent-252, score-1.171]
86 We found that the discriminative model produced an effective initialization for the generative optimization procedure and that biometric measurements from the recovered shape were comparable to those produced by prior approaches that required manual initialization [5]. [sent-254, score-0.957]
87 We also introduced and addressed the problem of discriminative estimation of shape from monocular calibrated and uncalibrated images. [sent-255, score-0.864]
88 More accurate shape estimates from monocular data will require richer image descriptors. [sent-256, score-0.619]
89 Among these are the use of temporal consistency in discriminative pose (and perhaps shape) estimation [17] and dense image descriptors [10]. [sent-258, score-0.833]
90 In addition, in this work we estimated the shape space of the SCAPE model from only 10 body scans; as a result, the learned shape space is rather limited in its expressive power. [sent-259, score-1.046]
91 Recovering 3D human pose from monocular images, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. [sent-267, score-0.687]
92 Shining a light on human pose: On shadows, shading and the estimation of pose and shape, International Conference on Computer Vision (ICCV), 2007. [sent-291, score-0.609]
93 Detailed human shape and pose from images, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007. [sent-298, score-0.897]
94 Figure 4 (Subject A, Subject B): Visualizing pose and shape estimation. [sent-306, score-0.838]
95 Examples of simultaneous pose and shape estimation for subjects A and B are shown on top and bottom respectively. [sent-307, score-0.961]
96 Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture, CVPR, Vol. [sent-317, score-0.674]
97 Articulated body motion capture by annealed particle filtering, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. [sent-324, score-0.35]
98 Inferring 3D structure with a statistical image-based shape model, IEEE International Conference on Computer Vision (ICCV), pp. [sent-331, score-0.37]
99 Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models, CVPR, 2007. [sent-342, score-0.471]
100 Comparison of silhouette shape descriptors for example-based human pose recovery, IEEE Conference on Automatic Face and Gesture Recognition (FG 2006), pp. [sent-351, score-1.183]
wordName wordTfidf (topN-words)
[('pose', 0.468), ('shape', 0.37), ('body', 0.279), ('scape', 0.247), ('silhouette', 0.242), ('articulated', 0.206), ('discriminative', 0.181), ('zk', 0.18), ('mesh', 0.17), ('monocular', 0.16), ('rdf', 0.154), ('biometric', 0.139), ('generative', 0.102), ('contour', 0.095), ('moe', 0.093), ('pc', 0.086), ('estimation', 0.082), ('silhouettes', 0.081), ('xrdf', 0.077), ('xsc', 0.077), ('camera', 0.069), ('deformations', 0.067), ('arm', 0.065), ('sc', 0.064), ('height', 0.064), ('lb', 0.062), ('calibration', 0.062), ('cvpr', 0.06), ('human', 0.059), ('image', 0.058), ('images', 0.056), ('rpt', 0.054), ('scans', 0.051), ('calibrated', 0.049), ('motion', 0.048), ('initialization', 0.048), ('balan', 0.046), ('rout', 0.046), ('sigal', 0.046), ('gate', 0.046), ('radial', 0.046), ('people', 0.045), ('experts', 0.044), ('descriptors', 0.044), ('mm', 0.043), ('codebook', 0.041), ('subjects', 0.041), ('features', 0.041), ('sminchisescu', 0.04), ('weight', 0.04), ('vision', 0.04), ('manual', 0.039), ('descriptor', 0.038), ('iccv', 0.035), ('agarwal', 0.034), ('regressors', 0.034), ('synchronized', 0.034), ('bins', 0.034), ('std', 0.032), ('detailed', 0.031), ('mixture', 0.031), ('anguelov', 0.031), ('deformation', 0.031), ('icp', 0.031), ('kanaujia', 0.031), ('kg', 0.031), ('rin', 0.031), ('shadows', 0.031), ('triangulated', 0.031), ('artifacts', 0.031), ('views', 0.031), ('richer', 0.031), ('expert', 0.03), ('measurements', 0.03), ('triangles', 0.03), ('parameterized', 0.028), ('learned', 0.027), ('importance', 0.027), ('davis', 0.027), ('motions', 0.027), ('ownership', 0.027), ('registered', 0.027), ('uncalibrated', 0.027), ('dx', 0.027), ('person', 0.026), ('shapes', 0.026), ('yi', 0.025), ('animation', 0.025), ('baysian', 0.025), ('chamfer', 0.025), ('rigid', 0.025), ('histograms', 0.024), ('pn', 0.024), ('recovering', 0.024), ('annealed', 0.023), ('quantized', 0.023), ('projection', 0.023), ('introduced', 0.022), ('span', 0.022), ('diag', 0.022)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000008 50 nips-2007-Combined discriminative and generative articulated pose and non-rigid shape estimation
Author: Leonid Sigal, Alexandru Balan, Michael J. Black
Abstract: Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. These predicted pose and shape parameters are used to initialize a generative model for more detailed pose and shape estimation. The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments.
2 0.1905499 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
Author: Zhengdong Lu, Cristian Sminchisescu, Miguel Á. Carreira-Perpiñán
Abstract: Reliably recovering 3D human pose from monocular video requires models that bias the estimates towards typical human poses and motions. We construct priors for people tracking using the Laplacian Eigenmaps Latent Variable Model (LELVM). LELVM is a recently introduced probabilistic dimensionality reduction model that combines the advantages of latent variable models (a multimodal probability density for latent and observed variables, and globally differentiable nonlinear mappings for reconstruction and dimensionality reduction) with those of spectral manifold learning methods (no local optima, ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension). LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as particle filters. We analyze the performance of a LELVM-based probabilistic sigma point mixture tracker in several real and synthetic human motion sequences and demonstrate that LELVM not only provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements, but also compares favorably with alternative trackers based on PCA or GPLVM priors.
3 0.11483984 56 nips-2007-Configuration Estimates Improve Pedestrian Finding
Author: Duan Tran, David A. Forsyth
Abstract: Fair discriminative pedestrian finders are now available. In fact, these pedestrian finders make most errors on pedestrians in configurations that are uncommon in the training data, for example, mounting a bicycle. This is undesirable. However, the human configuration can itself be estimated discriminatively using structure learning. We demonstrate a pedestrian finder which first finds the most likely human pose in the window using a discriminative procedure trained with structure learning on a small dataset. We then present features (local histogram of oriented gradient and local PCA of gradient) based on that configuration to an SVM classifier. We show, using the INRIA Person dataset, that estimates of configuration significantly improve the accuracy of a discriminative pedestrian finder. 1
4 0.084915482 72 nips-2007-Discriminative Log-Linear Grammars with Latent Variables
Author: Slav Petrov, Dan Klein
Abstract: We demonstrate that log-linear grammars with latent variables can be practically trained using discriminative methods. Central to efficient discriminative training is a hierarchical pruning procedure which allows feature expectations to be efficiently approximated in a gradient-based procedure. We compare L1 and L2 regularization and show that L1 regularization is superior, requiring fewer iterations to converge, and yielding sparser solutions. On full-scale treebank parsing experiments, the discriminative latent models outperform both the comparable generative latent models as well as the discriminative non-latent baselines.
5 0.074241288 113 nips-2007-Learning Visual Attributes
Author: Vittorio Ferrari, Andrew Zisserman
Abstract: We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as ‘red’, ‘striped’, or ‘spotted’. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.
6 0.068708524 109 nips-2007-Kernels on Attributed Pointsets with Applications
7 0.060596488 193 nips-2007-The Distribution Family of Similarity Distances
8 0.060026597 143 nips-2007-Object Recognition by Scene Alignment
9 0.05813254 18 nips-2007-A probabilistic model for generating realistic lip movements from speech
10 0.053442236 65 nips-2007-DIFFRAC: a discriminative and flexible framework for clustering
11 0.051449828 3 nips-2007-A Bayesian Model of Conditioned Perception
12 0.048138522 69 nips-2007-Discriminative Batch Mode Active Learning
13 0.047724217 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images
14 0.047175184 183 nips-2007-Spatial Latent Dirichlet Allocation
15 0.046546511 211 nips-2007-Unsupervised Feature Selection for Accurate Recommendation of High-Dimensional Image Data
16 0.046415098 181 nips-2007-Sparse Overcomplete Latent Variable Decomposition of Counts Data
17 0.045472726 145 nips-2007-On Sparsity and Overcompleteness in Image Models
18 0.045235876 125 nips-2007-Markov Chain Monte Carlo with People
19 0.044053644 106 nips-2007-Invariant Common Spatial Patterns: Alleviating Nonstationarities in Brain-Computer Interfacing
20 0.040994842 182 nips-2007-Sparse deep belief net model for visual area V2
topicId topicWeight
[(0, -0.152), (1, 0.038), (2, -0.012), (3, -0.048), (4, -0.002), (5, 0.115), (6, -0.04), (7, 0.098), (8, -0.036), (9, 0.014), (10, 0.002), (11, 0.058), (12, -0.031), (13, -0.012), (14, -0.051), (15, 0.047), (16, -0.115), (17, 0.035), (18, 0.026), (19, -0.045), (20, -0.026), (21, 0.044), (22, 0.122), (23, 0.07), (24, -0.104), (25, 0.01), (26, -0.13), (27, -0.169), (28, 0.096), (29, 0.003), (30, 0.032), (31, -0.058), (32, -0.003), (33, 0.145), (34, 0.041), (35, 0.03), (36, -0.03), (37, 0.114), (38, -0.115), (39, -0.058), (40, 0.18), (41, 0.04), (42, -0.066), (43, -0.065), (44, 0.103), (45, 0.135), (46, 0.135), (47, -0.032), (48, -0.147), (49, -0.251)]
simIndex simValue paperId paperTitle
same-paper 1 0.96473455 50 nips-2007-Combined discriminative and generative articulated pose and non-rigid shape estimation
Author: Leonid Sigal, Alexandru Balan, Michael J. Black
Abstract: Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. This predicted pose and shape are used to initialize a generative model for more detailed pose and shape estimation. The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments. 1
2 0.78997982 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
Author: Zhengdong Lu, Cristian Sminchisescu, Miguel Á. Carreira-Perpiñán
Abstract: Reliably recovering 3D human pose from monocular video requires models that bias the estimates towards typical human poses and motions. We construct priors for people tracking using the Laplacian Eigenmaps Latent Variable Model (LELVM). LELVM is a recently introduced probabilistic dimensionality reduction model that combines the advantages of latent variable models—a multimodal probability density for latent and observed variables, and globally differentiable nonlinear mappings for reconstruction and dimensionality reduction—with those of spectral manifold learning methods—no local optima, ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension. LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as particle filters. We analyze the performance of a LELVM-based probabilistic sigma point mixture tracker in several real and synthetic human motion sequences and demonstrate that LELVM not only provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements, but also compares favorably with alternative trackers based on PCA or GPLVM priors.

Recent research in reconstructing articulated human motion has focused on methods that can exploit available prior knowledge on typical human poses or motions in an attempt to build more reliable algorithms. The high dimensionality of the human ambient pose space (between 30 and 60 joint angles or joint positions, depending on the desired accuracy level) makes exhaustive search prohibitively expensive. This has a negative impact on existing trackers, which are often not sufficiently reliable at reconstructing human-like poses, self-initializing or recovering from failure. Such difficulties have stimulated research in algorithms and models that reduce the effective working space, either using generic search focusing methods (annealing, state space decomposition, covariance scaling) or by exploiting specific problem structure (e.g. kinematic jumps). Experience with these procedures has nevertheless shown that any search strategy, no matter how effective, can be made significantly more reliable if restricted to low-dimensional state spaces. This permits a more thorough exploration of the typical solution space for a computational effort comparable to that of a high-dimensional method. The argument correlates well with the belief that the human pose space, although high-dimensional in its natural ambient parameterization, has a significantly lower perceptual (latent or intrinsic) dimensionality, at least in a practical sense: many poses that are possible are so improbable in many real-world situations that it pays off to encode them with low accuracy. A perceptual representation has to be powerful enough to capture the diversity of human poses in a sufficiently broad domain of applicability (the task domain), yet compact and analytically tractable for search and optimization. This justifies the use of models that are nonlinear and low-dimensional (able to unfold highly nonlinear manifolds with low distortion), yet probabilistically motivated and globally continuous for efficient optimization. Reducing dimensionality is not the only goal: perceptual representations have to preserve critical properties of the ambient space. Reliable tracking needs locality: nearby regions in ambient space have to be mapped to nearby regions in latent space.
If this does not hold, the tracker is forced to make unrealistically large and difficult-to-predict jumps in latent space in order to follow smooth trajectories in the joint-angle ambient space.

In this paper we propose to model priors for articulated motion using a recently introduced probabilistic dimensionality reduction method, the Laplacian Eigenmaps Latent Variable Model (LELVM) [1]. Section 1 discusses the requirements of priors for articulated motion in the context of probabilistic and spectral methods for manifold learning, and section 2 describes LELVM and shows how it combines both types of methods in a principled way. Section 3 describes our tracking framework (using a particle filter) and section 4 shows experiments with synthetic and real human motion sequences using LELVM priors learned from motion-capture data.

Related work: There is significant work in human tracking, using both generative and discriminative methods. Due to space limitations, we will focus on the more restricted class of 3D generative algorithms based on learned state priors, and not aim at a full literature review. Deriving compact prior representations for tracking people or other articulated objects is an active research field, steadily growing with the increased availability of human motion capture data. Howe et al. and Sidenbladh et al. [2] propose Gaussian mixture representations of short human motion fragments (snippets) and integrate them in a Bayesian MAP estimation framework that uses 2D human joint measurements, independently tracked by scaled prismatic models [3]. Brand [4] models the human pose manifold using a Gaussian mixture and uses an HMM to infer the mixture component index based on a temporal sequence of human silhouettes. Sidenbladh et al. [5] use similar dynamic priors and exploit ideas in texture synthesis—efficient nearest-neighbor search for similar motion fragments at runtime—in order to build a particle-filter tracker with an observation model based on contour and image intensity measurements. Sminchisescu and Jepson [6] propose a low-dimensional probabilistic model based on fitting a parametric reconstruction mapping (sparse radial basis function) and a parametric latent density (Gaussian mixture) to the embedding produced with a spectral method. They track humans walking and involved in conversations using a Bayesian multiple-hypotheses framework that fuses contour and intensity measurements. Urtasun et al. [7] use a dynamic MAP estimation framework based on a GPLVM and 2D human joint correspondences obtained from an independent image-based tracker. Li et al. [8] use a coordinated mixture of factor analyzers within a particle filtering framework, in order to reconstruct human motion in multiple views using chamfer matching to score different configurations. Wang et al. [9] learn a latent space with associated dynamics where both the dynamics and observation mapping are Gaussian processes, and Urtasun et al. [10] use it for tracking. Taylor et al. [11] also learn a binary latent space with dynamics (using an energy-based model) but apply it to synthesis, not tracking. Our work learns a static, generative low-dimensional model of poses and integrates it into a particle filter for tracking. We show its ability to work with real or partially missing data and to track multiple activities.

1 Priors for articulated human pose

We consider the problem of learning a probabilistic low-dimensional model of human articulated motion.
Call y ∈ R^D the representation in ambient space of the articulated pose of a person. In this paper, y contains the 3D locations of anywhere between 10 and 60 markers located on the person's joints (other representations such as joint angles are also possible). The values of y have been normalised for translation and rotation in order to remove rigid motion and leave only the articulated motion (see section 3 for how we track the rigid motion). While y is high-dimensional, the motion pattern lives in a low-dimensional manifold because most values of y yield poses that violate body constraints or are simply atypical for the motion type considered. Thus we want to model y in terms of a small number of latent variables x given a collection of poses {y_n}_{n=1}^N (recorded from a human with motion-capture technology). The model should satisfy the following: (1) It should define a probability density for x and y, to be able to deal with noise (in the image or marker measurements) and uncertainty (from missing data due to occlusion or markers that drop), and to allow integration in a sequential Bayesian estimation framework. The density model should also be flexible enough to represent multimodal densities. (2) It should define mappings for dimensionality reduction F : y → x and reconstruction f : x → y that apply to any value of x and y (not just those in the training set); such mappings should be defined on a global coordinate system, be continuous (to avoid physically impossible discontinuities) and differentiable (to allow efficient optimisation when tracking), yet flexible enough to represent the highly nonlinear manifold of articulated poses.

From a statistical machine learning point of view, this is precisely what latent variable models (LVMs) do; for example, factor analysis defines linear mappings and Gaussian densities, while the generative topographic mapping (GTM; [12]) defines nonlinear mappings and a Gaussian-mixture density in ambient space. However, factor analysis is too limited to be of practical use, and GTM—while flexible—has two important practical problems: (1) the latent space must be discretised to allow tractable learning and inference, which limits it to very low (2–3) latent dimensions; (2) the parameter estimation is prone to bad local optima that result in highly distorted mappings. Another recently introduced dimensionality reduction method, GPLVM [13], which uses a Gaussian process mapping f(x), partly improves this situation by defining a tunable parameter x_n for each data point y_n. While still prone to local optima, this allows the use of a better initialisation for {x_n}_{n=1}^N (obtained from a spectral method, see later). This has prompted the application of GPLVM for tracking human motion [7]. However, GPLVM has some disadvantages: its training is very costly (each step of the gradient iteration is cubic in the number of training points N, though approximations based on using few points exist); unlike true LVMs, it defines neither a posterior distribution p(x|y) in latent space nor a dimensionality reduction mapping E{x|y}; and the latent representation it obtains is not ideal. For example, for periodic motions such as running or walking, repeated periods (identical up to small noise) can be mapped apart from each other in latent space because nothing constrains x_n and x_m to be close even when y_n = y_m (see fig. 3 and [10]).
There exists a different class of dimensionality reduction methods, spectral methods (such as Isomap, LLE or Laplacian eigenmaps [14]), that have advantages and disadvantages complementary to those of LVMs. They define neither mappings nor densities but just a correspondence (x_n, y_n) between points in latent space x_n and ambient space y_n. However, the training is efficient (a sparse eigenvalue problem) and has no local optima, and often yields a correspondence that successfully models highly nonlinear, convoluted manifolds such as the Swiss roll. While these attractive properties have spurred recent research in spectral methods, their lack of mappings and densities has limited their applicability in people tracking. However, a new model that combines the advantages of LVMs and spectral methods in a principled way has been recently proposed [1], which we briefly describe next.

2 The Laplacian Eigenmaps Latent Variable Model (LELVM)

LELVM is based on a natural way of defining an out-of-sample mapping for Laplacian eigenmaps (LE) which, in addition, results in a density model. In LE, we typically first define a k-nearest-neighbour graph on the sample data $\{\mathbf{y}_n\}_{n=1}^N$ and weigh each edge $\mathbf{y}_n \sim \mathbf{y}_m$ by a Gaussian affinity function $K(\mathbf{y}_n,\mathbf{y}_m) = w_{nm} = \exp\bigl(-\tfrac{1}{2}\|(\mathbf{y}_n-\mathbf{y}_m)/\sigma\|^2\bigr)$. Then the latent points X result from:

$$\min \operatorname{tr}\bigl(\mathbf{X}\mathbf{L}\mathbf{X}^\top\bigr) \quad \text{s.t.} \quad \mathbf{X}\in\mathbb{R}^{L\times N},\; \mathbf{X}\mathbf{D}\mathbf{X}^\top=\mathbf{I},\; \mathbf{X}\mathbf{D}\mathbf{1}=\mathbf{0} \qquad (1)$$

where we define the matrix $\mathbf{X}_{L\times N} = (\mathbf{x}_1,\dots,\mathbf{x}_N)$, the symmetric affinity matrix $\mathbf{W}_{N\times N}$, the degree matrix $\mathbf{D} = \operatorname{diag}\bigl(\sum_{n=1}^N w_{nm}\bigr)$, the graph Laplacian matrix $\mathbf{L} = \mathbf{D}-\mathbf{W}$, and $\mathbf{1} = (1,\dots,1)^\top$. The constraints eliminate the two trivial solutions $\mathbf{X}=\mathbf{0}$ (by fixing an arbitrary scale) and $\mathbf{x}_1=\dots=\mathbf{x}_N$ (by removing $\mathbf{1}$, which is an eigenvector of $\mathbf{L}$ associated with a zero eigenvalue). The solution is given by the leading eigenvectors $\mathbf{v}_2,\dots,\mathbf{v}_{L+1}$ of the normalised affinity matrix $\mathcal{N} = \mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}$, namely $\mathbf{X} = \mathbf{V}^\top\mathbf{D}^{-1/2}$ with $\mathbf{V}_{N\times L} = (\mathbf{v}_2,\dots,\mathbf{v}_{L+1})$ (an a posteriori translated, rotated or uniformly scaled X is equally valid).

Following [1], we now define an out-of-sample mapping F(y) = x for a new point y as a semi-supervised learning problem, by recomputing the embedding as in (1) (i.e., augmenting the graph Laplacian with the new point), but keeping the old embedding fixed:

$$\min_{\mathbf{x}\in\mathbb{R}^L} \operatorname{tr}\!\left( \begin{pmatrix}\mathbf{X} & \mathbf{x}\end{pmatrix} \begin{pmatrix}\mathbf{L} & -\mathbf{K}(\mathbf{y}) \\ -\mathbf{K}(\mathbf{y})^\top & \mathbf{1}^\top\mathbf{K}(\mathbf{y})\end{pmatrix} \begin{pmatrix}\mathbf{X}^\top \\ \mathbf{x}^\top\end{pmatrix} \right) \qquad (2)$$

where $K_n(\mathbf{y}) = K(\mathbf{y},\mathbf{y}_n) = \exp\bigl(-\tfrac{1}{2}\|(\mathbf{y}-\mathbf{y}_n)/\sigma\|^2\bigr)$ for $n=1,\dots,N$ is the kernel induced by the Gaussian affinity (applied only to the k nearest neighbours of y, i.e., $K_n(\mathbf{y})=0$ if $\mathbf{y}\nsim\mathbf{y}_n$). This is one natural way of adding a new point to the embedding by keeping the existing embedded points fixed. We need not use the constraints from (1) because they would trivially determine x, and the uninteresting solutions X = 0 and X = constant were already removed in the old embedding anyway. The solution yields an out-of-sample dimensionality reduction mapping x = F(y):

$$\mathbf{x} = F(\mathbf{y}) = \mathbf{X}\,\frac{\mathbf{K}(\mathbf{y})}{\mathbf{1}^\top\mathbf{K}(\mathbf{y})} = \sum_{n=1}^N \frac{K(\mathbf{y},\mathbf{y}_n)}{\sum_{n'=1}^N K(\mathbf{y},\mathbf{y}_{n'})}\,\mathbf{x}_n \qquad (3)$$

applicable to any point y (new or old). This mapping is formally identical to a Nadaraya-Watson estimator (kernel regression; [15]) using as data $\{(\mathbf{x}_n,\mathbf{y}_n)\}_{n=1}^N$ and the kernel K. We can take this a step further by defining an LVM that has as joint distribution a kernel density estimate (KDE):

$$p(\mathbf{x},\mathbf{y}) = \frac{1}{N}\sum_{n=1}^N K_y(\mathbf{y},\mathbf{y}_n)\,K_x(\mathbf{x},\mathbf{x}_n), \qquad p(\mathbf{y}) = \frac{1}{N}\sum_{n=1}^N K_y(\mathbf{y},\mathbf{y}_n), \qquad p(\mathbf{x}) = \frac{1}{N}\sum_{n=1}^N K_x(\mathbf{x},\mathbf{x}_n)$$

where $K_y$ is proportional to K so that it integrates to 1, and $K_x$ is a pdf kernel in x-space. Consequently, the marginals in observed and latent space are also KDEs, and the dimensionality reduction and reconstruction mappings are given by kernel regression (the conditional means $E\{\mathbf{y}|\mathbf{x}\}$, $E\{\mathbf{x}|\mathbf{y}\}$):

$$F(\mathbf{y}) = \sum_{n=1}^N p(n|\mathbf{y})\,\mathbf{x}_n, \qquad f(\mathbf{x}) = \sum_{n=1}^N \frac{K_x(\mathbf{x},\mathbf{x}_n)}{\sum_{n'=1}^N K_x(\mathbf{x},\mathbf{x}_{n'})}\,\mathbf{y}_n = \sum_{n=1}^N p(n|\mathbf{x})\,\mathbf{y}_n. \qquad (4)$$

We allow the bandwidths to be different in the latent and ambient spaces: $K_x(\mathbf{x},\mathbf{x}_n) \propto \exp\bigl(-\tfrac{1}{2}\|(\mathbf{x}-\mathbf{x}_n)/\sigma_x\|^2\bigr)$ and $K_y(\mathbf{y},\mathbf{y}_n) \propto \exp\bigl(-\tfrac{1}{2}\|(\mathbf{y}-\mathbf{y}_n)/\sigma_y\|^2\bigr)$. They may be tuned to control the smoothness of the mappings and densities [1]. Thus, LELVM naturally extends an LE embedding (efficiently obtained as a sparse eigenvalue problem with a cost $O(N^2)$) to global, continuous, differentiable mappings (NW estimators) and potentially multimodal densities having the form of a Gaussian KDE. This allows easy computation of posterior probabilities such as p(x|y) (unlike GPLVM). It can use a continuous latent space of arbitrary dimension L (unlike GTM) by simply choosing L eigenvectors in the LE embedding. It has no local optima since it is based on the LE embedding. LELVM can learn convoluted mappings (e.g. the Swiss roll) and define maps and densities for them [1]. The only parameters to set are the graph parameters (number of neighbours k, affinity width σ) and the smoothing bandwidths σ_x, σ_y.
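To make the machinery above concrete, here is a small, didactic numpy sketch of the LELVM pieces: the k-nearest-neighbour Gaussian affinity graph, the LE embedding from the eigenvectors of the normalised affinity matrix, and the Nadaraya-Watson mappings and latent density of eqs. (3)-(4). This is not the authors' code: it uses dense matrices (the real eigenproblem is sparse), assumes a connected graph, and the function names and default hyperparameters are illustrative.

    import numpy as np
    from scipy.spatial.distance import cdist

    def lelvm_fit(Y, L=2, k=10, sigma=1.0):
        # Y: (N, D) training poses. Returns latent coordinates, one row per point.
        W = np.exp(-0.5 * cdist(Y, Y, 'sqeuclidean') / sigma**2)
        np.fill_diagonal(W, 0.0)
        # Keep only the k largest affinities per row, symmetrised.
        keep = np.zeros_like(W, dtype=bool)
        nn = np.argsort(-W, axis=1)[:, :k]
        np.put_along_axis(keep, nn, True, axis=1)
        W = np.where(keep | keep.T, W, 0.0)
        d = W.sum(axis=1)                       # degrees (assumed nonzero)
        A = W / np.sqrt(np.outer(d, d))         # D^{-1/2} W D^{-1/2}
        evals, V = np.linalg.eigh(A)            # eigenvalues in ascending order
        V = V[:, -2:-(L + 2):-1]                # v_2, ..., v_{L+1}
        return V / np.sqrt(d)[:, None]          # rows of (V^T D^{-1/2})^T = X^T

    def nw_regress(q, anchors, targets, bw):
        # Nadaraya-Watson estimator: F(y) with (Y, X), or f(x) with (X, Y).
        K = np.exp(-0.5 * cdist(np.atleast_2d(q), anchors, 'sqeuclidean') / bw**2)
        return (K / K.sum(axis=1, keepdims=True)) @ targets

    def latent_density(q, X, bw_x):
        # KDE prior p(x), up to the Gaussian normalising constant.
        K = np.exp(-0.5 * cdist(np.atleast_2d(q), X, 'sqeuclidean') / bw_x**2)
        return K.mean(axis=1)

With X = lelvm_fit(Y), the out-of-sample mapping of eq. (3) is nw_regress(y_new, Y, X, sigma), and the reconstruction f(x) of eq. (4) is nw_regress(x_new, X, Y, sigma_x).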
3 Tracking framework

We follow the sequential Bayesian estimation framework, where for state variables s and observation variables z we have the recursive prediction and correction equations:

$$p(\mathbf{s}_t|\mathbf{z}_{0:t-1}) = \int p(\mathbf{s}_t|\mathbf{s}_{t-1})\,p(\mathbf{s}_{t-1}|\mathbf{z}_{0:t-1})\,d\mathbf{s}_{t-1}, \qquad p(\mathbf{s}_t|\mathbf{z}_{0:t}) \propto p(\mathbf{z}_t|\mathbf{s}_t)\,p(\mathbf{s}_t|\mathbf{z}_{0:t-1}). \qquad (5)$$

We define the state variables as s = (x, d), where x ∈ R^L is the low-dimensional latent pose and d ∈ R^3 is the centre-of-mass location of the body (in the experiments our state also includes the orientation of the body, but for simplicity here we describe only the translation). The observed variables z consist of image features or the perspective projection of the markers on the camera plane. The mapping from state to observations is (for the markers' case, assuming M markers):

$$\mathbf{x}\in\mathbb{R}^{L} \;\xrightarrow{\;f\;}\; \mathbf{y}\in\mathbb{R}^{3M} \;\xrightarrow{\;\oplus\,\mathbf{d}\in\mathbb{R}^{3}\;}\; \cdot \;\xrightarrow{\;P\;}\; \mathbf{z}\in\mathbb{R}^{2M} \qquad (6)$$

where f is the LELVM reconstruction mapping (learnt from mocap data); ⊕ shifts each 3D marker by d; and P is the perspective projection (pinhole camera), applied to each 3D point separately. Here we use a simple observation model p(z_t|s_t): Gaussian with mean given by the transformation (6) and isotropic covariance (set by the user to control the influence of measurements in the tracking). We assume known correspondences and observations that are obtained either from the 3D markers (for tracking synthetic data) or from 2D tracks obtained from a 2D tracker. Our dynamics model is

$$p(\mathbf{s}_t|\mathbf{s}_{t-1}) \propto p_d(\mathbf{d}_t|\mathbf{d}_{t-1})\,p_x(\mathbf{x}_t|\mathbf{x}_{t-1})\,p(\mathbf{x}_t) \qquad (7)$$

where both dynamics models for d and x are random walks: Gaussians centred at the previous step values d_{t-1} and x_{t-1}, respectively, with isotropic covariance (set by the user to control the influence of dynamics in the tracking); and p(x_t) is the LELVM prior. Thus the overall dynamics predicts states that are both near the previous state and yield feasible poses. Of course, more complex dynamics models could be used if e.g. the speed and direction of movement are known.

As tracker we use the Gaussian mixture Sigma-point particle filter (GMSPPF) [16]. This is a particle filter that uses a Gaussian mixture representation for the posterior distribution in state space and updates it with a Sigma-point Kalman filter. This Gaussian mixture is then used as the proposal distribution to draw the particles. As in other particle filter implementations, the prediction step is carried out by approximating the integral (5) with particles and updating the particles' weights. Then, a new Gaussian mixture is fitted with a weighted EM algorithm to these particles. This replaces the resampling stage needed by many particle filters and mitigates the problem of sample depletion, while also preventing the number of components in the Gaussian mixture from growing over time. The choice of this particular tracker is not critical; we use it to illustrate the fact that LELVM can be introduced in any probabilistic tracker for nonlinear, non-Gaussian models. Given the corrected distribution p(s_t|z_{0:t}), we choose its mean as the recovered state (pose and location). It is also possible to choose instead the mode closest to the state at t - 1, which could be found by mean-shift or Newton algorithms [17], since we are using a Gaussian-mixture representation in state space.
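The paper's tracker is the GMSPPF of [16]. As a simplified stand-in that still shows where the random-walk dynamics, the LELVM prior p(x_t) of eq. (7) and the observation model enter, here is one predict-and-weight step of a plain bootstrap particle filter; the callables f, project and p_latent, the noise levels and the function name are all assumptions carried over from the sketch above.

    import numpy as np

    def pf_step(x, d, z_t, f, project, p_latent,
                sx=0.05, sd=0.02, obs_sigma=2.0, rng=None):
        # x: (P, L) latent particles, d: (P, 3) body locations, z_t: (M, 2) tracks.
        rng = rng or np.random.default_rng()
        # Predict: Gaussian random walks for x and d, as in eq. (7).
        x_new = x + sx * rng.standard_normal(x.shape)
        d_new = d + sd * rng.standard_normal(d.shape)
        logw = np.empty(len(x_new))
        for i in range(len(x_new)):
            y = f(x_new[i])                        # LELVM reconstruction, R^{3M}
            markers = y.reshape(-1, 3) + d_new[i]  # the shift (+ d) of eq. (6)
            z_pred = project(markers)              # perspective projection P
            # Isotropic Gaussian observation model on the 2D marker tracks.
            logw[i] = -0.5 * np.sum((z_pred - z_t) ** 2) / obs_sigma**2
        logw += np.log(p_latent(x_new) + 1e-300)   # LELVM prior p(x_t) of eq. (7)
        w = np.exp(logw - logw.max())
        return x_new, d_new, w / w.sum()

Because the proposal is the random walk itself, the extra weight factor p(x_t) is exactly the LELVM prior term of eq. (7); the GMSPPF additionally refits a Gaussian mixture to the weighted particles instead of resampling, as described above.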
4 Experiments

We demonstrate our low-dimensional tracker on image sequences of people walking and running, both synthetic (fig. 1) and real (figs. 2–3). Fig. 1 shows that the model copes well with persistent partial occlusion and severely subsampled training data (A, B), and quantitatively evaluates temporal reconstruction (C).

For all our experiments, the LELVM parameters (number of neighbours k, Gaussian affinity σ, and bandwidths σ_x and σ_y) were set manually. We mainly considered 2D latent spaces (for pose, plus 6D for rigid motion), which were expressive enough for our experiments; more complex, higher-dimensional models are straightforward to construct. The initial state distribution p(s_0) was chosen to be a broad Gaussian, the dynamics and observation covariances were set manually to control the tracking smoothness, and the GMSPPF tracker used a 5-component Gaussian mixture in latent space (and in the state space of rigid motion) and a small set of 500 particles. The 3D representation we use is a 102-D vector obtained by concatenating the 3D marker coordinates of all the body joints. These would be highly unconstrained if estimated independently, but we only use them as an intermediate representation; tracking actually occurs in the latent space, tightly controlled using the LELVM prior. For the synthetic experiments and some of the real experiments (figs. 2–3) the camera parameters and the body proportions were known (for the latter, we used the 2D outputs of [6]); for the CMU mocap video (fig. 2B) we roughly guessed them. We used mocap data from several sources (CMU, OSU). As observations we always use 2D marker positions, which, depending on the analyzed sequence, were either known (the synthetic case), provided by an existing tracker [6], or specified manually (fig. 2B). Alternatively, 2D point trackers similar to the ones of [7] can be used. The forward generative model is obtained by combining the latent-to-ambient space mapping (this provides the positions of the 3D markers) with a perspective projection transformation. The observation model is a product of Gaussians, each measuring the probability of a particular marker position given its corresponding image point track.
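The last two sentences describe the camera stage of the forward model and the product-of-Gaussians observation model; a minimal pinhole version is sketched below. The intrinsics K and extrinsics R, t are assumed to come from the calibration, and the visibility mask is our own device for the occluded or dropped markers discussed next (a closure over the calibration yields the one-argument project used in the previous sketch).

    import numpy as np

    def project(markers, K, R, t):
        # markers: (M, 3) world points -> (M, 2) image points (pinhole camera).
        cam = markers @ R.T + t            # world -> camera coordinates
        uvw = cam @ K.T                    # apply intrinsics
        return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

    def obs_log_lik(z_pred, z_obs, sigma, visible=None):
        # Product of per-marker isotropic 2D Gaussians, computed in the log
        # domain; markers whose tracks are missing are skipped via the mask.
        if visible is None:
            visible = np.ones(len(z_obs), dtype=bool)
        r2 = np.sum((z_pred[visible] - z_obs[visible]) ** 2, axis=1)
        return -0.5 * r2.sum() / sigma**2 - visible.sum() * np.log(2 * np.pi * sigma**2)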
Experiments with synthetic data: we analyze the performance of our tracker in controlled conditions (noise-perturbed, synthetically generated image tracks), both under regular circumstances (reasonable sampling of training data) and under more severe conditions with subsampled training points and persistent partial occlusion (the man running behind a fence, with many of the 2D marker tracks obstructed). Fig. 1B,C shows both the posterior (filtered) latent space distribution obtained from our tracker, and its mean (we do not show the distribution of the global rigid body motion; in all experiments this is tracked with good accuracy). In the latent space plot shown in fig. 1A, the onset of running (two cycles were used) appears as a separate region external to the main loop. It does not appear in the subsampled training set in fig. 1B, where only one running cycle was used for training and the onset of running was removed. In each case, one can see that the model is able to track quite competently, with a modest decrease in its temporal accuracy, shown in fig. 1C, where the averages are computed per 3D joint (normalised wrt body height). Subsampling causes some ambiguity in the estimate, e.g. see the bimodality in the right plot in fig. 1C. In another set of experiments (not shown) we also tracked using different subsets of 3D markers. The estimates were accurate even when about 30% of the markers were dropped.

Experiments with real images: this shows our tracker's ability to work with real motions of different people, with different body proportions, not in its latent variable model training set (figs. 2–3). We study walking, running and turns. In all cases, tracking and 3D reconstruction are reasonably accurate. We have also run comparisons against low-dimensional models based on PCA and GPLVM (fig. 3). It is important to note that, for LELVM, errors in the pose estimates are primarily caused by mismatches between the mocap data used to learn the LELVM prior and the body proportions of the person in the video. For example, the body proportions of the OSU motion-captured walker are quite different from those of the image in figs. 2–3 (e.g. note how the legs of the stick man are shorter relative to the trunk). Likewise, the style of the runner from the OSU data (e.g. the swinging of the arms) is quite different from that of the video. Finally, the interest points tracked by the 2D tracker do not entirely correspond, either in number or location, to the motion capture markers, and are noisy and sometimes missing. In future work, we plan to include an optimization step to also estimate the body proportions. This would be complicated for a general, unconstrained model because the dimensions of the body couple with the pose, so either one or the other can be changed to improve the tracking error (the observation likelihood can also become singular). But for dedicated prior pose models like ours these difficulties should be significantly reduced. The model simply cannot assume highly unlikely stances (these are either not representable at all, or have reduced probability) and thus avoids compensatory, unrealistic body proportion estimates.
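The per-frame error just mentioned (the formula is restated in the Figure 1 caption below) is straightforward to compute; the height used for normalisation is assumed to be supplied by the caller:

    import numpy as np

    def rmse_frame(y_hat, y_true, height):
        # y_hat, y_true: reconstructed and ground-truth poses in R^{3M};
        # returns RMSE over the M markers, normalised by the stick man's height.
        diff2 = np.sum((y_hat.reshape(-1, 3) - y_true.reshape(-1, 3)) ** 2, axis=1)
        return np.sqrt(diff2.mean()) / height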
Figure 1: OSU running man motion capture data. A: we use 217 datapoints for training LELVM (with added noise) and for tracking. Row 1: tracking in the 2D latent space. The contours (very tight in this sequence) are the posterior probability. Row 2: perspective-projection-based observations with occlusions. Row 3: each quadruplet (a, a′, b, b′) shows the true pose of the running man from front and side views (a, b), and the pose reconstructed by tracking with our model (a′, b′). B: we use the first running cycle for training LELVM and the second cycle for tracking. C: RMSE errors for each frame, for the tracking of A (left plot) and B (middle plot), normalised so that 1 equals the height of the stick man: $\mathrm{RMSE}(n) = \bigl(\frac{1}{M}\sum_{j=1}^{M}\|\hat{\mathbf{y}}_{nj}-\mathbf{y}_{nj}\|^{2}\bigr)^{1/2}$ over all 3D locations of the M markers, i.e., a comparison of the reconstructed stick man $\hat{\mathbf{y}}_n$ with the ground-truth stick man $\mathbf{y}_n$. Right plot: multimodal posterior distribution in pose space for the model of A (frame 42).

Comparison with PCA and GPLVM (fig. 3): for these models, the tracker uses the same GMSPPF setting as for LELVM (number of particles, initialisation, random-walk dynamics, etc.) but with the mapping y = f(x) provided by GPLVM or PCA, and with a uniform prior p(x) in latent space (since neither GPLVM nor the non-probabilistic PCA provides one). The LELVM tracker uses both its f(x) and its latent space prior p(x), as discussed. All methods use a 2D latent space. We ensured the best possible training of GPLVM by model selection based on multiple runs. For PCA, the latent space looks deceptively good, showing non-intersecting loops. However, (1) individual loops do not collect together as they should (for LELVM they do); (2) worse still, the mapping from 2D to pose space yields a poor observation model. The reason is that the loop in 102-D pose space is nonlinearly bent and a plane can at best intersect it at a few points, so the tracker often stays put at one of those (typically an "average" standing position), since leaving it would increase the error a lot. Using more latent dimensions would improve this, but as LELVM shows, this is not necessary. For GPLVM, we found high sensitivity to filter initialisation: the estimates have high variance across runs and are inaccurate ≈ 80% of the time. When it fails, the GPLVM tracker often freezes in latent space, like PCA. When it does succeed, it produces results that are comparable with LELVM, although somewhat less accurate visually. However, even then GPLVM's latent space consists of continuous chunks spread apart and offset from each other; GPLVM has no incentive to place nearby two latent points that map to the same y. This effect, combined with the lack of a data-sensitive, realistic latent space density p(x), makes GPLVM jump erratically from chunk to chunk, in contrast with LELVM, which smoothly follows the 1D loop. Some GPLVM problems might be alleviated using higher-order dynamics, but our experiments suggest that such modeling sophistication is less crucial if locality constraints are correctly modeled (as in LELVM). We conclude that, compared to LELVM, GPLVM is significantly less robust for tracking, has much higher training overhead, and lacks some operations (e.g. computing latent conditionals based on partly missing ambient data).
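That last operation, latent conditionals from partly missing ambient data, is exact for the Gaussian-KDE joint of LELVM: marginalising the missing coordinates of y simply restricts the ambient kernel to the observed coordinates. A hedged sketch (the function name and interface are ours):

    import numpy as np
    from scipy.spatial.distance import cdist

    def F_missing(y_obs, obs_idx, Y, X, sigma):
        # Latent conditional mean E{x | observed part of y}. Because the joint
        # is a Gaussian KDE, integrating out the missing ambient dimensions
        # leaves a Nadaraya-Watson estimate on the observed dimensions only.
        K = np.exp(-0.5 * cdist([y_obs], Y[:, obs_idx], 'sqeuclidean') / sigma**2)
        return (K / K.sum()) @ X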
Figure 2: A: tracking of a video from [6] (turning & walking). We use 220 datapoints (3 full walking cycles) for training LELVM. Row 1: tracking in the 2D latent space. The contours are the estimated posterior probability. Row 2: tracking based on markers. The red dots are the 2D tracks and the green stick man is the 3D reconstruction obtained using our model. Row 3: our 3D reconstruction from a different viewpoint. B: tracking of a person running straight towards the camera. Notice the scale changes and possible forward-backward ambiguities in the 3D estimates. We train the LELVM using 180 datapoints (2.5 running cycles); 2D tracks were obtained by manually marking the video. In both A–B the mocap training data was for a person different from the video's (with different body proportions and motions), and no ground-truth estimate was available for favourable initialisation.

Figure 3: Method comparison, frame 38 (panels show tracking in latent space for LELVM, GPLVM, and PCA). PCA and GPLVM map consecutive walking cycles to spatially distinct latent space regions. Compounded by a data-independent latent prior, the resulting tracker gets easily confused: it jumps across loops and/or remains put, trapped in local optima. In contrast, LELVM is stable and tightly follows a 1D manifold (see videos).
5 Conclusion and future work

We have proposed the use of priors based on the Laplacian Eigenmaps Latent Variable Model (LELVM) for people tracking. LELVM is a probabilistic dimensionality reduction method that combines the advantages of latent variable models and spectral manifold learning algorithms: a multimodal probability density over latent and ambient variables, globally differentiable nonlinear mappings for reconstruction and dimensionality reduction, no local optima, the ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension. LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as particle filters. Our results using a LELVM-based probabilistic sigma-point mixture tracker with several real and synthetic human motion sequences show that LELVM provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements. Comparisons with PCA and GPLVM show LELVM is superior in terms of accuracy, robustness and computation time.

The objective of this paper was to demonstrate the ability of the LELVM prior in a simple setting using 2D tracks obtained automatically or manually, and single-type motions (running, walking). Future work will explore more complex observation models such as silhouettes; the combination of different motion types in the same latent space (whose dimension will exceed 2); and the exploration of multimodal posterior distributions in latent space caused by ambiguities.

Acknowledgments

This work was partially supported by NSF CAREER award IIS–0546857 (MACP), NSF IIS–0535140 and EC MCEXT–025481 (CS). CMU data: http://mocap.cs.cmu.edu (created with funding from NSF EIA–0196217). OSU data: http://accad.osu.edu/research/mocap/mocap data.htm.

References

[1] M. Á. Carreira-Perpiñán and Z. Lu. The Laplacian Eigenmaps Latent Variable Model. In AISTATS, 2007.
[2] N. R. Howe, M. E. Leventon, and W. T. Freeman. Bayesian reconstruction of 3D human motion from single-camera video. In NIPS, volume 12, pages 820–826, 2000.
[3] T.-J. Cham and J. M. Rehg. A multiple hypothesis approach to figure tracking. In CVPR, 1999.
[4] M. Brand. Shadow puppetry. In ICCV, pages 1237–1244, 1999.
[5] H. Sidenbladh, M. J. Black, and L. Sigal. Implicit probabilistic models of human motion for synthesis and tracking. In ECCV, volume 1, pages 784–800, 2002.
[6] C. Sminchisescu and A. Jepson. Generative modeling for continuous non-linearly embedded visual inference. In ICML, pages 759–766, 2004.
[7] R. Urtasun, D. J. Fleet, A. Hertzmann, and P. Fua. Priors for people tracking from small training sets. In ICCV, pages 403–410, 2005.
[8] R. Li, M.-H. Yang, S. Sclaroff, and T.-P. Tian. Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers. In ECCV, volume 2, pages 137–150, 2006.
[9] J. M. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models. In NIPS, volume 18, 2006.
[10] R. Urtasun, D. J. Fleet, and P. Fua. Gaussian process dynamical models for 3D people tracking. In CVPR, pages 238–245, 2006.
[11] G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In NIPS, volume 19, 2007.
[12] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, January 1998.
[13] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, November 2005.
[14] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June 2003.
[15] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[16] R. van der Merwe and E. A. Wan. Gaussian mixture sigma-point particle filters for sequential probabilistic inference in dynamic state-space models. In ICASSP, volume 6, pages 701–704, 2003.
[17] M. Á. Carreira-Perpiñán. Acceleration strategies for Gaussian mean-shift image segmentation. In CVPR, pages 1160–1167, 2006.
3 0.64736623 72 nips-2007-Discriminative Log-Linear Grammars with Latent Variables
Author: Slav Petrov, Dan Klein
Abstract: We demonstrate that log-linear grammars with latent variables can be practically trained using discriminative methods. Central to efficient discriminative training is a hierarchical pruning procedure which allows feature expectations to be efficiently approximated in a gradient-based procedure. We compare L1 and L2 regularization and show that L1 regularization is superior, requiring fewer iterations to converge, and yielding sparser solutions. On full-scale treebank parsing experiments, the discriminative latent models outperform both the comparable generative latent models as well as the discriminative non-latent baselines.
4 0.50667697 56 nips-2007-Configuration Estimates Improve Pedestrian Finding
Author: Duan Tran, David A. Forsyth
Abstract: Fair discriminative pedestrian finders are now available. In fact, these pedestrian finders make most errors on pedestrians in configurations that are uncommon in the training data, for example, mounting a bicycle. This is undesirable. However, the human configuration can itself be estimated discriminatively using structure learning. We demonstrate a pedestrian finder which first finds the most likely human pose in the window using a discriminative procedure trained with structure learning on a small dataset. We then present features (local histogram of oriented gradient and local PCA of gradient) based on that configuration to an SVM classifier. We show, using the INRIA Person dataset, that estimates of configuration significantly improve the accuracy of a discriminative pedestrian finder. 1
5 0.41069669 113 nips-2007-Learning Visual Attributes
Author: Vittorio Ferrari, Andrew Zisserman
Abstract: We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as ‘red’, ‘striped’, or ‘spotted’. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.
6 0.39536557 171 nips-2007-Scan Strategies for Meteorological Radars
7 0.38541177 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes
8 0.34511241 3 nips-2007-A Bayesian Model of Conditioned Perception
9 0.33371219 203 nips-2007-The rat as particle filter
10 0.3082943 181 nips-2007-Sparse Overcomplete Latent Variable Decomposition of Counts Data
11 0.30132043 109 nips-2007-Kernels on Attributed Pointsets with Applications
12 0.27153844 193 nips-2007-The Distribution Family of Similarity Distances
13 0.27020061 18 nips-2007-A probabilistic model for generating realistic lip movements from speech
14 0.25886217 131 nips-2007-Modeling homophily and stochastic equivalence in symmetric relational data
15 0.25379437 19 nips-2007-Active Preference Learning with Discrete Choice Data
16 0.25196394 98 nips-2007-Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion
17 0.2463869 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes
18 0.24444504 211 nips-2007-Unsupervised Feature Selection for Accurate Recommendation of High-Dimensional Image Data
19 0.23984736 163 nips-2007-Receding Horizon Differential Dynamic Programming
20 0.23934381 145 nips-2007-On Sparsity and Overcompleteness in Image Models
topicId topicWeight
[(5, 0.05), (13, 0.021), (16, 0.017), (18, 0.024), (21, 0.055), (31, 0.013), (34, 0.017), (35, 0.024), (47, 0.072), (49, 0.012), (83, 0.093), (85, 0.017), (87, 0.46), (90, 0.041)]
simIndex simValue paperId paperTitle
1 0.9400773 129 nips-2007-Mining Internet-Scale Software Repositories
Author: Erik Linstead, Paul Rigor, Sushil Bajracharya, Cristina Lopes, Pierre F. Baldi
Abstract: Large repositories of source code create new challenges and opportunities for statistical machine learning. Here we first develop Sourcerer, an infrastructure for the automated crawling, parsing, and database storage of open source software. Sourcerer allows us to gather Internet-scale source code. For instance, in one experiment, we gather 4,632 java projects from SourceForge and Apache totaling over 38 million lines of code from 9,250 developers. Simple statistical analyses of the data first reveal robust power-law behavior for package, SLOC, and lexical containment distributions. We then develop and apply unsupervised author-topic, probabilistic models to automatically discover the topics embedded in the code and extract topic-word and author-topic distributions. In addition to serving as a convenient summary for program function and developer activities, these and other related distributions provide a statistical and information-theoretic basis for quantifying and analyzing developer similarity and competence, topic scattering, and document tangling, with direct applications to software engineering. Finally, by combining software textual content with structural information captured by our CodeRank approach, we are able to significantly improve software retrieval performance, increasing the AUC metric to 0.84, roughly 10-30% better than previous approaches based on text alone. Supplementary material may be found at: http://sourcerer.ics.uci.edu/nips2007/nips07.html. 1
2 0.93991458 183 nips-2007-Spatial Latent Dirichlet Allocation
Author: Xiaogang Wang, Eric Grimson
Abstract: In recent years, the language model Latent Dirichlet Allocation (LDA), which clusters co-occurring words into topics, has been widely applied in the computer vision field. However, many of these applications have difficulty with modeling the spatial and temporal structure among visual words, since LDA assumes that a document is a “bag-of-words”. It is also critical to properly design “words” and “documents” when using a language model to solve vision problems. In this paper, we propose a topic model Spatial Latent Dirichlet Allocation (SLDA), which better encodes spatial structures among visual words that are essential for solving many vision problems. The spatial information is not encoded in the values of visual words but in the design of documents. Instead of knowing the partition of words into documents a priori, the word-document assignment becomes a random hidden variable in SLDA. There is a generative procedure, where knowledge of spatial structure can be flexibly added as a prior, grouping visual words which are close in space into the same document. We use SLDA to discover objects from a collection of images, and show it achieves better performance than LDA. 1
same-paper 3 0.86144447 50 nips-2007-Combined discriminative and generative articulated pose and non-rigid shape estimation
Author: Leonid Sigal, Alexandru Balan, Michael J. Black
Abstract: Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. This predicted pose and shape are used to initialize a generative model for more detailed pose and shape estimation. The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments. 1
4 0.8173058 59 nips-2007-Continuous Time Particle Filtering for fMRI
Author: Lawrence Murray, Amos J. Storkey
Abstract: We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. 1
5 0.59619713 189 nips-2007-Supervised Topic Models
Author: Jon D. Mcauliffe, David M. Blei
Abstract: We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of labelled documents. The model accommodates a variety of response types. We derive a maximum-likelihood procedure for parameter estimation, which relies on variational approximations to handle intractable posterior expectations. Prediction problems motivate this research: we use the fitted model to predict response values for new documents. We test sLDA on two real-world problems: movie ratings predicted from reviews, and web page popularity predicted from text descriptions. We illustrate the benefits of sLDA versus modern regularized regression, as well as versus an unsupervised LDA analysis followed by a separate regression. 1
6 0.54475236 73 nips-2007-Distributed Inference for Latent Dirichlet Allocation
7 0.51721376 95 nips-2007-HM-BiTAM: Bilingual Topic Exploration, Word Alignment, and Translation
8 0.49744347 2 nips-2007-A Bayesian LDA-based model for semi-supervised part-of-speech tagging
9 0.49057692 105 nips-2007-Infinite State Bayes-Nets for Structured Domains
10 0.47931522 143 nips-2007-Object Recognition by Scene Alignment
11 0.47755754 47 nips-2007-Collapsed Variational Inference for HDP
12 0.4622466 1 nips-2007-A Bayesian Framework for Cross-Situational Word-Learning
13 0.45678103 172 nips-2007-Scene Segmentation with CRFs Learned from Partially Labeled Images
14 0.45343125 113 nips-2007-Learning Visual Attributes
15 0.44936913 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
16 0.44367579 211 nips-2007-Unsupervised Feature Selection for Accurate Recommendation of High-Dimensional Image Data
17 0.44160074 56 nips-2007-Configuration Estimates Improve Pedestrian Finding
18 0.43493304 169 nips-2007-Retrieved context and the discovery of semantic structure
19 0.42857978 180 nips-2007-Sparse Feature Learning for Deep Belief Networks
20 0.42595959 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression