nips nips2006 nips2006-153 knowledge-graph by maker-knowledge-mining

153 nips-2006-Online Clustering of Moving Hyperplanes


Source: pdf

Author: René Vidal

Abstract: We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. [sent-5, score-0.71]

2 As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. [sent-7, score-0.329]

3 The segmentation of the trajectories is then obtained by clustering their associated normal vectors. [sent-8, score-0.519]

4 The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. [sent-9, score-0.534]

5 We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. [sent-10, score-0.451]

6 Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e. [sent-13, score-0.548]

7 (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. [sent-15, score-0.265]

8 A natural extension of PCA is subspace clustering, which refers to the problem of fitting a union of n ≥ 1 linear subspaces {S_j ⊂ R^D}_{j=1}^n of unknown dimensions d_j = dim(S_j), 0 < d_j < D, to N points X = {x_i ∈ R^D}_{i=1}^N drawn from ∪_{j=1}^n S_j, without knowing which points belong to which subspace. [sent-18, score-0.211]

9 Existing methods randomly choose a basis for each subspace, and then iterate between data segmentation and standard PCA. [sent-21, score-0.208]

10 Then, a set of polynomials is fitted to the projected data points and a basis for each one of the projected subspaces is obtained from the derivatives of these polynomials at the data points. [sent-25, score-0.223]
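For hyperplanes, this batch fitting step reduces to finding a (near-)null vector of the embedded data matrix. The following is a minimal sketch under that assumption (a single fitted polynomial, noise-free or lightly noisy data, and at least M_n(D) points), not GPCA's full implementation:

```python
import numpy as np

def fit_poly_coeffs(V):
    """Least-squares coefficients c of a vanishing polynomial, given the
    embedded data matrix V = [nu_n(x_1), ..., nu_n(x_N)]^T.
    Assumes N >= M_n(D); the right singular vector with smallest singular
    value makes c^T nu_n(x_i) as close to zero as possible for all i."""
    _, _, Vt = np.linalg.svd(V, full_matrices=False)
    return Vt[-1]   # unit-norm by construction
```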

11 That is, the subspace bases and the segmentation of the data are obtained only after all the data points have been collected. [sent-28, score-0.329]

12 In addition, existing methods are designed for clustering data lying in a collection of static subspaces, i.e. [sent-29, score-0.218]

13 For time-varying problems, e.g., dynamic texture segmentation, one typically applies them to a moving time window, under the assumption that the subspaces are static within that window. [sent-34, score-0.403]

14 A major disadvantage of this approach is that it does not incorporate temporal coherence, because the segmentation and the bases at time t + 1 are obtained independently from those at time t. [sent-35, score-0.34]

15 In this paper, we propose a computationally simple and temporally coherent online algorithm for clustering point trajectories lying in a variable number of moving hyperplanes. [sent-37, score-0.477]

16 We model a union of n moving hyperplanes in R^D, S_j(t) = {x ∈ R^D : b_j(t)⊤x = 0}, j = 1, …, n, [sent-38, score-0.612]

17 where b_j(t) ∈ R^D, as the zero set of a polynomial with time-varying coefficients. [sent-41, score-0.272]

18 Starting from an initial polynomial at time t, we compute an update of the polynomial coefficients using normalized gradient descent. [sent-42, score-0.551]

19 The hyperplane normals are then estimated from the derivatives of the new polynomial at each trajectory. [sent-43, score-0.518]

20 The segmentation of the trajectories is obtained by clustering their associated normal vectors. [sent-44, score-0.519]

21 As time proceeds, new data are added, and the estimates of the polynomial coefficients are more accurate, because they are based on more observations. [sent-45, score-0.27]

22 This not only makes the segmentation of the data more accurate, but also allows us to handle a variable number of hyperplanes. [sent-46, score-0.233]

23 We test our approach on the challenging problem of segmenting dynamic textures from rigid motions in video. [sent-47, score-0.327]

24 Section 2, Recursive estimation of a single hyperplane: In this section, we review the normalized gradient algorithm for estimating a single hyperplane. [sent-48, score-0.285]

25 We consider both static and moving hyperplanes, and analyze the stability of the algorithm in each case. [sent-49, score-0.26]

26 By Theorem 2.8, page 77 of [5], the following normalized gradient recursive identifier b̂(t+1) = b̂(t) − µ (b̂(t)⊤x(t) − y(t)) x(t) / (1 + µ‖x(t)‖²), (2) where µ > 0 is a fixed parameter, is such that b̂(t) → b exponentially if the regressors {x(t)} are persistently exciting, i.e. [sent-56, score-0.581]
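To make (2) concrete, here is a minimal NumPy sketch of one identifier step; the toy data model (a fixed normal b with noise-free outputs y(t) = b⊤x(t)) and the step size µ = 1 are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def ngd_step(b_hat, x, y, mu=1.0):
    """One step of the normalized gradient identifier (2):
    b(t+1) = b(t) - mu * (b(t)^T x(t) - y(t)) * x(t) / (1 + mu * ||x(t)||^2)."""
    return b_hat - mu * (b_hat @ x - y) * x / (1.0 + mu * (x @ x))

# Assumed toy setup: random regressors are persistently exciting,
# so b_hat should converge to b_true exponentially fast.
rng = np.random.default_rng(0)
D = 5
b_true = rng.standard_normal(D)
b_hat = np.zeros(D)
for t in range(2000):
    x = rng.standard_normal(D)
    b_hat = ngd_step(b_hat, x, b_true @ x)
print(np.linalg.norm(b_hat - b_true))   # ~0 after enough steps
```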

27 As shown in [6], if the regressors {x(t)} are persistently exciting and the sequence {b(t+1) − b(t)} is L2-stable, i.e. [sent-63, score-0.311]

28 sup_{t≥1} ‖b(t+1) − b(t)‖₂ < ∞, then the normalized gradient recursive identifier (2) produces an estimate b̂(t) of b(t) such that {b(t) − b̂(t)} is L2-stable. [sent-65, score-0.38]

29 Let {x(t)} be a set of measurements lying in the moving hyperplane S(t) = {x ∈ R^D : b(t)⊤x = 0}. [sent-67, score-0.241]

30 Notice that the main difference between linear regression and hyperplane estimation is that in the latter case the parameter vector b(t) is constrained to lie on the unit sphere S^{D−1}. [sent-69, score-0.233]

31 Therefore, the update equation for the normalized gradient recursive identifier on the sphere is b̂(t+1) = b̂(t) cos(‖v(t)‖) + (v(t)/‖v(t)‖) sin(‖v(t)‖), (4) where the negative normalized gradient is computed as v(t) = −µ (I_D − b̂(t)b̂(t)⊤) (b̂(t)⊤x(t)) x(t) / (1 + µ‖x(t)‖²). (5) [sent-72, score-0.485]

32 Notice that the gradient on the sphere is essentially the same as the Euclidean gradient, except that it needs to be projected onto the subspace orthogonal to b̂(t) by the matrix I_D − b̂(t)b̂(t)⊤. [sent-73, score-0.254]
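A hedged NumPy sketch of the geodesic update (4)–(5) follows; the data (points sampled on a fixed hyperplane) and the convergence check up to the sign ambiguity ±b are illustrative assumptions.

```python
import numpy as np

def sphere_step(b_hat, x, mu=1.0):
    """Update (4)-(5): project the Euclidean gradient onto the tangent
    space at b_hat, then move along the geodesic on the unit sphere."""
    proj = np.eye(b_hat.size) - np.outer(b_hat, b_hat)      # I_D - b b^T
    v = -mu * proj @ ((b_hat @ x) * x) / (1.0 + mu * (x @ x))
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return b_hat                                         # no update direction
    return b_hat * np.cos(nv) + (v / nv) * np.sin(nv)        # geodesic step (4)

# Assumed toy run: measurements on a static hyperplane b_true^T x = 0.
rng = np.random.default_rng(1)
D = 4
b_true = rng.standard_normal(D); b_true /= np.linalg.norm(b_true)
b_hat = rng.standard_normal(D); b_hat /= np.linalg.norm(b_hat)
for t in range(5000):
    x = rng.standard_normal(D)
    x -= (b_true @ x) * b_true           # force x onto the hyperplane
    b_hat = sphere_step(b_hat, x)
print(min(np.linalg.norm(b_hat - b_true), np.linalg.norm(b_hat + b_true)))
```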

33 Under the persistence of excitation condition (6), if b(t) = b the identifier (4) is such that b̂(t) → b exponentially, while if {b(t+1) − b(t)} is L2-stable, so is {b(t) − b̂(t)}. [sent-75, score-0.204]

34 Section 3, Recursive segmentation of a known number of moving hyperplanes: In this section, we generalize the recursive identifier (4) and its stability properties to the case of N trajectories {x_i(t)}_{i=1}^N lying in n hyperplanes {S_j(t)}_{j=1}^n. [sent-76, score-1.849]

35 However, as we do not know the segmentation of the data, we do not know which data to use to update each one of the n identifiers. [sent-78, score-0.208]

36 In our approach, the n hyperplanes are represented with a single polynomial whose coefficients do not depend on the segmentation of the data. [sent-79, score-0.888]

37 Then there is a vector b_j(t) normal to S_j(t) such that b_j(t)⊤x(t) = 0. [sent-83, score-0.3]

38 Thus, the following homogeneous polynomial of degree n in D variables must vanish at x(t): p_n(x(t), t) = (b_1(t)⊤x(t))(b_2(t)⊤x(t)) ··· (b_n(t)⊤x(t)) = 0. (7) [sent-84, score-0.335]

39 This homogeneous polynomial can be written as a linear combination of all the monomials of degree n in x, x^I = x_1^{n_1} x_2^{n_2} ··· x_D^{n_D} with 0 ≤ n_k ≤ n for k = 1, …, D. [sent-85, score-0.238]
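As a sketch of this embedding (the Veronese map ν_n used by the identifier below), the following enumerates the M_n(D) monomials of degree n; the lexicographic ordering chosen by itertools is an assumption, not the paper's convention.

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(x, n):
    """All monomials of degree n in the entries of x; there are
    M_n(D) = C(n + D - 1, n) of them for x in R^D."""
    return np.array([np.prod(x[list(idx)])
                     for idx in combinations_with_replacement(range(len(x)), n)])

print(veronese(np.array([2.0, 3.0]), 2))   # [x1^2, x1*x2, x2^2] -> [4. 6. 9.]
```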

40 Since both the normal vectors and the coefficient vector are defined up to scale, we will assume that ‖b_j(t)‖ = ‖c(t)‖ = 1, without loss of generality. [sent-104, score-0.328]

41 Thanks to the polynomial equation (8), we now propose a new online hyperplane clustering algorithm that operates on the polynomial coefficients c(t), rather than on the normal vectors {b_j(t)}_{j=1}^n. [sent-106, score-0.831]

42 Since in practice we do not know the true polynomial coefficients c(t), and we estimate b̂(t) from ĉ(t), we need to show that both ĉ(t) and b̂(x(t)) track their true values in a stable fashion. [sent-116, score-0.336]

43 Consider the recursive identifier (11)–(13) and assume that the embedded regressors {ν_n(x_i(t))}_{i=1}^N are persistently exciting, i.e. [sent-120, score-0.529]

44 Furthermore, if a trajectory x(t) belongs to the jth hyperplane, then the corresponding b̂(x(t)) in (13) is such that {b_j(t) − b̂(x(t))} is L2-stable. [sent-124, score-0.267]

45 If in addition the hyperplanes are static, then ĉ(t) − c(t) → 0 and b_j(t) − b̂(x(t)) → 0 exponentially. [sent-125, score-0.668]

46 [Sketch only] When the hyperplanes are static, the exponential convergence of ĉ(t) to c follows with minor modifications from Theorem 2. [sent-127, score-0.468]

47 Since b_1, …, b_n are different, the polynomial c⊤ν_n(x) has no repeated factor. [sent-133, score-0.212]

48 Consider now the case in which the hyperplanes are moving. [sent-138, score-0.468]

49 Since S^{D−1} is compact, the sequences {b_j(t+1) − b_j(t)}_{j=1}^n are trivially L2-stable, hence so is the sequence {c(t+1) − c(t)}. [sent-139, score-0.243]

50 Theorem 1 provides us with a method for computing an estimate b̂(x_i(t)) for the normal to the hyperplane passing through each one of the N trajectories {x_i(t) ∈ R^D}_{i=1}^N at each time instant. [sent-142, score-0.533]

51 We do so by using a recursive version of the K-means algorithm, adapted to vectors on the unit sphere. [sent-144, score-0.286]

52 Essentially, at each t, we seek the normal vectors b̂_j(t) ∈ S^{D−1} and the memberships w_ij(t) ∈ {0, 1} of trajectory i to hyperplane j that maximize f({w_ij(t)}, {b̂_j(t)}) = Σ_{i=1}^N Σ_{j=1}^n w_ij(t) (b̂_j(t)⊤ b̂(x_i(t)))². [sent-145, score-0.724]

53 In order to obtain temporally coherent estimates of the normal vectors, we use the estimates at time t to initialize the iterations at time t + 1. [sent-148, score-0.216]

54 For each t ≥ 1, Step 1: Update the coefficients of the polynomial p_n(x(t), t) = ĉ(t)⊤ν_n(x(t)) using the recursive procedure ĉ(t+1) = ĉ(t) cos(‖v(t)‖) + (v(t)/‖v(t)‖) sin(‖v(t)‖), where v(t) = −µ (I_{M_n(D)} − ĉ(t)ĉ(t)⊤) [Σ_{i=1}^N (ĉ(t)⊤ν_n(x_i(t))) ν_n(x_i(t))/N] / (1 + µ Σ_{i=1}^N ‖ν_n(x_i(t))‖²/N). [sent-150, score-0.567]

55 Step 2: Solve for the normal vectors from the derivatives of p_n at the given trajectories: b̂(x_i(t)) = Dν_n(x_i(t))⊤ĉ(t) / ‖Dν_n(x_i(t))⊤ĉ(t)‖, i = 1, …, N. [sent-151, score-0.407]

56 Step 3(c): Iterate (a) and (b) until convergence of w_ij(t), and then set b̂_j(t+1) = b̂_j(t). [sent-167, score-0.452]
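Putting entries 54–56 together, here is a minimal NumPy sketch of one time step of the recursive hyperplane-clustering scheme; the monomial ordering, the fixed number of inner K-means iterations, and the sign alignment used when averaging normals on the sphere are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np
from itertools import combinations_with_replacement

def exponents(D, n):
    """Exponent vectors of the M_n(D) monomials of degree n in D variables."""
    E = []
    for idx in combinations_with_replacement(range(D), n):
        e = np.zeros(D, dtype=int)
        for k in idx:
            e[k] += 1
        E.append(e)
    return np.array(E)

def veronese(X, E):
    """Embed each row of X (N x D) as nu_n(x) (N x M_n(D))."""
    return np.prod(X[:, None, :] ** E[None, :, :], axis=2)

def poly_grad(X, E, c):
    """Gradient of p_n(x) = c^T nu_n(x) at each row of X."""
    G = np.zeros_like(X, dtype=float)
    for d in range(X.shape[1]):
        Ed = E.copy()
        Ed[:, d] = np.maximum(Ed[:, d] - 1, 0)
        # d/dx_d of c_I x^E is c_I * E_d * x^(E - e_d); terms with E_d = 0 vanish
        G[:, d] = np.prod(X[:, None, :] ** Ed[None, :, :], axis=2) @ (c * E[:, d])
    return G

def algorithm1_step(X, c_hat, B_hat, E, mu=1.0, kmeans_iters=10):
    """One time step: (1) sphere update of c_hat as in the recursion above,
    (2) normals from the derivatives of p_n, (3) warm-started spherical K-means."""
    V = veronese(X, E)
    g = (V * (V @ c_hat)[:, None]).mean(axis=0)
    v = -mu * (g - (c_hat @ g) * c_hat) / (1.0 + mu * (V ** 2).sum(axis=1).mean())
    nv = np.linalg.norm(v)
    if nv > 1e-12:
        c_hat = c_hat * np.cos(nv) + (v / nv) * np.sin(nv)
    N_hat = poly_grad(X, E, c_hat)
    N_hat /= np.linalg.norm(N_hat, axis=1, keepdims=True)
    w = None
    for _ in range(kmeans_iters):
        w = np.argmax((N_hat @ B_hat.T) ** 2, axis=1)         # memberships w_ij
        for j in range(B_hat.shape[0]):
            if np.any(w == j):
                Nj = N_hat[w == j]
                Nj = Nj * np.sign(Nj @ B_hat[j])[:, None]     # resolve +-b ambiguity
                m = Nj.mean(axis=0)
                B_hat[j] = m / np.linalg.norm(m)
    return c_hat, B_hat, w
```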

57 Section 4, Recursive segmentation of a variable number of moving hyperplanes: In the previous section, we proposed a recursive algorithm for segmenting n moving hyperplanes under the assumption that n is known and constant in time. [sent-168, score-1.822]

58 However, in many practical situations the number of hyperplanes may be unknown and time varying. [sent-169, score-0.502]

59 For example, the number of moving objects in a video sequence may change due to objects entering or leaving the camera field of view. [sent-170, score-0.298]

60 In this section, we consider the problem of segmenting a variable number of moving hyperplanes. [sent-171, score-0.276]

61 We denote by n(t) ∈ N the number of hyperplanes at time t and assume we are given an upper bound n ≥ n(t). [sent-172, score-0.502]

62 We show that if we apply Algorithm 1 with the number of hyperplanes set to n, then we can still recover the correct segmentation of the scene, even if n(t) < n. [sent-173, score-0.676]

63 Since the condition on the right hand side of (15) holds trivially when the regressors x_i(t) are bounded, the only important condition is the one on the left hand side. [sent-175, score-0.339]

64 Notice that the condition on the left hand side implies that the spatial-temporal covariance matrix of the embedded regressors must be of rank M_n(D) − 1 in any time window of size S, for some integer S. [sent-176, score-0.299]
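One hedged way to monitor this condition in practice is to stack the embedded regressors over a sliding window and check the rank of their covariance; the tolerance below is an assumption.

```python
import numpy as np

def excitation_rank(V_window, tol=1e-8):
    """Rank of the spatial-temporal covariance of the embedded regressors
    nu_n(x_i(t)) collected over a window; the condition discussed above
    asks for rank M_n(D) - 1 (the one missing direction is c(t) itself,
    since c(t)^T nu_n(x_i(t)) = 0 on the data)."""
    V = np.vstack(V_window)             # one row per nu_n(x_i(t)) in the window
    cov = V.T @ V / V.shape[0]
    return np.linalg.matrix_rank(cov, tol=tol)
```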

65 In this case, at each time instant we draw data from all n hyperplanes and the data is rich enough to estimate all n hyperplanes at each time instant. [sent-179, score-1.109]

66 As time proceeds, however, the data must be persistently drawn from at least n hyperplanes in order for (18) to hold. [sent-187, score-0.606]

67 This can be achieved either by having n different static hyperplanes and persistently drawing data from all of them, or by having fewer than n moving hyperplanes whose motion is rich enough that (18) holds. [sent-188, score-1.364]

68 In summary, as long as the embedded regressors satisfy condition (15) for some upper bound n on the number of hyperplanes, the recursive identifier (11)–(13) will still provide L2-stable estimates of the parameters, even if the number of hyperplanes is unknown and variable, and n(t) < n for all t. [sent-189, score-0.962]

69 Since the true segmentation is known, we compute the vectors {b_j(t)} normal to each plane, and use them to generate the vector of coefficients c(t). [sent-195, score-0.336]

70 We also consider the error of the polynomial coefficients and the normal vectors by computing the angles between the estimated and true values. [sent-198, score-0.34]

71 62° for the normals, and 4% for the segmentation error. [sent-202, score-0.208]

72 [Figure: true polynomial coefficients, estimated polynomial coefficients, and estimation error of the polynomial (degrees); axis ticks omitted.] [sent-203, score-0.265]

73 We now apply our algorithm to the problem of segmenting video sequences of dynamic textures, i.e. [sent-213, score-0.25]

74 Since the trajectories of the output of a linear dynamical system live in the so-called observability subspace, the intensity trajectories of pixels associated with a single dynamic texture lie in a subspace. [sent-219, score-0.392]

75 Therefore, the set of all intensity trajectories lies in multiple subspaces, one per dynamic texture. [sent-220, score-0.212]

76 Given γ consecutive frames of a video sequence {I(f)}_{f=t−γ+1}^t, we interpret the data as a matrix W(t) ∈ R^{N×3γ}, where N is the number of pixels, and 3 corresponds to the three RGB color channels. [sent-221, score-0.23]

77 We obtain a data point x_i(t) ∈ R^D from image I(t) by projecting the ith row of W(t), w_i(t)⊤, onto a subspace of dimension D, i.e. [sent-222, score-0.22]
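A sketch of this preprocessing under assumed conventions: frames is a (γ, height, width, 3) RGB array, the rows of W(t) are centered, and the D-dimensional projection basis is taken from an SVD of the first window and then held fixed, as the experiments below describe.

```python
import numpy as np

def pixel_trajectories(frames, D=5):
    """Stack gamma RGB frames into W in R^(N x 3*gamma), one row per pixel,
    and project each row onto the top-D principal directions."""
    g, h, w, _ = frames.shape
    W = frames.transpose(1, 2, 0, 3).reshape(h * w, 3 * g)   # N = h*w pixels
    W = W - W.mean(axis=0)                                   # centering (assumed)
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    P = Vt[:D].T                                             # fixed projection basis
    return W @ P, P                                          # x_i(t) in R^D

# Assumed usage on the first gamma = 5 frames of a float video array `video`:
# X, P = pixel_trajectories(video[:5].astype(float), D=5)
```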

78 We applied our method to a sequence (110 × 192, 130 frames) containing a bird floating on water while rotating around a fixed point. [sent-228, score-0.338]

79 The task is to segment the bird’s rigid motion from the water’s dynamic texture, while at the same time tracking the motion of the bird. [sent-229, score-0.303]

80 We chose D = 5 principal components of the first γ = 5 frames of the RGB video sequence to project each frame onto a lower-dimensional space. [sent-230, score-0.357]

81 Although the convergence is not guaranteed with only 130 frames, it is clear that the polynomial coefficients already capture the periodicity of the motion. [sent-232, score-0.212]

82 As shown in the last row of Figure 2, some coefficients of the polynomial oscillate in time. [sent-233, score-0.212]

83 One can notice that the orientation of the bird is related to the value of the coefficient c_8. [sent-234, score-0.364]

84 If the bird is facing right, showing its right side, the value of c_8 reaches a local maximum. [sent-235, score-0.295]

85 Conversely, if the bird is oriented to the left, the value of c_8 reaches a local minimum. [sent-236, score-0.295]

86 One can distinguish three behaviors for the polynomial coefficients: oscillations, pseudo-oscillations, or quasi-linearity. [sent-238, score-0.212]

87 This example shows that the coefficients of the estimated polynomial give useful information about the scene motion. [sent-240, score-0.247]

88 [Figure 2: Segmenting a bird floating on water. Time axes (seconds) omitted.] [sent-256, score-0.295]

89 To test the performance of our method on a video sequence with a variable number of motions, we extracted a sub-clip of the bird sequence (55 × 192, 130 frames) in which the camera moves up at 1 pixel/frame until the bird disappears at t = 51. [sent-260, score-0.847]

90 The camera stays stationary from t = 56 to t = 66, and then moves down at 1 pixel/frame; the bird reappears at t = 76. [sent-261, score-0.368]

91 For our method we chose D = 5 principal components of the first γ = 5 frames of the RGB video sequence to project each frame onto a fixed lower-dimensional space. [sent-264, score-0.357]

92 We set the parameter of the recursive algorithm to µ = 1. [sent-265, score-0.258]

93 Notice that both methods give excellent results during the first few frames, when both the bird and the water are present. [sent-267, score-0.361]

94 Nevertheless, notice that the performance of GPCA deteriorates dramatically when the bird disappears, because GPCA overestimates the number of hyperplanes, whereas our method is robust to this change and keeps segmenting the scene correctly, i.e. [sent-269, score-0.506]

95 When the bird reappears, our method detects it correctly from the first frame, whereas GPCA produces a wrong segmentation for the first few frames after the reappearance. [sent-272, score-1.237]

96 In addition, the fixed projection and the recursive estimation of the polynomial coefficients make our method much faster than GPCA. [sent-275, score-0.47]

97 [Figure 3: Segmenting a video sequence with a variable number of dynamic textures. Rows labeled Sequence, GPCA, and Our method.] [sent-276, score-0.211]

98 Section 6, Conclusions: We have proposed a simple recursive algorithm for segmenting trajectories lying in a variable number of moving hyperplanes. [sent-280, score-0.778]

99 The algorithm updates the coefficients of a polynomial whose derivatives give the normals to the moving hyperplanes as well as the segmentation of the trajectories. [sent-281, score-1.146]

100 We applied our method successfully to the segmentation of videos containing multiple dynamic textures. [sent-282, score-0.273]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hyperplanes', 0.468), ('bird', 0.295), ('recursive', 0.258), ('gpca', 0.225), ('polynomial', 0.212), ('segmentation', 0.208), ('bj', 0.2), ('hyperplane', 0.192), ('trajectories', 0.147), ('moving', 0.144), ('regressors', 0.126), ('coef', 0.117), ('rd', 0.109), ('frames', 0.109), ('segmenting', 0.107), ('persistently', 0.104), ('normal', 0.1), ('imn', 0.1), ('persistence', 0.1), ('pn', 0.097), ('lying', 0.097), ('cients', 0.095), ('xi', 0.094), ('subspace', 0.083), ('id', 0.082), ('seconds', 0.08), ('sd', 0.079), ('normals', 0.079), ('video', 0.078), ('motion', 0.076), ('subspaces', 0.07), ('notice', 0.069), ('identi', 0.067), ('trajectory', 0.067), ('water', 0.066), ('dynamic', 0.065), ('clustering', 0.064), ('sj', 0.064), ('motions', 0.061), ('ekn', 0.06), ('excitation', 0.059), ('stability', 0.059), ('static', 0.057), ('gradient', 0.054), ('pb', 0.052), ('oating', 0.052), ('rigid', 0.052), ('vidal', 0.052), ('mn', 0.052), ('wij', 0.052), ('principal', 0.049), ('pc', 0.048), ('rich', 0.047), ('en', 0.047), ('condition', 0.045), ('sin', 0.044), ('er', 0.043), ('onto', 0.043), ('sequence', 0.043), ('cos', 0.042), ('textures', 0.042), ('sphere', 0.041), ('embedded', 0.041), ('reappears', 0.04), ('xnd', 0.04), ('rgb', 0.04), ('normalized', 0.039), ('exciting', 0.038), ('bases', 0.038), ('derivatives', 0.035), ('scene', 0.035), ('frame', 0.035), ('disappears', 0.035), ('time', 0.034), ('projected', 0.033), ('seek', 0.033), ('camera', 0.033), ('texture', 0.033), ('coefficients', 0.032), ('rmn', 0.032), ('passing', 0.031), ('planes', 0.029), ('dj', 0.029), ('instant', 0.029), ('estimate', 0.029), ('side', 0.029), ('vectors', 0.028), ('evolution', 0.027), ('polynomials', 0.026), ('homogeneous', 0.026), ('varying', 0.026), ('temporal', 0.026), ('oscillations', 0.025), ('variable', 0.025), ('proceeds', 0.025), ('geodesic', 0.024), ('theorem', 0.024), ('estimates', 0.024), ('window', 0.024), ('operates', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 153 nips-2006-Online Clustering of Moving Hyperplanes

Author: René Vidal

Abstract: We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene.

2 0.13220705 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes. 1

3 0.11613467 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

Author: Oliver Williams

Abstract: This paper describes a Gaussian process framework for inferring pixel-wise disparity and bi-layer segmentation of a scene given a stereo pair of images. The Gaussian process covariance is parameterized by a foreground-background-occlusion segmentation label to model both smooth regions and discontinuities. As such, we call our model a switched Gaussian process. We propose a greedy incremental algorithm for adding observations from the data and assigning segmentation labels. Two observation schedules are proposed: the first treats scanlines as independent, the second uses an active learning criterion to select a sparse subset of points to measure. We show that this probabilistic framework has comparable performance to the state-of-the-art. 1

4 0.11597763 75 nips-2006-Efficient sparse coding algorithms

Author: Honglak Lee, Alexis Battle, Rajat Raina, Andrew Y. Ng

Abstract: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1 -regularized least squares problem and an L2 -constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. 1

5 0.082482733 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

Author: Graham W. Taylor, Geoffrey E. Hinton, Sam T. Roweis

Abstract: We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued “visible” variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/

6 0.081810154 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

7 0.080277443 80 nips-2006-Fundamental Limitations of Spectral Clustering

8 0.068514608 45 nips-2006-Blind Motion Deblurring Using Image Statistics

9 0.066997588 200 nips-2006-Unsupervised Regression with Applications to Nonlinear System Identification

10 0.063343272 65 nips-2006-Denoising and Dimension Reduction in Feature Space

11 0.06327416 120 nips-2006-Learning to Traverse Image Manifolds

12 0.062632546 31 nips-2006-Analysis of Contour Motions

13 0.060519304 128 nips-2006-Manifold Denoising

14 0.058927443 7 nips-2006-A Local Learning Approach for Clustering

15 0.058196194 84 nips-2006-Generalized Regularized Least-Squares Learning with Predefined Features in a Hilbert Space

16 0.056841895 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

17 0.055999741 46 nips-2006-Blind source separation for over-determined delayed mixtures

18 0.055898573 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

19 0.05557676 167 nips-2006-Recursive ICA

20 0.052382831 83 nips-2006-Generalized Maximum Margin Clustering and Unsupervised Kernel Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.182), (1, 0.026), (2, 0.042), (3, 0.04), (4, -0.007), (5, -0.064), (6, -0.019), (7, -0.054), (8, 0.043), (9, 0.172), (10, 0.004), (11, -0.059), (12, -0.043), (13, 0.04), (14, -0.043), (15, 0.075), (16, -0.042), (17, -0.047), (18, 0.031), (19, 0.112), (20, -0.063), (21, 0.078), (22, 0.042), (23, -0.006), (24, 0.072), (25, -0.122), (26, 0.004), (27, -0.088), (28, -0.014), (29, 0.055), (30, -0.033), (31, -0.126), (32, 0.063), (33, -0.073), (34, 0.102), (35, 0.104), (36, -0.121), (37, 0.126), (38, 0.061), (39, -0.05), (40, 0.159), (41, 0.007), (42, 0.063), (43, 0.019), (44, -0.113), (45, -0.023), (46, -0.048), (47, 0.072), (48, 0.016), (49, 0.104)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95523113 153 nips-2006-Online Clustering of Moving Hyperplanes

Author: René Vidal

Abstract: We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene.

2 0.56683564 75 nips-2006-Efficient sparse coding algorithms

Author: Honglak Lee, Alexis Battle, Rajat Raina, Andrew Y. Ng

Abstract: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1 -regularized least squares problem and an L2 -constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. 1

3 0.53533816 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

Author: Oliver Williams

Abstract: This paper describes a Gaussian process framework for inferring pixel-wise disparity and bi-layer segmentation of a scene given a stereo pair of images. The Gaussian process covariance is parameterized by a foreground-background-occlusion segmentation label to model both smooth regions and discontinuities. As such, we call our model a switched Gaussian process. We propose a greedy incremental algorithm for adding observations from the data and assigning segmentation labels. Two observation schedules are proposed: the first treats scanlines as independent, the second uses an active learning criterion to select a sparse subset of points to measure. We show that this probabilistic framework has comparable performance to the state-of-the-art. 1

4 0.49312681 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes. 1

5 0.48560908 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

Author: Lorenzo Torresani, Peggy Hackney, Christoph Bregler

Abstract: This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and it can extrapolate styles not originally included in the training data. 1

6 0.41579157 31 nips-2006-Analysis of Contour Motions

7 0.40786722 95 nips-2006-Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

8 0.35729399 140 nips-2006-Multiple Instance Learning for Computer Aided Diagnosis

9 0.33997267 25 nips-2006-An Application of Reinforcement Learning to Aerobatic Helicopter Flight

10 0.33833122 181 nips-2006-Stability of $K$-Means Clustering

11 0.33792272 182 nips-2006-Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

12 0.32860419 6 nips-2006-A Kernel Subspace Method by Stochastic Realization for Learning Nonlinear Dynamical Systems

13 0.32786867 179 nips-2006-Sparse Representation for Signal Classification

14 0.32375297 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

15 0.31805786 45 nips-2006-Blind Motion Deblurring Using Image Statistics

16 0.31710908 129 nips-2006-Map-Reduce for Machine Learning on Multicore

17 0.31304047 174 nips-2006-Similarity by Composition

18 0.31046 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

19 0.31033552 194 nips-2006-Towards a general independent subspace analysis

20 0.30963022 170 nips-2006-Robotic Grasping of Novel Objects


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.085), (3, 0.076), (7, 0.066), (8, 0.343), (9, 0.05), (22, 0.089), (44, 0.069), (57, 0.052), (65, 0.024), (69, 0.031), (71, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78985089 153 nips-2006-Online Clustering of Moving Hyperplanes

Author: René Vidal

Abstract: We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene.

2 0.77540022 202 nips-2006-iLSTD: Eligibility Traces and Convergence Analysis

Author: Alborz Geramifard, Michael Bowling, Martin Zinkevich, Richard S. Sutton

Abstract: We present new theoretical and empirical results with the iLSTD algorithm for policy evaluation in reinforcement learning with linear function approximation. iLSTD is an incremental method for achieving results similar to LSTD, the data-efficient, least-squares version of temporal difference learning, without incurring the full cost of the LSTD computation. LSTD is O(n²), where n is the number of parameters in the linear function approximator, while iLSTD is O(n). In this paper, we generalize the previous iLSTD algorithm and present three new results: (1) the first convergence proof for an iLSTD algorithm; (2) an extension to incorporate eligibility traces without changing the asymptotic computational complexity; and (3) the first empirical results with an iLSTD algorithm for a problem (mountain car) with feature vectors large enough (n = 10,000) to show substantial computational advantages over LSTD. 1

3 0.6823374 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

Author: Graham W. Taylor, Geoffrey E. Hinton, Sam T. Roweis

Abstract: We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued “visible” variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/

4 0.48188812 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

Author: Dong S. Cheng, Vittorio Murino, Mário Figueiredo

Abstract: This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes. 1

5 0.47768724 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

Author: Lorenzo Torresani, Peggy Hackney, Christoph Bregler

Abstract: This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and it can extrapolate styles not originally included in the training data. 1

6 0.45956728 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

7 0.45295724 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

8 0.4523544 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

9 0.45105952 45 nips-2006-Blind Motion Deblurring Using Image Statistics

10 0.44974414 67 nips-2006-Differential Entropic Clustering of Multivariate Gaussians

11 0.44908226 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

12 0.44752064 175 nips-2006-Simplifying Mixture Models through Function Approximation

13 0.4465867 20 nips-2006-Active learning for misspecified generalized linear models

14 0.44563115 83 nips-2006-Generalized Maximum Margin Clustering and Unsupervised Kernel Learning

15 0.44301754 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments

16 0.44128281 128 nips-2006-Manifold Denoising

17 0.44120884 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition

18 0.44050011 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

19 0.44023275 65 nips-2006-Denoising and Dimension Reduction in Feature Space

20 0.43978772 104 nips-2006-Large-Scale Sparsified Manifold Regularization