nips nips2006 nips2006-31 knowledge-graph by maker-knowledge-mining

31 nips-2006-Analysis of Contour Motions


Source: pdf

Author: Ce Liu, William T. Freeman, Edward H. Adelson

Abstract: A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. [sent-7, score-0.527]

2 It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. [sent-8, score-0.508]

3 We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. [sent-9, score-1.182]

4 Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. [sent-10, score-0.851]

5 The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. [sent-11, score-0.875]

6 We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. [sent-13, score-1.061]

7 The system produces good motion estimates along with properly grouped and completed contours. [sent-15, score-0.458]

8 1 Introduction Humans can reliably analyze visual motion under a diverse set of conditions, including textured as well as featureless objects. [sent-16, score-0.428]

9 One approach to handling the spurious motions of corners or T-junctions has been to detect such junctions and remove them from the motion analysis [18, 12]. [sent-28, score-0.662]

10 The boundary motion is typically analyzed locally. Figure 1: Illustration of the spurious T-junction motion. [sent-31, score-0.794]

11 (c) Using the boundary-based representation, our system is able to correctly estimate the motion and generate the illusory boundary as well. [sent-34, score-0.870]

12 In this paper, we use a boundary-based approach which does not rely on motion estimates at corners or junctions. [sent-37, score-0.386]

13 We develop a graphical model which integrates local information and assigns probabilities to candidate contour groupings in order to favor motion interpretations corresponding to the motions of the underlying objects. [sent-38, score-0.778]

14 Boundary completion and discounting the motions of spurious features result from optimizing the graphical model states to explain the contours and their motions. [sent-39, score-0.549]

15 Our system is able to automatically detect and group the boundary fragments, analyze the motion correctly, as well as exploit both static and dynamic cues to synthesize the illusory boundaries (c). [sent-40, score-1.049]

16 We represent the boundaries at three levels of grouping: edgelets, boundary fragments and contours, where a fragment is a chain of edgelets and a contour is a chain of fragments. [sent-41, score-1.574]

17 Each edgelet within a boundary fragment has a position and an orientation and carries local evidence for motion. [sent-42, score-1.055]

18 The main task of our model is then to group the boundary fragments into contours so that the local motion uncertainties associated with the edgelets are disambiguated and occlusion or other spurious feature events are properly explained. [sent-43, score-1.748]

19 The result is a specialized motion tracking algorithm that properly analyzes the motions of textureless objects. [sent-44, score-0.592]

20 Our system consists of four conceptual steps, discussed over the next three sections (the last two steps happen together while finding the optimal states in the graphical model): (a) Boundary fragment extraction: Boundary fragments are detected in the first frame. [sent-45, score-0.794]

21 (b) Edgelet tracking with uncertainties: Boundary fragments are broken into edgelets, and, based on local evidence, the probability distribution is found for the motion of each edgelet of each boundary fragment. [sent-46, score-1.265]

22 (c) Grouping boundary fragments into contours: Boundary fragments are grouped, using both temporal and spatial cues. [sent-47, score-0.92]

23 (d) Motion estimation: The final fragment groupings disambiguate motion uncertainties and specify the final inferred motions. [sent-48, score-0.868]

24 We use a simple algorithm for boundary extraction, analyzing oriented energy using steerable filters [3] and tracking the boundary in a manner similar to that of the Canny edge detector [2]. [sent-51, score-0.826]

25 A more sophisticated boundary detector can be found in [8]; occluding boundaries can also be detected using special cameras [13]. [sent-52, score-0.562]

26 However, for our motion algorithm designed to handle the special case of textureless objects, we find that our simple boundary detection algorithm works well. [sent-53, score-0.749]

27 Mathematically, given an image $I$, we seek to obtain a set of fragments $B = \{b_i\}$, where each fragment $b_i$ is a chain of edgelets $b_i = \{e_{ik}\}_{k=1}^{n_i}$. [sent-54, score-0.972]

28 Each edgelet $e_{ik} = \{p_{ik}, \theta_{ik}\}$ is a particle which embeds both location $p_{ik} \in \mathbb{R}^2$ and orientation $\theta_{ik} \in [0, 2\pi)$ information. [sent-55, score-0.602]
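
This per-edgelet representation can be sketched as plain data structures; the class and field names below are our own illustrations, not the paper's.

```python
# A minimal sketch of the boundary representation: a fragment b_i is an
# ordered chain of edgelets, and each edgelet e_ik = {p_ik, theta_ik} carries
# a 2-D position and an orientation in [0, 2*pi). Names are illustrative only.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Edgelet:
    x: float        # position p_ik in R^2
    y: float
    theta: float    # orientation theta_ik

    def __post_init__(self):
        self.theta %= 2 * math.pi   # wrap into the canonical range [0, 2*pi)

@dataclass
class Fragment:
    edgelets: List[Edgelet]         # the chain e_i1, ..., e_i,n_i

    def ends(self):
        """Return the two end edgelets of the chain (the candidate
        connection points when grouping fragments into contours)."""
        return self.edgelets[0], self.edgelets[-1]

b = Fragment([Edgelet(0.0, 0.0, 0.0), Edgelet(1.0, 0.0, 7.0)])
```

A contour is then simply an ordered list of such fragments; the grouping stage decides which fragment ends to join.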

29 Figure 2: The local motion vector is estimated for each contour in isolation by selectively comparing orientation energies across frames. [sent-56, score-0.694]

30 (a) A T-junction of the two-bar example showing the contour orientation for this motion analysis. [sent-57, score-0.687]

31 (c) The relevant orientation energy along the boundary fragment, both for the 2nd frame. [sent-59, score-0.481]

32 The possible contour motions are unaffected by the occluding contour at a different orientation and no spurious motion is detected at this junction. [sent-62, score-1.22]

33 If that is true and the maximum energy is above a threshold $T_1$, we call this point a primary boundary point. [sent-66, score-0.414]

34 We collect a pool of primary boundary points after running this test for all the pixels. [sent-67, score-0.386]

35 We find the primary boundary point with the maximum orientation energy from the pool and do bidirectional contour tracking, consisting of prediction and projection steps. [sent-68, score-0.777]
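A rough sketch of this seed-selection step, under the assumption that a per-pixel orientation-energy map has already been computed (the threshold name `T1` follows the text; everything else is our own naming):

```python
# Collect the pool of primary boundary points (pixels whose maximal
# orientation energy exceeds T1) and the strongest seed to start tracking.
import numpy as np

def primary_points(energy, T1):
    """energy : 2-D array of per-pixel maximal orientation energy.
    Returns the pool of primary boundary points and the maximum-energy seed."""
    pool = [tuple(p) for p in zip(*np.where(energy > T1))]
    seed = max(pool, key=lambda p: energy[p]) if pool else None
    return pool, seed

E = np.array([[0.0, 0.2],
              [0.9, 0.5]])
pool, seed = primary_points(E, T1=0.4)
# tracking would start bidirectionally from the strongest pixel, seed (1, 0)
```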

36 3 Edgelet Tracking with Uncertainties We next break the boundary contours into very short edgelets and obtain the probabilities, based on local motion of the boundary fragment, for the motion vector at each edgelet. [sent-77, score-1.809]

37 We cannot use conventional algorithms, such as Lucas-Kanade [5], for local motion estimation since they rely on corners. [sent-78, score-0.39]

38 The orientation $\theta_{ik}$ for each edgelet was obtained during boundary fragment extraction. [sent-79, score-1.115]

39 We obtain the motion vector by finding the spatial offsets of the edgelet that match the orientation energy along the boundary fragment in this orientation. [sent-80, score-1.427]
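As a toy illustration (our own simplification, not the paper's exact matching criterion), candidate spatial offsets of an edgelet can be scored by the orientation energy at the displaced position in the second frame:

```python
# Score candidate displacements of an edgelet against the second frame's
# orientation-energy map and keep the best one.
import numpy as np

def best_offset(energy2, p, offsets):
    """Return the candidate displacement (dr, dc) maximizing the
    second-frame orientation energy at the displaced edgelet position."""
    return max(offsets, key=lambda o: energy2[p[0] + o[0], p[1] + o[1]])

energy2 = np.zeros((5, 5))
energy2[2, 3] = 1.0                     # strong oriented response at (2, 3)
v = best_offset(energy2, (2, 2), [(0, 0), (0, 1), (1, 0)])
# the edgelet at (2, 2) is matched one pixel to the right: v == (0, 1)
```

In the paper, a full probability distribution over offsets is kept rather than a single argmax, so that the uncertainty can be resolved later by grouping.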

40 Grouping the boundary fragments allows the motion uncertainties to be resolved. [sent-84, score-1.043]

41 1 Two Equivalent Representations for Fragment Grouping The essential part of our model is to find the connection between the boundary fragments. [sent-87, score-0.39]

42 One representation is the connection of each end of the boundary fragment. [sent-89, score-0.439]

43 Similar local and global modeling of contour saliency was proposed in [14]; in [7], both edge saliency and curvilinear continuity were used to extract closed contours from static images. [sent-94, score-0.745]

44 The connections between fragment ends are modeled by switch variables. [sent-96, score-0.617]

45 For each boundary fragment $b_i$, we use a binary variable $t_i \in \{0, 1\}$ to denote the two ends of the fragment, i.e. $b_i^{(0)}$ and $b_i^{(1)}$. [sent-97, score-0.478]

46 Let switch variable $S(i, t_i) = (j, t_j)$ denote the connection from $b_i^{(t_i)}$ to $b_j^{(t_j)}$. [sent-100, score-0.489]

47 Figure 3: A simple example illustrating switch variables, reversibility and fragment chains. [sent-101, score-0.568]

48 Figures (d) and (e) show two of the legal contour groupings for the boundary fragments: two open contours and a closed loop contour. [sent-111, score-0.819]

49 Each end of the fragment should either connect to one end of the other fragment, or simply have no connection. [sent-114, score-0.551]

50 If $S(i, t_i) = (j, t_j)$, then $S(j, t_j) = (i, t_i)$; in a more compact form, $S(S(i, t_i)) = (i, t_i)$. (1) [sent-117, score-1.028]

51 When there is no connection to $b_i^{(t_i)}$, we simply set $S(i, t_i) = (i, t_i)$. [sent-118, score-0.518]

52 From the values of the switch variables we can obtain contours, which are chains of boundary fragments. [sent-124, score-0.526]

53 A fragment chain is defined as a series of end points $c = \{(b_{i_1}^{(x_1)}, b_{i_1}^{(\bar{x}_1)}), \cdots, (b_{i_m}^{(x_m)}, b_{i_m}^{(\bar{x}_m)})\}$. [sent-125, score-0.519]

54 The chain is specified by fragment labels $\{i_1, \cdots, i_m\}$ and end labels $\{x_1, \cdots, x_m\}$. [sent-126, score-0.488]

55 Two open chains are identical if the fragment and end labels are reversed. [sent-131, score-0.51]
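
Since a chain and its reversal denote the same contour, comparisons should go through a direction-invariant key. A minimal sketch (it reverses only the order of the pairs, a simplification of the paper's fragment-and-end-label reversal):

```python
def canonical(chain):
    """Return a direction-invariant key for an open fragment chain given
    as a list of (fragment_label, end_label) pairs: the lexicographically
    smaller of the chain and its reversal."""
    fwd = tuple(chain)
    return min(fwd, tuple(reversed(fwd)))

a = [(1, 0), (2, 1), (3, 0)]
assert canonical(a) == canonical(list(reversed(a)))
```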

56 2 The Graphical Model Given the observation $O$ (the two images) and the boundary fragments $B$, we want to estimate the flow vectors $V = \{v_i\}$, $v_i = \{v_{ik}\}$, where each $v_{ik}$ is associated with edgelet $e_{ik}$, and the grouping variables $S$ (switches) or equivalently $C$ (fragment chains). [sent-136, score-1.081]

57 1 The Graph for Boundary Fragment Grouping We use two equivalent representations for boundary grouping, switch variables and chains. [sent-140, score-0.46]

58 We use $\delta[S(S(i, t_i)) - (i, t_i)]$ for each end to enforce the reversibility. [sent-141, score-0.484]

59 Intuitively, two ends should be connected if: (Motion similarity) the distributions of the motion of the two end edgelets are similar; (Curve smoothness) the illusory boundary connecting the two ends is smooth; (Contrast consistency) the brightness contrast at the two ends is consistent. [sent-147, score-1.414]

60 (c) An illusory boundary γ is generated by minimizing the energy of the curve. [sent-153, score-0.539]

61 The second term is the local saliency measure on the illusory boundary γ that connects the two ends. [sent-159, score-0.656]

62 The illusory boundary is simply generated by minimizing the energy of the curve. [sent-160, score-0.539]

63 For self-connection we simply set a constant value: $\lambda(S(i, t_i) = (i, t_i)) = \tau$. [sent-166, score-0.470]

64 It was discovered in [10] that convex occluding contours are more salient, and additional T-junctions along the contour may increase or decrease the occlusion perception. [sent-168, score-0.614]

65 Thus, the (discrete) graphical model favoring the desired fragment grouping is $\Pr(S; B, O) = \frac{1}{Z_S} \prod_{i=1}^{N} \prod_{t_i=0}^{1} \lambda(S(i, t_i); B, O)\, \delta[S(S(i, t_i)) - (i, t_i)] \cdot \prod_{j=1}^{M} \psi(c_j; B, O)$, (5) where $Z_S$ is a normalization constant. [sent-171, score-1.406]

66 Note that this model measures both the switch variables S(i, ti ) for local saliency and the fragment chains c i to enforce global structural saliency. [sent-172, score-0.987]

67 2 Gaussian MRF on Flow Vectors Given the fragment grouping, we model the flow vectors V as a Gaussian Markov random field (GMRF). [sent-175, score-0.395]

68 The edgelet displacement within each boundary fragment should be smooth and match the observation along the fragment. [sent-176, score-0.938]

69 The probability density is formulated as $\varphi(v_i; b_i) = \exp\{-\sum_{k=1}^{n_i} (v_{ik} - \mu_{ik})^{\top} \Sigma_{ik}^{-1} (v_{ik} - \mu_{ik})\} \exp\{-\frac{1}{2\sigma^2} \sum_{k=1}^{n_i - 1} \|v_{ik} - v_{i,k+1}\|^2\}$, (6) where $\mu_{ik}$ and $\Sigma_{ik}$ are the motion parameters of each edgelet estimated in Sect 3. [sent-177, score-0.803]
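
The two factors of Eq. (6), a data term pulling each edgelet flow toward its locally estimated Gaussian and a smoothness term on consecutive edgelets, can be sketched as an unnormalized log-density (function and variable names are our own):

```python
# Unnormalized log of the per-fragment flow prior: data term + smoothness.
import numpy as np

def log_varphi(v, mu, Sigma, sigma):
    """v, mu : (n, 2) arrays of flow vectors and local mean estimates.
    Sigma : (n, 2, 2) local covariances; sigma : smoothness scale."""
    d = v - mu
    data = sum(float(di @ np.linalg.inv(Si) @ di) for di, Si in zip(d, Sigma))
    smooth = float(np.sum((v[:-1] - v[1:]) ** 2)) / (2 * sigma ** 2)
    return -data - smooth

n = 3
v = mu = np.zeros((n, 2))
Sigma = np.stack([np.eye(2)] * n)
# a constant flow equal to the local means incurs zero penalty
assert log_varphi(v, mu, Sigma, sigma=1.0) == 0.0
```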

70 We use $V(i, t_i)$ to denote the flow vector of end $t_i$ of fragment $b_i$. [sent-178, score-0.651]

71 We define $V(S(i, t_i)) = V(j, t_j)$ if $S(i, t_i) = (j, t_j)$. [sent-179, score-0.614]

72 Intuitively, the flow vectors of the two ends should be similar if they are connected, or mathematically $\phi(V(i, t_i), V(S(i, t_i))) = 1$ if $S(i, t_i) = (i, t_i)$, and $\exp\{-\frac{1}{2\sigma^2} \|V(i, t_i) - V(S(i, t_i))\|^2\}$ otherwise. (7) [sent-180, score-1.338]
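
Eq. (7) can be written directly as a small compatibility function; a sketch with illustrative names:

```python
# Compatibility of the flow vectors at two ends: unconnected ends contribute
# 1; connected ends pay a Gaussian penalty on the difference of their flows.
import math

def phi(vA, vB, connected, sigma):
    if not connected:
        return 1.0
    d2 = sum((a - b) ** 2 for a, b in zip(vA, vB))
    return math.exp(-d2 / (2 * sigma ** 2))

assert phi((1.0, 0.0), (5.0, 5.0), connected=False, sigma=1.0) == 1.0
assert phi((1.0, 0.0), (1.0, 0.0), connected=True, sigma=1.0) == 1.0
assert phi((1.0, 0.0), (2.0, 0.0), connected=True, sigma=1.0) < 1.0
```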

73 The (continuous) graphical model of the flow vectors is therefore defined as $\Pr(V|S; B) = \frac{1}{Z_V} \prod_{i=1}^{N} \varphi(v_i; b_i) \prod_{t_i=0}^{1} \phi(V(i, t_i), V(S(i, t_i)))$, (8) where $Z_V$ is a normalization constant. [sent-181, score-0.715]

74 3 Inference Having defined the graphical model to favor the desired motion and grouping interpretations, we need to find the state parameters that best explain the image observations. [sent-184, score-0.546]

75 We first infer the boundary grouping B, and then infer V based on B. [sent-187, score-0.471]

76 The proposal density of each switch variable is set to be $q(S(i, t_i) = (j, t_j)) \propto \frac{1}{Z_q} \lambda(S(i, t_i) = (j, t_j))\, \lambda(S(j, t_j) = (i, t_i))$, (10) where $\lambda(\cdot)$ has been normalized to sum to 1 for each end. [sent-192, score-1.047]

77 To sample the proposal density, we first randomly select a boundary fragment, and connect to other fragments based on q(S(i, t i )) to form a contour (a chain of boundary fragments). [sent-194, score-1.276]

78 This procedure is repeated until no fragment is left. [sent-196, score-0.395]
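
The sampling loop can be sketched as follows; for brevity we index connections by fragment label only (the paper works with fragment ends), and the affinities `q` are assumed precomputed and normalized:

```python
# Grow one contour from a starting fragment by repeatedly sampling the next
# connection from q; a self-connection (or exhausting the candidates)
# terminates the contour.
import random

def sample_contour(start, q, used, rng):
    chain, cur = [start], start
    used.add(start)
    while True:
        cand = {j: p for j, p in q.get(cur, {}).items()
                if j == cur or j not in used}
        if not cand:
            break
        r = rng.random() * sum(cand.values())
        acc, nxt = 0.0, cur
        for j, p in cand.items():
            acc += p
            if r <= acc:
                nxt = j
                break
        if nxt == cur:              # self-connection: contour ends here
            break
        chain.append(nxt)
        used.add(nxt)
        cur = nxt
    return chain

q = {0: {1: 1.0}, 1: {1: 1.0}}      # 0 must link to 1; 1 must self-connect
chain = sample_contour(0, q, set(), random.Random(0))
assert chain == [0, 1]
```

Repeating `sample_contour` with unused fragments as new starts, until `used` covers every fragment, yields one proposed grouping; importance weights then correct for the mismatch between this proposal and the target model.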

79 5 Experimental Results Figure 6 shows the boundary extraction, grouping, and motion estimation results of our system for both real and synthetic examples 1. [sent-203, score-0.697]

80 The algorithm is implemented in MATLAB, and the running time varies from ten seconds to a few minutes, depending on the number of the boundary fragments found in the image. [sent-205, score-0.627]

81 The two-bar example in Figure 1(a) yields fourteen detected boundary fragments in Figure 6(a) and two contours in (b). [sent-206, score-0.904]

82 The fragments belonging to the same contour are plotted in the same color and the illusory boundaries are synthesized as shown in (c). [sent-208, score-0.735]

83 Twelve fragments are detected in (a) and five contours are grouped in (b). [sent-212, score-0.62]

84 The estimated motion and generated illusory boundary also match the ground truth and human perception. [sent-213, score-0.847]

85 In this stimulus the right leg moves downwards, but there is a weak occluding boundary at the intersection of the legs. [sent-216, score-0.474]

86 Boundary fragments are extracted in (a) and five contours are extracted in (b). [sent-222, score-0.927]

87 The hallucinated illusory boundary in (c) and (d) correctly connects the occluded boundary of the right leg and the invisible boundary of the left leg. [sent-224, score-1.313]

88 The final row shows challenging images of a rotating chair (Figure 5 (c) and (d)), also showing proper contour completion and motion analysis. [sent-225, score-0.695]

89 Thirty-seven boundary fragments are extracted and seven contours are grouped. [sent-226, score-0.896]

90 Exploiting motion as well as static information, our system is able to complete the contours properly. [sent-228, score-0.653]

91 Note that traditional motion analysis algorithms fail at estimating motion for these examples (see supplementary videos) and would thus also fail at correctly grouping the objects based on the motion cues. [sent-229, score-1.187]

92 6 Conclusion We propose a novel boundary-based representation to estimate motion under the challenging visual conditions of moving textureless objects. [sent-230, score-0.432]

93 Ambiguous local motion measurements are resolved through a graphical model relating edgelets, boundary fragments, completed contours, and their motions. [sent-231, score-0.77]

94 The motion cues help the contour completion task, allowing completion of contours that would be difficult or impossible using only low-level information in a static image. [sent-233, score-0.974]

95 A motion analysis algorithm such as this one that correctly handles featureless contour motions is an essential element in a visual system’s toolbox of motion analysis methods. [sent-234, score-1.078]

96 The geometry of the occluding contour and its effect on motion interpretation. [sent-311, score-0.664]

97 Column (a): Boundary fragments are extracted using our boundary tracker. [sent-321, score-0.658]

98 The red dots are the edgelets and the green ones are the boundary fragment ends. [sent-322, score-0.9]

99 Column (b): Boundary fragments are grouped into contours and the flow vectors are estimated. [sent-323, score-0.581]

100 The gaps between the fragments belonging to the same contour are linked exploiting both static and motion cues in Eq. [sent-326, score-0.925]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('fragment', 0.395), ('motion', 0.342), ('boundary', 0.334), ('fragments', 0.293), ('contours', 0.238), ('contour', 0.213), ('ti', 0.207), ('edgelet', 0.187), ('edgelets', 0.171), ('illusory', 0.149), ('grouping', 0.137), ('switch', 0.126), ('spurious', 0.118), ('occluding', 0.109), ('ik', 0.108), ('tj', 0.1), ('saliency', 0.097), ('ends', 0.096), ('motions', 0.095), ('ow', 0.094), ('orientation', 0.091), ('boundaries', 0.08), ('vik', 0.078), ('uncertainties', 0.074), ('chains', 0.066), ('pr', 0.065), ('dancer', 0.062), ('downwards', 0.062), ('featureless', 0.062), ('tracking', 0.061), ('connect', 0.058), ('reversible', 0.057), ('connection', 0.056), ('energy', 0.056), ('frame', 0.054), ('occlusion', 0.054), ('chair', 0.054), ('static', 0.052), ('completion', 0.052), ('grouped', 0.05), ('textureless', 0.049), ('end', 0.049), ('bi', 0.048), ('local', 0.048), ('reversibility', 0.047), ('graphical', 0.046), ('properly', 0.045), ('chain', 0.044), ('corners', 0.044), ('moving', 0.041), ('bar', 0.041), ('junctions', 0.041), ('steerable', 0.041), ('detected', 0.039), ('rotating', 0.034), ('groupings', 0.034), ('kl', 0.033), ('extracted', 0.031), ('bidirectional', 0.031), ('bimm', 0.031), ('disambiguated', 0.031), ('fowlkes', 0.031), ('zv', 0.031), ('leg', 0.031), ('exclusive', 0.031), ('connects', 0.028), ('pool', 0.028), ('structural', 0.027), ('dmin', 0.027), ('eik', 0.027), ('hallucinated', 0.027), ('mcdermott', 0.027), ('zs', 0.027), ('extraction', 0.025), ('cues', 0.025), ('curvature', 0.025), ('switches', 0.025), ('canny', 0.025), ('dmax', 0.025), ('gmrf', 0.025), ('pik', 0.025), ('vision', 0.025), ('located', 0.025), ('vi', 0.025), ('detection', 0.024), ('primary', 0.024), ('analyze', 0.024), ('correctly', 0.024), ('salient', 0.023), ('ds', 0.023), ('brightness', 0.023), ('disambiguate', 0.023), ('match', 0.022), ('occluded', 0.022), ('detect', 0.022), ('locally', 0.021), ('system', 0.021), ('enforce', 0.021), ('image', 0.021), ('ni', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 31 nips-2006-Analysis of Contour Motions

Author: Ce Liu, William T. Freeman, Edward H. Adelson

Abstract: A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.

2 0.26599023 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

Author: Lorenzo Torresani, Peggy Hackney, Christoph Bregler

Abstract: This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and it can extrapolate styles not originally included in the training data. 1

3 0.15668929 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

Author: Graham W. Taylor, Geoffrey E. Hinton, Sam T. Roweis

Abstract: We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued “visible” variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/

4 0.096528798 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system. 1

5 0.091473699 151 nips-2006-On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

Author: Hariharan Narayanan, Mikhail Belkin, Partha Niyogi

Abstract: One of the intuitions underlying many graph-based methods for clustering and semi-supervised learning, is that class or cluster boundaries pass through areas of low probability density. In this paper we provide some formal analysis of that notion for a probability distribution. We introduce a notion of weighted boundary volume, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution. We show that sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary. keywords: Clustering, Semi-Supervised Learning 1

6 0.089044571 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

7 0.07991007 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

8 0.078343108 86 nips-2006-Graph-Based Visual Saliency

9 0.07009656 175 nips-2006-Simplifying Mixture Models through Function Approximation

10 0.062632546 153 nips-2006-Online Clustering of Moving Hyperplanes

11 0.061428867 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

12 0.05839505 45 nips-2006-Blind Motion Deblurring Using Image Statistics

13 0.056670956 62 nips-2006-Correcting Sample Selection Bias by Unlabeled Data

14 0.054951768 66 nips-2006-Detecting Humans via Their Pose

15 0.052357152 1 nips-2006-A Bayesian Approach to Diffusion Models of Decision-Making and Response Time

16 0.051637031 42 nips-2006-Bayesian Image Super-resolution, Continued

17 0.046901107 9 nips-2006-A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments

18 0.045750223 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

19 0.043136418 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

20 0.041102868 103 nips-2006-Kernels on Structured Objects Through Nested Histograms


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.151), (1, -0.0), (2, 0.08), (3, -0.05), (4, 0.054), (5, -0.107), (6, -0.03), (7, -0.09), (8, 0.02), (9, 0.16), (10, 0.036), (11, -0.097), (12, -0.098), (13, 0.127), (14, -0.144), (15, -0.022), (16, -0.273), (17, -0.065), (18, -0.182), (19, -0.044), (20, -0.009), (21, 0.027), (22, -0.188), (23, 0.138), (24, 0.046), (25, -0.014), (26, -0.028), (27, 0.057), (28, 0.055), (29, 0.02), (30, 0.02), (31, -0.004), (32, -0.002), (33, 0.041), (34, 0.123), (35, 0.173), (36, -0.047), (37, 0.009), (38, -0.132), (39, -0.037), (40, 0.069), (41, 0.153), (42, 0.094), (43, -0.083), (44, 0.001), (45, 0.045), (46, 0.024), (47, 0.058), (48, 0.004), (49, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98342437 31 nips-2006-Analysis of Contour Motions

Author: Ce Liu, William T. Freeman, Edward H. Adelson

Abstract: A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.

2 0.82677478 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

Author: Lorenzo Torresani, Peggy Hackney, Christoph Bregler

Abstract: This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and it can extrapolate styles not originally included in the training data. 1

3 0.58278209 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

Author: Graham W. Taylor, Geoffrey E. Hinton, Sam T. Roweis

Abstract: We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued “visible” variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/

4 0.38639885 153 nips-2006-Online Clustering of Moving Hyperplanes

Author: René Vidal

Abstract: We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene.

5 0.33455187 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

Author: Jennifer Listgarten, Radford M. Neal, Sam T. Roweis, Rachel Puckrin, Sean Cutler

Abstract: We present a hierarchical Bayesian model for sets of related, but different, classes of time series data. Our model performs alignment simultaneously across all classes, while detecting and characterizing class-specific differences. During inference the model produces, for each class, a distribution over a canonical representation of the class. These class-specific canonical representations are automatically aligned to one another — preserving common sub-structures, and highlighting differences. We apply our model to compare and contrast solenoid valve current data, and also, liquid-chromatography-ultraviolet-diode array data from a study of the plant Arabidopsis thaliana.

1 Aligning Time Series From Different Classes

Many practical problems over a wide range of domains require synthesizing information from several noisy examples of one or more categories in order to build a model which captures common structure and also learns the patterns of variability between categories. In time series analysis, these modeling goals manifest themselves in the tasks of alignment and difference detection. These tasks have diverse applicability, spanning speech & music processing, equipment & industrial plant diagnosis/monitoring, and analysis of biological time series such as microarray & liquid/gas chromatography-based laboratory data (including mass spectrometry and ultraviolet diode arrays). Although alignment and difference detection have been extensively studied as separate problems in the signal processing and statistical pattern recognition communities, to our knowledge, no existing model performs both tasks in a unified way. Single-class alignment algorithms attempt to align a set of time series all together, assuming that variability across different time series is attributable purely to noise.
In many real-world situations, however, we have time series from multiple classes (categories) and our prior belief is that there is both substantial shared structure between the class distributions and, simultaneously, systematic (although often rare) differences between them. While in some circumstances (if differences are small and infrequent), single-class alignment can be applied to multi-class data, it is much more desirable to have a model which performs true multi-class alignment in a principled way, allowing for more refined and accurate modeling of the data. In this paper, we introduce a novel hierarchical Bayesian model which simultaneously solves the multi-class alignment and difference detection tasks in a unified manner, as illustrated in Figure 1. The single-class alignment shown in this figure coerces the feature in region A for class 1 to be inappropriately collapsed in time, and the overall width of the main broad peak in class 2 to be inappropriately narrowed. In contrast, our multi-class model handles these features correctly. Furthermore, because our algorithm does inference for a fully probabilistic model, we are able to obtain quantitative measures of the posterior uncertainty in our results, which, unlike the point estimates produced by most current approaches, allow us to assess our relative confidence in differences learned by the model. Our basic setup for multi-class alignment assumes the class labels are known for each time series, as is the case for most difference detection problems. However, as we discuss at the end of the paper, our model can be extended to the completely unsupervised case.

Figure 1: Nine time series from the NASA valve solenoid current data set [4]. Four belong to a 'normal' class, and five to an 'abnormal' class.
On all figures, the horizontal axis is time, or latent time for figures of latent traces and observed time series aligned to latent traces. The vertical axis is current amplitude. Top left: The raw, unaligned data. Middle left: Average of the unaligned data within each class in thick line, with the thin lines showing one standard deviation on either side. Bottom left: Average of the aligned data (over MCMC samples) within each class, using the single-class alignment version of the model (no child traces), again with one standard deviation lines shown in the thinner style line. Right: Mean and one standard deviation over MCMC samples using the HB-CPM. Top right: Parent trace. Middle right: Class-specific energy impulses, with the topmost showing the class impulses for the less smooth class. Bottom right: Child traces superimposed. Note that if one generates more HB-CPM MCMC samples, the parent cycles between the two classes since the model has no preference for which class is seen as a modification of the other; the child classes remain stable, however.

2 A Hierarchical Bayesian Continuous Profile Model

Building on our previous Continuous Profile Model (CPM) [7], we propose a Hierarchical Bayesian Continuous Profile Model (HB-CPM) to address the problems of multi-class alignment and difference detection, together, for sets of sibling time series data — that is, replicate time series from several distinct, but related classes. The HB-CPM is a generative model that allows simultaneous alignment of time series and also provides aligned canonical representations of each class along with measures of uncertainty on these representations. Inference in the model can be used, for example, to detect and quantify similarities and differences in class composition.
The HB-CPM extends the basic CPM in two significant ways: i) it addresses the multi-class rather than the single-class alignment problem, and ii) it uses a fully Bayesian framework rather than a maximum likelihood approach, allowing us to estimate uncertainty in both the alignments and the canonical representations. Our model, depicted in Figure 2, assumes that each observed time series is generated as a noisy transformation of a single, class-specific latent trace. Each latent trace is an underlying, noiseless representation of the set of replicated, observable time series belonging to a single class. An observed time series is generated from this latent trace exactly as in the original CPM, by moving through a sequence of hidden states in a Markovian manner and emitting an observable value at each step, as with an HMM. Each hidden state corresponds to a 'latent time' in the latent trace. Thus different choices of hidden state sequences result in different nonlinear transformations of the underlying trace. The HB-CPM uses a separate latent trace for each class, which we call child traces. Crucially, each of these child traces is generated from a single parent trace (also unobserved), which captures the common structure among all of the classes. The joint prior distribution for the child traces in the HB-CPM model can be realized by first sampling a parent trace, and then, for each class, sampling a sparse 'difference vector' which dictates how and where each child trace should differ from the common parent.

Figure 2: Core elements of the HB-CPM, illustrated with two-class data (hidden and observed) drawn from the model's prior.

2.1 The Prior on Latent Traces

Let the vector x^k = (x^k_1, x^k_2, ..., x^k_N) represent the k-th observed scalar time series, and let w^k ∈ 1..C be the class label of this time series. Also, let z = (z_1, z_2, ..., z_M) be the parent trace, and z^c = (z^c_1, z^c_2, ..., z^c_M) be the child trace for the c-th class. During inference, posterior samples of z^c form a canonical representation of the observed time series in class c, and z contains their common sub-structure. Ideally, the length of the latent traces, M, would be very large relative to N so that any experimental data could be mapped precisely to the correct underlying trace point. Aside from the computational impracticalities this would pose, great care to avoid overfitting would have to be taken. Thus in practice we have used M = (2 + ε)N (double the resolution, plus some slack on each end) in our experiments, and found this to be sufficient with ε < 0.2. Because the resolution of the latent traces is higher than that of the observed time series, experimental time can be made to effectively speed up or slow down by advancing along the latent trace in larger or smaller jumps. As mentioned previously, the child traces in the HB-CPM inherit most of their structure from a common parent trace. The differences between child and parent are encoded in a difference vector for each class, d^c = (d^c_1, d^c_2, ..., d^c_M); normally, most elements of d^c are close to zero. Child traces are obtained by adding this difference vector to the parent trace: z^c = z + d^c. We model both the parent trace and class-specific difference vectors with what we call an energy impulse chain, which is an undirected Markov chain in which neighbouring nodes are encouraged to be similar (i.e., smooth), and where this smoothness is perturbed by a set of marginally independent energy impulse nodes, with one energy impulse node attached to each node in the chain. For the difference vector of the c-th class, the corresponding energy impulses are denoted r^c = (r^c_1, r^c_2, ..., r^c_M), and for the parent trace the energy impulses are denoted r = (r_1, r_2, ..., r_M).
Conditioned on the energy impulses, the probability of a difference vector is

p(d^c | r^c, α^c, ρ^c) = (1/Z_{r^c}) exp( −(1/2) [ Σ_{i=1}^{M−1} (d^c_i − d^c_{i+1})² / α^c + Σ_{i=1}^{M} (d^c_i − r^c_i)² / ρ^c ] ).   (1)

Here, Z_{r^c} is the normalizing constant for this probability density, α^c controls the smoothness of the chain, and ρ^c controls the influence of the energy impulses. Together, α^c and ρ^c also control the overall tightness of the distribution for d^c. Presently, we set all α^c = α, and similarly ρ^c = ρ — that is, these do not differ between classes. Similarly, the conditional probability of the parent trace is

p(z | r, α, ρ) = (1/Z_r) exp( −(1/2) [ Σ_{i=1}^{M−1} (z_i − z_{i+1})² / α + Σ_{i=1}^{M} (z_i − r_i)² / ρ ] ).   (2)

These probability densities are each multivariate Gaussian with tridiagonal precision matrices (corresponding to the Markov nature of the interactions). Each component of each energy impulse for the parent, r_j, is drawn independently from a single univariate Gaussian, N(r_j | μ_par, s_par), whose mean and variance are in turn drawn from a Gaussian and inverse-gamma, respectively. The class-specific difference vector impulses, however, are drawn from a mixture of two zero-mean Gaussians — one 'no difference' (inlier) Gaussian, and one 'class-difference' (outlier) Gaussian. The means are zero so as to encourage difference vectors to be near zero (and thus child traces to be similar to the parent trace). Letting δ^c_j denote the binary latent mixture component indicator variable for each r^c_j,

p(δ^c_j) = Multinomial(δ^c_j | m^c_in, m^c_out) = (m^c_in)^{δ^c_j} (m^c_out)^{1−δ^c_j}   (3)

p(r^c_j | δ^c_j) = N(r^c_j | 0, s²_in) if δ^c_j = 1, and N(r^c_j | 0, s²_out) if δ^c_j = 0.   (4)

Each Gaussian mixture variance has an inverse-gamma prior, which for the 'no difference' variance, s²_in, is set to have very low mean (and not be overly dispersed) so that 'no difference' regions truly have little difference from the parent class, while for the 'class-difference' variance, s²_out, the prior is set to have a larger mean, so as to model our belief that substantial class-specific differences do occasionally exist. The priors for α^c, ρ^c, α, ρ are each log-normal (inverse-gamma priors would not be conjugate in this model, so we use log-normals, which are easier to specify). Additionally, the mixing proportions, m^c_in, m^c_out, have a Dirichlet prior, which typically encodes our belief that the proportion that are 'class differences' is likely to be small.

2.2 The HMM Portion of the Model

Each observed x^k is modeled as being generated by an HMM conditioned on the appropriate child trace, z^{w^k}. The probability of an observed time series conditioned on a path of hidden time states, τ^k, and the child trace, is given by

p(x^k | z^{w^k}, τ^k) = Π_{i=1}^{N} N(x^k_i | z^{w^k}_{τ^k_i} u^k, ξ^k),

where ξ^k is the emission variance for time series k, and the scale factor, u^k, allows for constant, global, multiplicative rescaling. The HMM transition probabilities T^k(τ^k_{i−1} → τ^k_i) are multinomial within a limited range, with p^k(τ^k_i = a | τ^k_{i−1} = b) = κ^k_{(a−b)} for (a − b) ∈ [1, J_τ], and p^k(τ^k_i = a | τ^k_{i−1} = b) = 0 for (a − b) < 1 or (a − b) > J_τ, where J_τ is the maximum allowable number of consecutive time states that can be advanced in a single transition. (Of course, Σ_{i=1}^{J_τ} κ^k_i = 1.) This multinomial distribution, in turn, has a Dirichlet prior. The HMM emission variances, ξ^k, have an inverse-gamma prior. Additionally, the prior over the first hidden time state is a uniform distribution over a constant number of states, 1..Q, where Q defines how large a shift can exist between any two observed time series.
The prior over each global scaling parameter, u^k, is a log-normal with fixed variance and mean of zero, which encourages the scaling factors to remain near unity.

3 Posterior Inference of Alignments and Parameters by MCMC

Given a set of observed time series (and their associated class labels), the main computational operation to be performed in the HB-CPM is inference of the latent traces, alignment state paths and other model parameters. Exact inference is analytically intractable, but we are able to use Markov Chain Monte Carlo (MCMC) methods to create an iterative algorithm which, if run for sufficiently long, produces samples from the correct posterior distribution. This posterior provides simultaneous alignments of all observed time series in all classes, and also, crucially, aligned canonical representations of each class, along with error bars on these representations, allowing for a principled approach to difference detection in time series data from different classes. We may also wish to obtain a posterior estimate of some of our parameters conditioned on the data, and marginalized over the other parameters. In particular, we might be interested in obtaining the posterior over hidden time state vectors for each time series, τ^k, which together provide a simultaneous, multi-class alignment of our data. We may, in addition, or alternatively, be interested in the posterior of the child traces, z^c, which together characterize how the classes agree and disagree. The former may be more of interest for visualizing aligned observed time series, or in expanding out aligned scalar time series to a related vector time series, while the latter would be more of interest when looking to characterize differences in multi-class, scalar time series data.
We group our parameters into blocks, and sample these blocks conditioned on the values of the other parameters (as in Gibbs sampling) — however, when certain conditional distributions are not amenable to direct sampling, we use slice sampling [8]. The scalar conditional distributions for each of μ_par, s_par, m^c_in, m^c_out, δ^c_j, κ^k_i are known distributions, amenable to direct sampling. The conditional distributions for the scalars α^c, ρ^c, α, ρ and u^k are not tractable, and for each of these we use slice sampling (doubling out and shrinking). The conditional distribution for each of r and r^c is multivariate Gaussian, and we sample directly from each using a Cholesky decomposition of the covariance matrix:

p(r | z, α, ρ) = (1/Z) p(z | r, α, ρ) p(r) = N(r | c, C)   (5)

p(r^c | d^c, α^c, ρ^c) = (1/Z) p(d^c | r^c, α^c, ρ^c) p(r^c) = N(r^c | b, B),   (6)

where, using I to denote the identity matrix,

C = ( S/ρ² + I/s_par )^{−1},   c = C ( z/ρ + μ_par/s_par )   (7)

B = ( S†/(ρ^c)² + (v^c)^{−1} )^{−1},   b = B d^c/ρ^c.   (8)

The diagonal matrix v^c consists of mixture component variances (s²_in or s²_out). S^{−1} [or (S†)^{−1}] is the tridiagonal precision matrix of the multivariate normal distribution p(z | r, α, ρ) [or p(d^c | r^c, α^c, ρ^c)], and has entries S^{−1}_{j,j} = 2/α + 1/ρ for j = 2..(M − 1), S^{−1}_{j,j} = 1/α + 1/ρ for j = 1, M, and S^{−1}_{j,j+1} = S^{−1}_{j+1,j} = −1/α [or analogously for S†]. The computation of C and B can be made more efficient by using the Sherman-Morrison-Woodbury matrix inversion lemma. For example, B = (ρ^c)² [ (S†)^{−1} − (ρ^c)² (S†)^{−1} ( v^c + (ρ^c)² (S†)^{−1} )^{−1} (S†)^{−1} ], and we have (S†)^{−1} almost for free, and no longer need to invert S† to obtain it. The conditional distributions of each of z and z^c are also multivariate Gaussians. However, because of the underlying Markov dependencies, their precision matrices are tridiagonal, and hence we can use belief propagation, in the style of Kalman filtering, followed by a stochastic traceback to sample from them efficiently.
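The matrix-inversion-lemma step can be sanity-checked numerically. A small sketch (random stand-in matrices, not the paper's actual S† and v^c) of the generic identity being used, (A + D)^{−1} = A^{−1} − A^{−1}(D^{−1} + A^{−1})^{−1}A^{−1}, which lets the posterior covariance be formed from the cheap precision A^{−1} rather than by inverting A:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6
G = rng.standard_normal((M, M))
A = G @ G.T + M * np.eye(M)              # stand-in for S_dagger/(rho^c)^2 (SPD)
D = np.diag(rng.uniform(0.5, 2.0, M))    # stand-in for the diagonal (v^c)^{-1}

Ainv = np.linalg.inv(A)
direct = np.linalg.inv(A + D)            # naive O(M^3) route
woodbury = Ainv - Ainv @ np.linalg.inv(np.linalg.inv(D) + Ainv) @ Ainv
```

In the model, A^{−1} is the analytically known tridiagonal precision, so the only inverse left on the right-hand side involves a diagonal-plus-tridiagonal matrix.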
Thus each can be sampled in time proportional to M, rather than the M³ required for a general multivariate Gaussian. Lastly, to sample from the conditional distribution of the hidden time vectors for each sample, τ^k, we run belief propagation (analogous to the HMM forward-backward algorithm) followed by a stochastic traceback. In our experiments, the parent trace was initialized by averaging one smoothed example from each class. The child traces were initialized to the initial parent trace. The HMM states were initialized by a Viterbi decoding with respect to the initial values of the other parameters. The scaling factors were initialized to unity, and the child energy impulses to zero. MCMC was run for 5000 iterations, with convergence generally realized in less than 1000 iterations.

4 Experiments and Results

We demonstrate use of the HB-CPM on two data sets. The first data set is part of the NASA shuttle valve data [4], which measures valve solenoid current against time for some 'normal' runs and some 'abnormal' runs. Measurements were taken at a rate of 1 ms per sample, with 1000 samples per time series. We subsampled the data by a factor of 7 in time, since it was extremely dense. The results of performing posterior inference in our model on this two-class data set are shown in Figure 1. They nicely match our intuition of what makes a good solution. In our experiments, we also compared our model to a simple 'single-class' version of the HB-CPM in which we simply remove the child trace level of the model, letting all observed data in both classes depend directly on one single parent trace. The single-class alignment, while doing a reasonable job, does so by coercing the two classes to look more similar than they should. This is evident in one particular region labeled on the graph and discussed in the legend.
Essentially a single class alignment causes us to lose class-specific fine detail — the precise information we seek to retain for difference detection. The second data set is from a botany study which uses reverse-phase HPLC (high performance liquid chromatography) as a high-throughput screening method to identify genes involved in xenobiotic uptake and metabolism in the model plant Arabidopsis thaliana. Liquid-chromatography (LC) techniques are currently being developed and refined with the aim of providing a robust platform with which to detect differences in biological organisms — be they plants, animals or humans. Detected differences can reveal new fundamental biological insight, or can be applied in more clinical settings. LC-mass spectrometry technology has recently undergone explosive growth in tackling the problem of biomarker discovery — for example, detecting biological markers that can predict treatment outcome or severity of disease, thereby providing the potential for improved health care and better understanding of the mechanisms of drug and disease. In botany, LC-UV data is used to help understand the uptake and metabolism of compounds in plants by looking for differences across experimental conditions, and it is this type of data that we use here. LC separates mixtures of analytes on the basis of some chemical property — hydrophobicity, for reverse-phase LC, used to generate our data. Components of the analyte in our data set were detected as they came off the LC column with a Diode Array Detector (DAD), yielding UV-visible spectra collected at 540 time points (we used the 280 nm band, which is informative for these experiments). We performed background subtraction [2] and then subsampled this data by a factor of four. 
This is a three-class data set, where the first class is untreated plant extract, followed by two classes consisting of this same plant treated with compounds that were identified as possessing robust uptake in vivo, and, hence, when metabolized, provide a differential LC-UV signal of interest. Figure 3 gives an overview of the LC-UV results, while Figure 4 zooms in on a particular area of interest to highlight how subtle differences can be detected by the HB-CPM, but not by a single-class alignment scheme. As with the NASA data set, a single-class alignment coerces features across classes that are in fact different to look the same, thereby preventing us from detecting them. Recall that this data set consists of a 'no treatment' plant extract, and two 'treatments' of this same plant. Though our model was not informed of these special relationships, it nevertheless elegantly captures this structure by giving almost no energy impulses to the 'no treatment' class, meaning that this class is essentially the parent trace, and allowing the 'treatment' classes to diverge from it, thereby nicely matching the reality of the situation. All averaging over MCMC runs shown is over 4000 samples, after a 1000-sample burn-in period, which took around 3 hours for the NASA data, and 5 hours for the LC data set, on machines with dual 3 GHz Pentium 4 processors.

5 Related Work

While much work has been done on time series alignment, and on comparison/clustering of time series, none of this work, to our knowledge, directly addresses the problem presented in this paper — simultaneously aligning and comparing sets of related time series in order to characterize how they differ from one another. The classical algorithm for aligning time series is Dynamic Time Warping (DTW) [10]. DTW works on pairs of time series, aligning one time series to a specified reference time series, in a non-probabilistic way, without explicit allowance for differences in related time series.
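For reference, classical DTW [10] is a short dynamic program. An illustrative sketch (our own minimal version, not the original formulation's exact step pattern): align two series by the cheapest monotone path through the pairwise local-cost grid.

```python
import numpy as np

# Minimal Dynamic Time Warping: D[i, j] holds the cheapest cumulative cost of
# aligning x[:i] with y[:j]; each cell extends a match, insertion, or deletion.
def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1],  # match
                                 D[i - 1, j],      # x advances alone
                                 D[i, j - 1])      # y advances alone
    return D[n, m]

a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, locally stretched
```

Because b is just a time-warped copy of a, their DTW distance is zero even though the sequences differ pointwise, which is exactly the behavior (and the limitation, for difference detection) described above.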
More recently, Gaffney et al [5] jointly clustered and aligned time series data from different classes. However, their model does not attempt to put time series from different classes into correspondence with one another — only time series within a class are aligned to one another. Ziv Bar-Joseph et al [1] use a similar approach to cluster and align microarray time series data. Ramsay et al [9] have introduced a curve clustering model, in which a time warping function, h(t), for each time series is learned by way of learning its relative curvature, parameterized with order one B-spline coefficients. This model accounts for systematic changes in the range and domain of time series in a way that aligns curves with the same fundamental shape. However, their method does not allow for class-specific differences between shapes to be taken into account. The anomaly detection (AD) literature deals with related, yet distinct problems.

Figure 3: Seven time series from each of three classes of LC-UV data. On all figures, the horizontal axis is time, or latent time for figures of latent traces and observed time series aligned to latent traces. The vertical axis is log of UV absorbance. Top left: The raw, unaligned data. Middle left: Average of the unaligned data within each class in thick line, with the thin lines showing one standard deviation on either side. Bottom left: Average of the aligned data within each class, using the single-class alignment version of the model (no child traces), again with one standard deviation lines shown in the thinner style line. Right: Mean and one standard deviation over MCMC samples using the HB-CPM model. Top right: Parent trace. Middle right: Class-specific energy impulses, with the top-most showing the class impulses for the 'no treatment' class. Bottom right: Child traces superimposed. See Figure 4 for a zoom-in around the arrow.
For example, Chan et al [3] build a model of one class of time series data (they use the same NASA valve data as in this paper), and then match test data, possibly belonging to another class (e.g., 'abnormal' shuttle valve data), to this model to obtain an anomaly score. Emphasis in the AD community is on detecting abnormal events relative to a normal baseline, in an on-line manner, rather than comparing and contrasting two or more classes from a dataset containing examples of all classes. The problem of 'elastic curve matching' is addressed in [6], where a target time series that best matches a query series is found by mapping the problem of finding the best matching subsequence to the problem of finding the cheapest path in a DAG (directed acyclic graph).

6 Discussion and Conclusion

We have introduced a hierarchical Bayesian model to perform detection of rare differences between sets of related time series, a problem which arises across a wide range of domains. By training our model, we obtain the posterior distribution over a set of class-specific canonical representations of each class, which are aligned in a way that preserves their common sub-structures, yet retains and highlights important differences. This model can be extended in several interesting and useful ways. One small modification could be useful for the LC-UV data set presented in this paper, in which one of the classes was 'no treatment', while the other two were each a different 'treatment'. We might model the 'no treatment' as the parent trace, and each of the treatments as a child trace, so that the direct comparison of interest would be made more explicit.
Another direction would be to apply the HB-CPM in a completely unsupervised setting, where we learn not only the canonical class representations, but also obtain the posterior over the class labels by introducing a latent class indicator variable. Lastly, one could use a model with cyclical latent traces to model cyclic data such as electrocardiogram (ECG) and climate data. In such a model, an observed trace being generated by the model would be allowed to cycle back to the start of the latent trace, and the smoothness constraints on the trace would be extended to apply to the beginning and end of the traces, coercing these to be similar. Such a model would allow one to do anomaly detection in cyclic data, as well as segmentation.

Figure 4: Left: A zoom-in of the data displayed in Figure 3, from the region of time 100-150 (labeled in that figure in latent time, not observed time). Top left: mean and standard deviation of the unaligned data. Middle left: mean and standard deviation of the single-class alignment. Bottom left: mean and standard deviation of the child traces from the HB-CPM. A case in point of a difference that could be detected with the HB-CPM, and not in the raw or single-class aligned data, is the difference occurring at time point 127. Right: The mean and standard deviation of the child energy impulses, with dashed lines showing correspondences with the child traces in the bottom left panel.

Acknowledgments: Thanks to David Ross and Roland Memisevic for useful discussions, and Ben Marlin for his Matlab slice sampling code.

References

[1] Z. Bar-Joseph, G. Gerber, D. K. Gifford, T. Jaakkola, and I. Simon. A new approach to analyzing gene expression time series data. In RECOMB, pages 39–48, 2002.
[2] H. Boelens, R. Dijkstra, P. Eilers, F. Fitzpatrick, and J. Westerhuis.
New background correction method for liquid chromatography with diode array detection, infrared spectroscopic detection and Raman spectroscopic detection. Journal of Chromatography A, 1057:21–30, 2004.
[3] P. K. Chan and M. V. Mahoney. Modeling multiple time series for anomaly detection. In ICDM, 2005.
[4] B. Ferrell and S. Santuro. NASA shuttle valve data. http://www.cs.fit.edu/∼pkc/nasa/data/, 2005.
[5] S. J. Gaffney and P. Smyth. Joint probabilistic curve clustering and alignment. In Advances in Neural Information Processing Systems 17, 2005.
[6] L. Latecki, V. Megalooikonomou, Q. Wang, R. Lakaemper, C. Ratanamahatana, and E. Keogh. Elastic partial matching of time series, 2005.
[7] J. Listgarten, R. M. Neal, S. T. Roweis, and A. Emili. Multiple alignment of continuous time series. In Advances in Neural Information Processing Systems 17, 2005.
[8] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[9] J. Ramsay and X. Li. Curve registration. Journal of the Royal Statistical Society (B), 60, 1998.
[10] H. Sakoe and S. Chiba. Dynamic programming algorithm for spoken word recognition. Readings in Speech Recognition, pages 159–165, 1990.

6 0.3029992 147 nips-2006-Non-rigid point set registration: Coherent Point Drift

7 0.29178157 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

8 0.28189793 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

9 0.27269635 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

10 0.27097172 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes

11 0.2693325 151 nips-2006-On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

12 0.26257235 86 nips-2006-Graph-Based Visual Saliency

13 0.24597289 1 nips-2006-A Bayesian Approach to Diffusion Models of Decision-Making and Response Time

14 0.24455632 66 nips-2006-Detecting Humans via Their Pose

15 0.22276436 144 nips-2006-Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints

16 0.21789618 108 nips-2006-Large Scale Hidden Semi-Markov SVMs

17 0.21548776 192 nips-2006-Theory and Dynamics of Perceptual Bistability

18 0.20862646 45 nips-2006-Blind Motion Deblurring Using Image Statistics

19 0.20643173 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing

20 0.19054706 139 nips-2006-Multi-dynamic Bayesian Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.068), (3, 0.038), (7, 0.097), (8, 0.016), (9, 0.024), (12, 0.015), (20, 0.026), (22, 0.056), (44, 0.063), (57, 0.074), (65, 0.053), (69, 0.046), (71, 0.012), (88, 0.315), (90, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81818593 31 nips-2006-Analysis of Contour Motions

Author: Ce Liu, William T. Freeman, Edward H. Adelson

Abstract: A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.

2 0.48136365 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

Author: Sridevi Parise, Max Welling

Abstract: Scoring structures of undirected graphical models by means of evaluating the marginal likelihood is very hard. The main reason is the presence of the partition function which is intractable to evaluate, let alone integrate over. We propose to approximate the marginal likelihood by employing two levels of approximation: we assume normality of the posterior (the Laplace approximation) and approximate all remaining intractable quantities using belief propagation and the linear response approximation. This results in a fast procedure for model scoring. Empirically, we find that our procedure has about two orders of magnitude better accuracy than standard BIC methods for small datasets, but deteriorates when the size of the dataset grows.
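The first approximation level above is the Laplace approximation: fit a Gaussian at the posterior mode and integrate it analytically. A minimal one-dimensional sketch under a toy model where the answer is exact (the function and parameter names are illustrative assumptions, not the paper's procedure):

```python
import math

def laplace_log_evidence(log_joint, grad, hess, theta0, lr=0.1, steps=200):
    """Laplace approximation to log p(D):
    find the mode theta* of the log-joint by gradient ascent, then
    log p(D) ~= log p(D, theta*) + 0.5 * log(2*pi / |H|),
    where H is the negative Hessian of the log-joint at the mode."""
    theta = theta0
    for _ in range(steps):
        theta += lr * grad(theta)           # gradient ascent to the mode
    h = -hess(theta)                         # curvature at the mode (positive)
    return log_joint(theta) + 0.5 * math.log(2 * math.pi / h)

# Toy model: a Gaussian log-joint -0.5*(theta - 2)^2 + C has exact evidence
# log( exp(C) * sqrt(2*pi) ), so the Laplace answer should match it exactly.
C = -3.0
lj = lambda t: -0.5 * (t - 2.0) ** 2 + C
est = laplace_log_evidence(lj, lambda t: -(t - 2.0), lambda t: -1.0, theta0=0.0)
exact = C + 0.5 * math.log(2 * math.pi)
```

For a truly Gaussian posterior the approximation is exact, as here; the paper's contribution is handling the MRF case, where the log-joint itself contains an intractable partition function.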

3 0.47804034 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

Author: Christopher J. Burges, Robert Ragno, Quoc V. Le

Abstract: The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of simple, flexible algorithms, called LambdaRank, which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to any differentiable function class. We give necessary and sufficient conditions for the resulting implicit cost function to be convex, and we show that the general method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We also show that LambdaRank provides a method for significantly speeding up the training phase of that ranking algorithm. Although this paper is directed towards ranking, the proposed method can be extended to any non-smooth and multivariate cost functions.
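The non-smoothness the abstract describes is easy to see concretely with NDCG, a standard retrieval metric: it depends on the scores only through the induced ranking, so nudging a score without changing the order leaves the cost bit-for-bit identical (zero gradient), while crossing an order boundary jumps it. A small sketch (the metric is standard NDCG; the example scores are illustrative):

```python
import math

def ndcg(scores, relevance):
    """NDCG: DCG of the score-induced ranking over DCG of the ideal ranking.
    It is a function of the sort order only, so its derivative with respect
    to any individual score is zero almost everywhere."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    dcg = sum(relevance[i] / math.log2(rank + 2) for rank, i in enumerate(order))
    ideal = sorted(range(len(relevance)), key=lambda i: -relevance[i])
    idcg = sum(relevance[i] / math.log2(rank + 2) for rank, i in enumerate(ideal))
    return dcg / idcg

rel = [3, 1, 0]
base = ndcg([2.0, 1.0, 0.5], rel)      # scores already in ideal order
nudged = ndcg([2.1, 1.0, 0.5], rel)    # perturbed score, same order: identical
swapped = ndcg([0.5, 1.0, 2.0], rel)   # order reversed: metric drops
```

LambdaRank's move is to bypass this flat cost surface by specifying the gradients ("lambdas") directly, so that an implicit smooth cost drives the sort-order changes the metric cares about.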

4 0.47723925 80 nips-2006-Fundamental Limitations of Spectral Clustering

Author: Boaz Nadler, Meirav Galun

Abstract: Spectral clustering methods are common graph-based approaches to clustering of data. Spectral clustering algorithms typically start from local information encoded in a weighted graph on the data and cluster according to the global eigenvectors of the corresponding (normalized) similarity matrix. One contribution of this paper is to present fundamental limitations of this general local to global approach. We show that based only on local information, the normalized cut functional is not a suitable measure for the quality of clustering. Further, even with a suitable similarity measure, we show that the first few eigenvectors of such adjacency matrices cannot successfully cluster datasets that contain structures at different scales of size and density. Based on these findings, a second contribution of this paper is a novel diffusion based measure to evaluate the coherence of individual clusters. Our measure can be used in conjunction with any bottom-up graph-based clustering method, it is scale-free and can determine coherent clusters at all scales. We present both synthetic examples and real image segmentation problems where various spectral clustering algorithms fail. In contrast, using this coherence measure finds the expected clusters at all scales.
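The "local to global" recipe the abstract critiques can be stated in a few lines: build a weighted graph, form the normalized Laplacian, and split on its second eigenvector. A minimal two-cluster sketch on a graph where the recipe does work (the example weights are illustrative; the paper's point is that multi-scale data breaks this):

```python
import numpy as np

def spectral_bipartition(W):
    """Split a graph in two by the sign of the second eigenvector of the
    normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)     # threshold the Fiedler vector

# Two tight triangles joined by one weak edge: an easy case.
W = np.array([[0.00, 1.00, 1.00, 0.01, 0.00, 0.00],
              [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
              [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
              [0.01, 0.00, 0.00, 0.00, 1.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 1.00, 0.00]], dtype=float)
labels = spectral_bipartition(W)
```

On this well-separated example the Fiedler vector cleanly separates the two triangles; the paper's counterexamples are graphs mixing dense and sparse structures, where the leading eigenvectors no longer align with cluster indicators.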

5 0.47707629 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

Author: David B. Grimes, Daniel R. Rashid, Rajesh P. Rao

Abstract: Learning by imitation represents an important mechanism for rapid acquisition of new behaviors in humans and robots. A critical requirement for learning by imitation is the ability to handle uncertainty arising from the observation process as well as the imitator’s own dynamics and interactions with the environment. In this paper, we present a new probabilistic method for inferring imitative actions that takes into account both the observations of the teacher as well as the imitator’s dynamics. Our key contribution is a nonparametric learning method which generalizes to systems with very different dynamics. Rather than relying on a known forward model of the dynamics, our approach learns a nonparametric forward model via exploration. Leveraging advances in approximate inference in graphical models, we show how the learned forward model can be directly used to plan an imitating sequence. We provide experimental results for two systems: a biomechanical model of the human arm and a 25-degrees-of-freedom humanoid robot. We demonstrate that the proposed method can be used to learn appropriate motor inputs to the model arm which imitates the desired movements. A second set of results demonstrates dynamically stable full-body imitation of a human teacher by the humanoid robot.

6 0.47594544 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

7 0.47419912 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

8 0.47302207 34 nips-2006-Approximate Correspondences in High Dimensions

9 0.47039545 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

10 0.46977669 110 nips-2006-Learning Dense 3D Correspondence

11 0.46903795 33 nips-2006-Analysis of Representations for Domain Adaptation

12 0.46887821 97 nips-2006-Inducing Metric Violations in Human Similarity Judgements

13 0.46850348 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

14 0.46844652 121 nips-2006-Learning to be Bayesian without Supervision

15 0.46826881 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints

16 0.46751475 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization

17 0.46731395 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

18 0.46723884 195 nips-2006-Training Conditional Random Fields for Maximum Labelwise Accuracy

19 0.46664536 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

20 0.46655452 117 nips-2006-Learning on Graph with Laplacian Regularization