nips nips2010 nips2010-262 knowledge-graph by maker-knowledge-mining

262 nips-2010-Switched Latent Force Models for Movement Segmentation


Source: pdf

Author: Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Abstract: Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as a haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems, including models for human motion capture data and systems biology.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. [sent-4, score-0.282]

2 In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. [sent-5, score-0.293]

3 To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. [sent-6, score-0.688]

4 We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as a haptic input device. [sent-8, score-0.088]

5 Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems including models for human motion capture data and systems biology. [sent-9, score-0.131]

6 1 Introduction: Latent force models [1] are a new approach for modeling data that allows combining dimensionality reduction with systems of differential equations. [sent-10, score-0.159]

7 The assumption is that the R forcing functions drive the D observed functions through a set of differential equation models. [sent-12, score-0.088]

8 Each differential equation is driven by a weighted mix of latent forcing functions. [sent-13, score-0.191]

9 Sets of coupled differential equations arise in many physics and engineering problems, particularly when the temporal evolution of a system needs to be described. [sent-14, score-0.071]

10 A latent force model differs from classical approaches as it places a probabilistic process prior over the latent functions and hence can make statements about the uncertainty in the system. [sent-18, score-0.345]

11 A joint Gaussian process model over the latent forcing functions and the observed data functions can be recovered using a Gaussian process prior in conjunction with linear differential equations [1]. [sent-19, score-0.208]

12 The resulting latent force modeling framework allows the combination of knowledge of the system's dynamics with a data-driven model. [sent-20, score-0.232]

13 If a single Gaussian process prior is used to represent each latent function, then the models we consider are limited to smooth driving functions. [sent-22, score-0.131]

14 However, discontinuities and segmented latent forces are omnipresent in real-world data. [sent-23, score-0.167]

15 For example, impact forces due to contacts in a mechanical dynamical system (when grasping an object or when the feet touch the ground) or a switch in an electrical circuit result in discontinuous latent forces. [sent-24, score-0.218]

16 In this paper, we extract a sequence of dynamical-systems motor primitives, modeled by second-order linear differential equations in conjunction with forcing functions (as in [1, 6]), from human movement, to be used as demonstrations of elementary movements for an anthropomorphic robot. [sent-27, score-0.223]

17 As human trajectories have large variability, both due to planned uncertainty in the human's movement policy and due to motor execution errors [7], a probabilistic model is needed to capture the underlying motor primitives. [sent-28, score-0.107]

18 A set of second-order differential equations is employed, as mechanical systems are of this type, and a temporal Gaussian process prior is used to allow probabilistic modeling [1]. [sent-29, score-0.082]

19 To be able to obtain a sequence of dynamical systems, we augment the latent force model to include discontinuities in the latent function and change dynamics. [sent-30, score-0.405]

20 We introduce discontinuities by switching between different Gaussian process models (superficially similar to a mixture of Gaussian processes; however, the switching times are modeled as parameters so that at any instant a single Gaussian process is driving the system). [sent-31, score-0.27]

21 Continuity of the observed functions is then ensured by constraining the relevant state variables (for example, in a second-order differential equation, velocity and displacement) to be continuous across the switching points. [sent-32, score-0.169]

22 2 Review of Latent force models (LFM): Latent force models [1] are hybrid models that combine mechanistic principles and Gaussian processes as a flexible way to introduce prior knowledge for data modeling. [sent-35, score-0.248]

23 A set of $D$ functions $\{y_d(t)\}_{d=1}^{D}$ is modeled as the set of output functions of a series of coupled differential equations, whose common input is a linear combination of $R$ latent functions, $\{u_r(t)\}_{r=1}^{R}$. [sent-36, score-0.182]

24 We assume the output $y_d(t)$ is described by $A_d \frac{d^2 y_d(t)}{dt^2} + C_d \frac{d y_d(t)}{dt} + \kappa_d y_d(t) = \sum_{r=1}^{R} S_{d,r} u_r(t)$, where, for a mass-spring-damper system, $A_d$ would represent the mass, $C_d$ the damper and $\kappa_d$ the spring constant associated with the output $d$. [sent-38, score-0.941]
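As a concrete illustration of this model equation (not from the paper), the following sketch numerically simulates one output of the mass-spring-damper ODE for a hand-picked latent force; all parameter values and the force itself are assumptions chosen for illustration:

```python
# Minimal sketch: simulate A*y'' + C*y' + kappa*y = S*u(t) for one output
# driven by one (assumed, hand-picked) latent force u(t).
import numpy as np
from scipy.integrate import solve_ivp

A, C, kappa, S = 1.0, 0.5, 4.0, 1.0   # assumed mass, damper, spring, sensitivity
u = lambda t: np.sin(2.0 * t)          # hypothetical latent force

def rhs(t, state):
    y, ydot = state
    return [ydot, (S * u(t) - C * ydot - kappa * y) / A]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 200))
y_out = sol.y[0]                       # the simulated output y_d(t)
```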

25 The sensitivities $S_{d,r}$ are used to represent the relative strength that the latent force $r$ exerts over the output $d$. [sent-40, score-0.259]

26 Note that models that learn a forcing function to drive a linear system have proven to be well-suited for imitation learning for robot systems [6]. [sent-42, score-0.103]

27 The solution of the second-order ODE follows $y_d(t) = y_d(0) c_d(t) + \dot{y}_d(0) e_d(t) + f_d(t, u)$, (1) where $y_d(0)$ and $\dot{y}_d(0)$ are the output and the velocity at time $t = 0$, respectively, known as the initial conditions (IC). [sent-43, score-1.549]

28 The angular frequency is given by $\omega_d = \sqrt{(4 A_d \kappa_d - C_d^2)/(4 A_d^2)}$ and the remaining variables are given by $c_d(t) = e^{-\alpha_d t}\left[\cos(\omega_d t) + \frac{\alpha_d}{\omega_d} \sin(\omega_d t)\right]$, $e_d(t) = \frac{e^{-\alpha_d t}}{\omega_d} \sin(\omega_d t)$, and $f_d(t, u) = \frac{S_d}{A_d \omega_d} \int_0^t G_d(t - \tau)\, u(\tau)\, d\tau = \frac{S_d}{A_d \omega_d} \int_0^t e^{-\alpha_d (t - \tau)} \sin[(t - \tau)\omega_d]\, u(\tau)\, d\tau$, with $\alpha_d = C_d/(2 A_d)$. [sent-44, score-0.097]
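A sketch of these closed-form quantities in code may help; the convolution integral for $f_d$ is discretized with a simple trapezoidal rule, and the parameter values are again assumptions (underdamped case, $4A\kappa > C^2$):

```python
# Sketch of c_d(t), e_d(t) and a discretized f_d(t, u) from the formulas above.
import numpy as np

A, C, kappa, S = 1.0, 0.5, 4.0, 1.0                       # assumed values
alpha = C / (2.0 * A)
omega = np.sqrt((4.0 * A * kappa - C**2) / (4.0 * A**2))  # angular frequency

def c_d(t):
    return np.exp(-alpha * t) * (np.cos(omega * t)
                                 + (alpha / omega) * np.sin(omega * t))

def e_d(t):
    return np.exp(-alpha * t) * np.sin(omega * t) / omega

def f_d(t, u_fn, n=500):
    tau = np.linspace(0.0, t, n)
    green = np.exp(-alpha * (t - tau)) * np.sin(omega * (t - tau))
    return (S / (A * omega)) * np.trapz(green * u_fn(tau), tau)

# equation (1): y_d(t) = y_d(0) c_d(t) + y'_d(0) e_d(t) + f_d(t, u)
y_t = 1.0 * c_d(2.0) + 0.0 * e_d(2.0) + f_d(2.0, lambda s: np.sin(2.0 * s))
```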

29 Note that $f_d(t, u)$ has an implicit dependence on the latent function $u(t)$. [sent-45, score-0.144]

30 The uncertainty in equation (1) is due to the fact that the latent force $u(t)$ and the initial conditions $y_d(0)$ and $\dot{y}_d(0)$ are not known. [sent-47, score-0.817]

31 We will assume that the latent function $u(t)$ is sampled from a zero-mean Gaussian process prior, $u(t) \sim \mathcal{GP}(0, k_{u,u}(t, t'))$, with covariance function $k_{u,u}(t, t')$. [sent-48, score-0.161]

32 So the covariance function $k_{f_d, f_{d'}}(t, t')$ depends on the covariance function of the latent force $u(t)$. [sent-56, score-0.38]

33 If we assume the latent function has a radial basis function (RBF) covariance, $k_{u,u}(t, t') = \exp[-(t - t')^2/\ell^2]$, then $k_{f_d, f_{d'}}(t, t')$ can be computed analytically [1] (see also supplementary material). [sent-57, score-0.181]
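For instance, the RBF covariance of the latent force, and a draw from its GP prior, could be sketched as follows (the length-scale value is an assumption; the analytic cross-covariances of [1] are not reproduced here):

```python
# Sketch: RBF covariance for the latent force u(t) and one GP prior sample.
import numpy as np

def k_rbf(t, tp, ell=1.0):                  # ell is an assumed length-scale
    return np.exp(-(t[:, None] - tp[None, :]) ** 2 / ell ** 2)

ts = np.linspace(0.0, 10.0, 100)
K = k_rbf(ts, ts) + 1e-8 * np.eye(len(ts))  # jitter for numerical stability
u_sample = np.random.multivariate_normal(np.zeros(len(ts)), K)
```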

34 The latent force model induces a joint Gaussian process model across all the outputs. [sent-58, score-0.233]

35 The parameters of the covariance function are given by the parameters of the differential equations and the length scale of the latent force. [sent-59, score-0.204]

36 In this paper we look to extend the framework to the case where there can be discontinuities in the latent functions. [sent-62, score-0.127]

37 We do this through switching between different Gaussian process models to drive the system. [sent-63, score-0.121]

38 3 Switching dynamical latent force models (SDLFM): We now consider switching the system between different latent forces. [sent-64, score-0.482]

39 This allows us to change the dynamical system and the driving force for each segment. [sent-65, score-0.207]

40 By constraining the displacement and velocity at each switching time to be the same, the output functions remain continuous. [sent-66, score-0.165]

41 1 Definition of the model: We assume that the input space is divided into a series of non-overlapping intervals $\{[t_{q-1}, t_q]\}_{q=1}^{Q}$. [sent-68, score-0.897]

42 During each interval, only one force $u_{q-1}(t)$ out of $Q$ forces is active, that is, there are $\{u_{q-1}(t)\}_{q=1}^{Q}$ forces. [sent-69, score-0.158]

43 The force $u_{q-1}(t)$ is activated after time $t_{q-1}$ (switched on) and deactivated (switched off) after time $t_q$. [sent-70, score-0.992]

44 A particular output $z_d(t)$ at a particular time instant $t$, in the interval $(t_{q-1}, t_q)$, is expressed as $z_d(t) = y_d^{q}(t - t_{q-1}) = c_d^{q}(t - t_{q-1})\, y_d^{q}(t_{q-1}) + e_d^{q}(t - t_{q-1})\, \dot{y}_d^{q}(t_{q-1}) + f_d^{q}(t - t_{q-1}, u_{q-1})$. [sent-72, score-1.592]

45 This equation is assumed to be valid for describing the output only inside the interval $(t_{q-1}, t_q)$. [sent-73, score-0.957]
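As a hedged sketch of this composition (not the paper's implementation), one can simulate each interval's ODE with its own latent force and pass position and velocity across each switching point as the next interval's initial conditions; the switching times, forces, and parameters below are all assumed:

```python
# Sketch: evaluate a switched output over Q intervals, carrying position and
# velocity across each switching point so the output stays continuous.
import numpy as np
from scipy.integrate import solve_ivp

A, C, kappa, S = 1.0, 0.5, 4.0, 1.0                     # assumed values
switch_times = [0.0, 5.0, 12.0, 20.0]                   # assumed t_0 .. t_Q
forces = [lambda t: np.sin(t),                          # assumed u_{q-1}(t)
          lambda t: 1.0,
          lambda t: -np.sin(2.0 * t)]

state = [0.0, 0.0]                                      # y(t_0), dy(t_0)
ts_all, zs_all = [], []
for q, u in enumerate(forces):
    t0, t1 = switch_times[q], switch_times[q + 1]
    rhs = lambda t, s, u=u: [s[1], (S * u(t) - C * s[1] - kappa * s[0]) / A]
    sol = solve_ivp(rhs, (t0, t1), state, t_eval=np.linspace(t0, t1, 100))
    ts_all.append(sol.t)
    zs_all.append(sol.y[0])
    state = [sol.y[0][-1], sol.y[1][-1]]                # continuity constraint
z = np.concatenate(zs_all)                              # continuous output z_d(t)
```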

46 Here we highlighted this idea by including the superscript $q$ in $y_d^{q}(t - t_{q-1})$ to represent the interval $q$ for which the equation holds, although later we will omit it to keep the notation uncluttered. [sent-74, score-0.328]

47 Note that for $Q = 1$ and $t_0 = 0$, we recover the original latent force model given in equation (1). [sent-75, score-0.232]

48 Given the parameters $\theta = \{\{A_d, C_d, \kappa_d, S_d\}_{d=1}^{D}, \{\ell_{q-1}\}_{q=1}^{Q}\}$, the uncertainty in the outputs is induced by the prior over the initial conditions $y_d^{q}(t_{q-1})$, $\dot{y}_d^{q}(t_{q-1})$ for all values of $t_{q-1}$ and the prior over the latent force $u_{q-1}(t)$ that is active during $(t_{q-1}, t_q)$. [sent-77, score-1.725]

49 We place independent Gaussian process priors over each of these latent forces $u_{q-1}(t)$, assuming independence between them. [sent-78, score-0.155]

50 For initial conditions $y_d^{q}(t_{q-1})$, $\dot{y}_d^{q}(t_{q-1})$, we could assume that they are either parameters to be estimated or random variables with uncertainty governed by independent Gaussian distributions with covariance matrices $\mathbf{K}_{IC}^{q}$ as described in the last section. [sent-79, score-0.653]

51 However, for the class of applications we will consider, namely mechanical systems, the outputs should be continuous across the switching points. [sent-80, score-0.134]

52 We therefore assume that the uncertainty about the initial conditions for the interval $q$, $y_d^{q}(t_{q-1})$, $\dot{y}_d^{q}(t_{q-1})$, is prescribed by the Gaussian process that describes the outputs $z_d(t)$ and velocities $\dot{z}_d(t)$ in the previous interval $q - 1$. [sent-81, score-0.937]

53 We also consider covariances between $z_d(t_{q-1})$, $\dot{z}_d(t_{q-1})$ and $z_{d'}(t_{q'-1})$, $\dot{z}_{d'}(t_{q'-1})$, that is, between positions and velocities for different values of $q$ and $d$. [sent-83, score-0.243]

54 Let us assume we have one output ($D = 1$) and three switching intervals ($Q = 3$) with switching points $t_0$, $t_1$ and $t_2$. [sent-85, score-0.256]

55 Figure 1 shows an example of the switching dynamical latent force model scenario. [sent-92, score-0.365]

56 To ensure the continuity of the outputs, the initial condition is forced to be equal to the output of the last interval evaluated at the switching point. [sent-93, score-0.197]

57 2 The covariance function: The derivation of the covariance function for the switching model is rather involved. [sent-95, score-0.189]

58 For continuous output signals, we must take into account constraints at each switching time. [sent-96, score-0.136]

59 This effort is worthwhile though, as the resulting model is very flexible and can take advantage of the switching dynamics to represent a range of signals. [sent-98, score-0.22] [sent-100, score-0.097]

60 [Figure 1 caption: switching dynamical latent force model with $Q = 3$. The initial conditions $y^{q}(t_{q-1})$ for each interval are matched to the value of the output in the last interval, evaluated at the switching point $t_{q-1}$, that is, $y^{q}(t_{q-1}) = y^{q-1}(t_{q-1} - t_{q-2})$.] [sent-99, score-0.197]

62 As a taster, Figure 2 shows samples from a covariance function of a switching dynamical latent force model with D = 1 and Q = 3. [sent-101, score-0.411]

63 Note that while the latent forces (a and c) are discrete, the outputs (b and d) are continuous and have matching gradients at the switching points. [sent-102, score-0.263]

64 The switching times turn out to be parameters of the covariance function. [sent-104, score-0.143]

65 In general, we need to compute the covariance $k_{z_d, z_{d'}}(t, t') = \mathrm{cov}[z_d(t), z_{d'}(t')]$ for $z_d(t)$ in time interval $(t_{q-1}, t_q)$ and $z_{d'}(t')$ in time interval $(t_{q'-1}, t_{q'})$. [sent-119, score-2.387]

66 By definition, this covariance follows $\mathrm{cov}[z_d(t), z_{d'}(t')] = \mathrm{cov}[y_d^{q}(t - t_{q-1}), y_{d'}^{q'}(t' - t_{q'-1})]$. [sent-120, score-1.665]
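Since the analytic covariance is involved, a hedged numerical check is possible: sample latent forces from their GP prior, push them through the dynamics, and take the empirical covariance of the simulated outputs. The sketch below does this for a single regime ($Q = 1$); the switched case would chain segments as in the earlier sketch. All settings are assumptions:

```python
# Sketch: Monte Carlo estimate of cov[z(t), z(t')] for one regime, by sampling
# latent forces from an RBF GP prior and simulating the output each time.
import numpy as np
from scipy.integrate import solve_ivp

A, C, kappa, S, ell = 1.0, 0.5, 4.0, 1.0, 1.0           # assumed values
ts = np.linspace(0.0, 10.0, 60)
K_u = np.exp(-(ts[:, None] - ts[None, :]) ** 2 / ell ** 2) + 1e-8 * np.eye(len(ts))
L = np.linalg.cholesky(K_u)

samples = []
for _ in range(200):
    u_vals = L @ np.random.randn(len(ts))               # one latent-force draw
    u = lambda t: np.interp(t, ts, u_vals)
    rhs = lambda t, s: [s[1], (S * u(t) - C * s[1] - kappa * s[0]) / A]
    sol = solve_ivp(rhs, (ts[0], ts[-1]), [0.0, 0.0], t_eval=ts)
    samples.append(sol.y[0])
cov_z = np.cov(np.array(samples), rowvar=False)         # empirical k_{z,z}(t, t')
```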

67 We assume independence between the latent forces $u_q(t)$ and independence between the initial conditions $\mathbf{y}_{IC}$ and the latent forces $u_q(t)$. [sent-121, score-0.431]

68 $k_{f_d, f_{d'}}^{q}(t, t') = \mathrm{cov}[f_d^{q}(t - t_{q-1})\, f_{d'}^{q}(t' - t_{q-1})]$. [sent-123, score-0.068]

69 In expression (3), $k_{z_d, z_{d'}}(t_{q-1}, t_{q-1}) = \mathrm{cov}[y_d^{q-1}(t_{q-1} - t_{q-2}), y_{d'}^{q-1}(t_{q-1} - t_{q-2})]$, and values for $k_{z_d, \dot{z}_{d'}}(t_{q-1}, t_{q-1})$, $k_{\dot{z}_d, z_{d'}}(t_{q-1}, t_{q-1})$ and $k_{\dot{z}_d, \dot{z}_{d'}}(t_{q-1}, t_{q-1})$ can be obtained by similar expressions. [sent-124, score-1.104]

70 The covariance $k_{f_d, f_{d'}}^{q}(t, t')$ follows a similar expression to the one for $k_{f_d, f_{d'}}(t, t')$ in equation (2), now depending on the covariance $k_{u_{q-1}, u_{q-1}}(t, t')$. [sent-125, score-0.24]

71 We will assume that the covariances for the latent forces follow the RBF form, with length-scale $\ell_q$. [sent-126, score-0.156]

72 When $q > q'$, we have to take into account the correlation between the initial conditions $y_d^{q}(t_{q-1})$, $\dot{y}_d^{q}(t_{q-1})$ and the latent force $u_{q'-1}(t')$. [sent-127, score-0.876]

73 This correlation appears because of the contribution of $u_{q'-1}(t')$ to the generation of the initial conditions, $y_d^{q}(t_{q-1})$, $\dot{y}_d^{q}(t_{q-1})$. [sent-128, score-0.642]

74 We will write $f_d^{q}(t - t_{q-1}, u_{q-1})$ as $f_d^{q}(t - t_{q-1})$ for notational simplicity. [sent-138, score-0.084]

75 The authors call this covariance the continuous conditionally independent covariance function. [sent-149, score-0.092]

76 In our switched latent force model, a more natural option is to use the initial conditions as the way to transition smoothly between different regimes. [sent-150, score-0.272]

77 4, the authors propose covariances that account for a sudden change in the input scale and a sudden change in the output scale. [sent-153, score-0.091]

78 This reference is less concerned about the particular type of change that is represented by the model: in our application scenario, the continuity of the covariance function between two regimes must be assured beforehand. [sent-157, score-0.07]

79 The covariance functions $k_{z_d, \dot{z}_{d'}}(t, t')$, $k_{\dot{z}_d, z_{d'}}(t, t')$ and $k_{\dot{z}_d, \dot{z}_{d'}}(t, t')$ are obtained by taking derivatives of $k_{z_d, z_{d'}}(t, t')$ with respect to $t$ and $t'$ [10]. [sent-162, score-0.866]

80 $\ldots, z_d(t_N)]^{\top}$; $\mathbf{K}_{z,z}$ is a $D \times D$ block-partitioned matrix with blocks $\mathbf{K}_{z_d, z_{d'}}$. [sent-171, score-0.108]

81 The entries in each of these blocks are evaluated using $k_{z_d, z_{d'}}(t, t')$. [sent-172, score-0.205]

82 Furthermore, $k_{z_d, z_{d'}}(t, t')$ is computed using expressions (3) and (4), according to the relative values of $q$ and $q'$. [sent-173, score-0.205]
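Once such a block covariance is available, prediction reduces to standard GP regression; a generic sketch, where k_fn is a placeholder standing in for the (S)DLFM covariance builder rather than the paper's actual code:

```python
# Generic GP prediction sketch; k_fn is a placeholder for a function that
# evaluates the model covariance between two sets of time points.
import numpy as np

def gp_predict(t_train, y_train, t_test, k_fn, noise_var=1e-4):
    K = k_fn(t_train, t_train) + noise_var * np.eye(len(t_train))
    K_star = k_fn(t_test, t_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_star @ alpha                    # posterior mean
    v = np.linalg.solve(L, K_star.T)
    cov = k_fn(t_test, t_test) - v.T @ v     # posterior covariance
    return mean, cov
```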

83 In the first experiment, we sample from a model with $D = 2$, $R = 1$ and $Q = 3$, with switching points $t_0 = -1$, $t_1 = 5$ and $t_2 = 12$. [sent-180, score-0.097]

84 Figure 4: Mean and two standard deviations for the predictions over the latent force and two of the three outputs in the test set. [sent-258, score-0.244]

85 For the second toy experiment, we assume $D = 3$, $Q = 2$ and switching points $t_0 = -2$ and $t_1 = 8$. [sent-269, score-0.141]

86 In figures 4(d), 4(e) and 4(f), we show the inferred latent force and the predictions made for two of the three outputs. [sent-278, score-0.22]

87 2 Segmentation of human movement data for robot imitation learning: In this section, we evaluate the feasibility of the model for motion segmentation with possible applications in the analysis of human movement data and imitation learning. [sent-280, score-0.176]

88 To do so, we had a human teacher take the robot by the hand and have him demonstrate striking movements in a cooperative game of table tennis with another human being as shown in Figure 3. [sent-281, score-0.131]

89 [Figure 5 panel labels: value of the log-likelihood; latent force; time; Try 2.] [sent-293, score-0.118]

90 Figure 5: Employing the switching dynamical LFM model on the human movement data collected as in Fig. 3. [sent-296, score-0.194]

91 The first row corresponds to the log-likelihood, latent force and one of four outputs for trial one. [sent-298, score-0.265]

92 angular velocities, and angular acceleration of the robot for two independent trials of the same table tennis exercise. [sent-301, score-0.076]

93 For each trial, we selected four output positions and trained several models for different values of $Q$, including the latent force model without switches ($Q = 1$). [sent-302, score-0.268]

94 Figure 5 shows the log-likelihood, the inferred latent force and one output for trial one (first row) and the corresponding quantities for trial two (second row). [sent-304, score-0.301]

95 As the movement has few gaps and the data has several output dimensions, it is hard even for a human being to detect the transitions between movements (unless it is visualized as in a movie). [sent-306, score-0.11]

96 As a result, we obtained not only a segmentation of the movement but also a generative model for table tennis striking movements. [sent-309, score-0.07]

97 7 Conclusion: We have introduced a new probabilistic model that develops the latent force modeling framework with switched Gaussian processes. [sent-310, score-0.243]

98 This allows for discontinuities in the latent space of forces. [sent-311, score-0.127]

99 We have shown the application of the model in toy examples and on a real world robot problem, in which we were interested in finding and representing striking movements. [sent-312, score-0.098]

100 Other applications of the switching latent force model that we envisage include modeling human motion capture data using the second-order ODE, and a first-order ODE for modeling complex circuits in biological networks. [sent-313, score-0.337]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tq', 0.874), ('yd', 0.284), ('kzd', 0.205), ('force', 0.118), ('zd', 0.108), ('latent', 0.102), ('switching', 0.097), ('cov', 0.069), ('kfd', 0.068), ('uq', 0.059), ('cq', 0.057), ('dynamical', 0.048), ('smse', 0.048), ('covariance', 0.046), ('cd', 0.044), ('toy', 0.044), ('fd', 0.042), ('differential', 0.041), ('forces', 0.04), ('robot', 0.039), ('eq', 0.039), ('output', 0.039), ('xd', 0.033), ('interval', 0.032), ('gd', 0.029), ('movement', 0.029), ('kic', 0.027), ('msll', 0.027), ('shef', 0.027), ('ad', 0.026), ('discontinuities', 0.025), ('motor', 0.024), ('outputs', 0.024), ('forcing', 0.024), ('switched', 0.023), ('intervals', 0.023), ('movements', 0.022), ('vd', 0.022), ('neil', 0.022), ('ode', 0.022), ('sd', 0.021), ('trial', 0.021), ('gaussian', 0.021), ('kmd', 0.02), ('lfm', 0.02), ('standarized', 0.02), ('human', 0.02), ('velocity', 0.019), ('yic', 0.018), ('sin', 0.017), ('mauricio', 0.016), ('driving', 0.016), ('alvarez', 0.015), ('tennis', 0.015), ('system', 0.015), ('striking', 0.015), ('equations', 0.015), ('try', 0.015), ('initial', 0.015), ('covariances', 0.014), ('conditions', 0.014), ('imitation', 0.014), ('continuity', 0.014), ('garnett', 0.014), ('manchester', 0.014), ('sensitivities', 0.014), ('sfe', 0.014), ('mechanical', 0.013), ('velocities', 0.013), ('process', 0.013), ('driven', 0.012), ('subindex', 0.012), ('barrett', 0.012), ('haptic', 0.012), ('wam', 0.012), ('mlss', 0.012), ('mq', 0.012), ('osborne', 0.012), ('processes', 0.012), ('equation', 0.012), ('segmentation', 0.011), ('angular', 0.011), ('luengo', 0.011), ('ur', 0.011), ('michalis', 0.011), ('supplementary', 0.011), ('drive', 0.011), ('gp', 0.01), ('uncertainty', 0.01), ('transcription', 0.01), ('employing', 0.01), ('change', 0.01), ('displacement', 0.01), ('roman', 0.01), ('subsections', 0.009), ('switches', 0.009), ('hq', 0.009), ('material', 0.009), ('instant', 0.009), ('sudden', 0.009)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 262 nips-2010-Switched Latent Force Models for Movement Segmentation

Author: Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Abstract: Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as a haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems, including models for human motion capture data and systems biology.

2 0.053193528 33 nips-2010-Approximate inference in continuous time Gaussian-Jump processes

Author: Manfred Opper, Andreas Ruttor, Guido Sanguinetti

Abstract: We present a novel approach to inference in conditionally Gaussian continuous time stochastic processes, where the latent process is a Markovian jump process. We first consider the case of jump-diffusion processes, where the drift of a linear stochastic differential equation can jump at arbitrary time points. We derive partial differential equations for exact inference and present a very efficient mean field approximation. By introducing a novel lower bound on the free energy, we then generalise our approach to Gaussian processes with arbitrary covariance, such as the non-Markovian RBF covariance. We present results on both simulated and real data, showing that the approach is very accurate in capturing latent dynamics and can be useful in a number of real data modelling tasks.

3 0.052130792 213 nips-2010-Predictive Subspace Learning for Multi-view Data: a Large Margin Approach

Author: Ning Chen, Jun Zhu, Eric P. Xing

Abstract: Learning from multi-view data is important in many applications, such as image classification and annotation. In this paper, we present a large-margin learning framework to discover a predictive latent subspace representation shared by multiple views. Our approach is based on an undirected latent space Markov network that fulfills a weak conditional independence assumption that multi-view observations and response variables are independent given a set of latent variables. We provide efficient inference and parameter estimation methods for the latent subspace model. Finally, we demonstrate the advantages of large-margin learning on real video and web image data for discovering predictive latent representations and improving the performance on image classification, annotation and retrieval.

4 0.051046975 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

5 0.049538396 89 nips-2010-Factorized Latent Spaces with Structured Sparsity

Author: Yangqing Jia, Mathieu Salzmann, Trevor Darrell

Abstract: Recent approaches to multi-view learning have shown that factorizing the information into parts that are shared across all views and parts that are private to each view could effectively account for the dependencies and independencies between the different input modalities. Unfortunately, these approaches involve minimizing non-convex objective functions. In this paper, we propose an approach to learning such factorized representations inspired by sparse coding techniques. In particular, we show that structured sparsity allows us to address the multiview learning problem by alternately solving two convex optimization problems. Furthermore, the resulting factorized latent spaces generalize over existing approaches in that they allow having latent dimensions shared between any subset of the views instead of between all the views only. We show that our approach outperforms state-of-the-art methods on the task of human pose estimation. 1

6 0.047161669 242 nips-2010-Slice sampling covariance hyperparameters of latent Gaussian models

7 0.046316549 198 nips-2010-Optimal Web-Scale Tiering as a Flow Problem

8 0.045841545 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

9 0.032931749 171 nips-2010-Movement extraction by detecting dynamics switches and repetitions

10 0.031864863 103 nips-2010-Generating more realistic images using gated MRF's

11 0.031685889 153 nips-2010-Learning invariant features using the Transformed Indian Buffet Process

12 0.030853737 284 nips-2010-Variational bounds for mixed-data factor analysis

13 0.02870244 113 nips-2010-Heavy-Tailed Process Priors for Selective Shrinkage

14 0.02842227 235 nips-2010-Self-Paced Learning for Latent Variable Models

15 0.025914766 29 nips-2010-An Approximate Inference Approach to Temporal Optimization in Optimal Control

16 0.023872172 148 nips-2010-Learning Networks of Stochastic Differential Equations

17 0.023766315 85 nips-2010-Exact learning curves for Gaussian process regression on large random graphs

18 0.022655532 167 nips-2010-Mixture of time-warped trajectory models for movement decoding

19 0.022579517 98 nips-2010-Functional form of motion priors in human motion perception

20 0.022093609 194 nips-2010-Online Learning for Latent Dirichlet Allocation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.058), (1, 0.011), (2, -0.034), (3, 0.018), (4, -0.03), (5, 0.014), (6, 0.008), (7, 0.03), (8, -0.019), (9, -0.007), (10, -0.006), (11, -0.003), (12, 0.013), (13, -0.009), (14, 0.001), (15, 0.038), (16, -0.018), (17, 0.044), (18, 0.064), (19, -0.006), (20, -0.031), (21, 0.061), (22, -0.043), (23, -0.022), (24, -0.027), (25, -0.006), (26, 0.013), (27, 0.029), (28, 0.03), (29, -0.017), (30, 0.132), (31, -0.039), (32, 0.072), (33, 0.024), (34, -0.017), (35, 0.008), (36, 0.02), (37, 0.003), (38, -0.038), (39, 0.006), (40, -0.01), (41, 0.072), (42, 0.058), (43, -0.024), (44, -0.076), (45, 0.002), (46, -0.028), (47, 0.081), (48, 0.013), (49, 0.006)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93279076 262 nips-2010-Switched Latent Force Models for Movement Segmentation

Author: Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Abstract: Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as a haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems, including models for human motion capture data and systems biology.

2 0.56790274 242 nips-2010-Slice sampling covariance hyperparameters of latent Gaussian models

Author: Iain Murray, Ryan P. Adams

Abstract: The Gaussian process (GP) is a popular way to specify dependencies between random variables in a probabilistic model. In the Bayesian framework the covariance structure can be specified using unknown hyperparameters. Integrating over these hyperparameters considers different possible explanations for the data when making predictions. This integration is often performed using Markov chain Monte Carlo (MCMC) sampling. However, with non-Gaussian observations standard hyperparameter sampling approaches require careful tuning and may converge slowly. In this paper we present a slice sampling approach that requires little tuning while mixing well in both strong- and weak-data regimes. 1

3 0.53134853 33 nips-2010-Approximate inference in continuous time Gaussian-Jump processes

Author: Manfred Opper, Andreas Ruttor, Guido Sanguinetti

Abstract: We present a novel approach to inference in conditionally Gaussian continuous time stochastic processes, where the latent process is a Markovian jump process. We first consider the case of jump-diffusion processes, where the drift of a linear stochastic differential equation can jump at arbitrary time points. We derive partial differential equations for exact inference and present a very efficient mean field approximation. By introducing a novel lower bound on the free energy, we then generalise our approach to Gaussian processes with arbitrary covariance, such as the non-Markovian RBF covariance. We present results on both simulated and real data, showing that the approach is very accurate in capturing latent dynamics and can be useful in a number of real data modelling tasks.

4 0.52978206 213 nips-2010-Predictive Subspace Learning for Multi-view Data: a Large Margin Approach

Author: Ning Chen, Jun Zhu, Eric P. Xing

Abstract: Learning from multi-view data is important in many applications, such as image classification and annotation. In this paper, we present a large-margin learning framework to discover a predictive latent subspace representation shared by multiple views. Our approach is based on an undirected latent space Markov network that fulfills a weak conditional independence assumption that multi-view observations and response variables are independent given a set of latent variables. We provide efficient inference and parameter estimation methods for the latent subspace model. Finally, we demonstrate the advantages of large-margin learning on real video and web image data for discovering predictive latent representations and improving the performance on image classification, annotation and retrieval.

5 0.47442192 284 nips-2010-Variational bounds for mixed-data factor analysis

Author: Mohammad E. Khan, Guillaume Bouchard, Kevin P. Murphy, Benjamin M. Marlin

Abstract: We propose a new variational EM algorithm for fitting factor analysis models with mixed continuous and categorical observations. The algorithm is based on a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the bound we propose is significantly faster than previous variational methods. We show that EM is significantly more robust in the presence of missing data compared to treating the latent factors as parameters, which is the approach used by exponential family PCA and other related matrix-factorization methods. A further benefit of the variational approach is that it can easily be extended to the case of mixtures of factor analyzers, as we show. We present results on synthetic and real data sets demonstrating several desirable properties of our proposed method. 1

6 0.46340346 89 nips-2010-Factorized Latent Spaces with Structured Sparsity

7 0.45667943 101 nips-2010-Gaussian sampling by local perturbations

8 0.42610538 171 nips-2010-Movement extraction by detecting dynamics switches and repetitions

9 0.40632284 103 nips-2010-Generating more realistic images using gated MRF's

10 0.38945878 113 nips-2010-Heavy-Tailed Process Priors for Selective Shrinkage

11 0.37520352 82 nips-2010-Evaluation of Rarity of Fingerprints in Forensics

12 0.37496436 54 nips-2010-Copula Processes

13 0.37146747 154 nips-2010-Learning sparse dynamic linear systems using stable spline kernels and exponential hyperpriors

14 0.33555806 40 nips-2010-Beyond Actions: Discriminative Models for Contextual Group Activities

15 0.33031175 70 nips-2010-Efficient Optimization for Discriminative Latent Class Models

16 0.32304183 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

17 0.31719291 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

18 0.31037703 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

19 0.31025529 85 nips-2010-Exact learning curves for Gaussian process regression on large random graphs

20 0.29987118 235 nips-2010-Self-Paced Learning for Latent Variable Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.349), (16, 0.022), (27, 0.041), (30, 0.029), (35, 0.014), (36, 0.014), (45, 0.175), (50, 0.06), (52, 0.019), (60, 0.01), (77, 0.034), (78, 0.018), (85, 0.012), (90, 0.041), (97, 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.96021688 259 nips-2010-Subgraph Detection Using Eigenvector L1 Norms

Author: Benjamin Miller, Nadya Bliss, Patrick J. Wolfe

Abstract: When working with network datasets, the theoretical framework of detection theory for Euclidean vector spaces no longer applies. Nevertheless, it is desirable to determine the detectability of small, anomalous graphs embedded into background networks with known statistical properties. Casting the problem of subgraph detection in a signal processing context, this article provides a framework and empirical results that elucidate a “detection theory” for graph-valued data. Its focus is the detection of anomalies in unweighted, undirected graphs through L1 properties of the eigenvectors of the graph’s so-called modularity matrix. This metric is observed to have relatively low variance for certain categories of randomly-generated graphs, and to reveal the presence of an anomalous subgraph with reasonable reliability when the anomaly is not well-correlated with stronger portions of the background graph. An analysis of subgraphs in real network datasets confirms the efficacy of this approach. 1

2 0.93010372 192 nips-2010-Online Classification with Specificity Constraints

Author: Andrey Bernstein, Shie Mannor, Nahum Shimkin

Abstract: We consider the online binary classification problem, where we are given m classifiers. At each stage, the classifiers map the input to the probability that the input belongs to the positive class. An online classification meta-algorithm is an algorithm that combines the outputs of the classifiers in order to attain a certain goal, without having prior knowledge on the form and statistics of the input, and without prior knowledge on the performance of the given classifiers. In this paper, we use sensitivity and specificity as the performance metrics of the meta-algorithm. In particular, our goal is to design an algorithm that satisfies the following two properties (asymptotically): (i) its average false positive rate (fp-rate) is under some given threshold; and (ii) its average true positive rate (tp-rate) is not worse than the tp-rate of the best convex combination of the m given classifiers that satisfies fprate constraint, in hindsight. We show that this problem is in fact a special case of the regret minimization problem with constraints, and therefore the above goal is not attainable. Hence, we pose a relaxed goal and propose a corresponding practical online learning meta-algorithm that attains it. In the case of two classifiers, we show that this algorithm takes a very simple form. To our best knowledge, this is the first algorithm that addresses the problem of the average tp-rate maximization under average fp-rate constraints in the online setting. 1

3 0.91335702 45 nips-2010-CUR from a Sparse Optimization Viewpoint

Author: Jacob Bien, Ya Xu, Michael W. Mahoney

Abstract: The CUR decomposition provides an approximation of a matrix X that has low reconstruction error and that is sparse in the sense that the resulting approximation lies in the span of only a few columns of X. In this regard, it appears to be similar to many sparse PCA methods. However, CUR takes a randomized algorithmic approach, whereas most sparse PCA methods are framed as convex optimization problems. In this paper, we try to understand CUR from a sparse optimization viewpoint. We show that CUR is implicitly optimizing a sparse regression objective and, furthermore, cannot be directly cast as a sparse PCA method. We also observe that the sparsity attained by CUR possesses an interesting structure, which leads us to formulate a sparse PCA method that achieves a CUR-like sparsity.

4 0.89468902 146 nips-2010-Learning Multiple Tasks using Manifold Regularization

Author: Arvind Agarwal, Samuel Gerber, Hal Daume

Abstract: We present a novel method for multitask learning (MTL) based on manifold regularization: assume that all task parameters lie on a manifold. This is the generalization of a common assumption made in the existing literature: task parameters share a common linear subspace. One proposed method uses the projection distance from the manifold to regularize the task parameters. The manifold structure and the task parameters are learned using an alternating optimization framework. When the manifold structure is fixed, our method decomposes across tasks which can be learnt independently. An approximation of the manifold regularization scheme is presented that preserves the convexity of the single task learning problem, and makes the proposed MTL framework efficient and easy to implement. We show the efficacy of our method on several datasets. 1

5 0.88042831 284 nips-2010-Variational bounds for mixed-data factor analysis

Author: Mohammad E. Khan, Guillaume Bouchard, Kevin P. Murphy, Benjamin M. Marlin

Abstract: We propose a new variational EM algorithm for fitting factor analysis models with mixed continuous and categorical observations. The algorithm is based on a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the bound we propose is significantly faster than previous variational methods. We show that EM is significantly more robust in the presence of missing data compared to treating the latent factors as parameters, which is the approach used by exponential family PCA and other related matrix-factorization methods. A further benefit of the variational approach is that it can easily be extended to the case of mixtures of factor analyzers, as we show. We present results on synthetic and real data sets demonstrating several desirable properties of our proposed method. 1

6 0.86342323 221 nips-2010-Random Projections for $k$-means Clustering

7 0.85866892 261 nips-2010-Supervised Clustering

same-paper 8 0.80150384 262 nips-2010-Switched Latent Force Models for Movement Segmentation

9 0.79647779 136 nips-2010-Large-Scale Matrix Factorization with Missing Data under Additional Constraints

10 0.72633016 89 nips-2010-Factorized Latent Spaces with Structured Sparsity

11 0.72321165 210 nips-2010-Practical Large-Scale Optimization for Max-norm Regularization

12 0.70398676 226 nips-2010-Repeated Games against Budgeted Adversaries

13 0.70080745 30 nips-2010-An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA

14 0.69735748 110 nips-2010-Guaranteed Rank Minimization via Singular Value Projection

15 0.6879136 117 nips-2010-Identifying graph-structured activation patterns in networks

16 0.68692225 246 nips-2010-Sparse Coding for Learning Interpretable Spatio-Temporal Primitives

17 0.68501925 195 nips-2010-Online Learning in The Manifold of Low-Rank Matrices

18 0.68344551 196 nips-2010-Online Markov Decision Processes under Bandit Feedback

19 0.68184912 166 nips-2010-Minimum Average Cost Clustering

20 0.68162262 182 nips-2010-New Adaptive Algorithms for Online Classification