nips nips2000 nips2000-80 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Vladimir Pavlovic, James M. Rehg, John MacCormick
Abstract: The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMM's for our problem domain.
Reference: text
sentIndex sentText sentNum sentScore
1 com Abstract The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. [sent-7, score-0.159]
2 Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. [sent-8, score-0.806]
3 We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. [sent-9, score-0.321]
4 Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. [sent-10, score-0.334]
5 In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. [sent-11, score-0.257]
6 1 Introduction The human figure exhibits complex and rich dynamic behavior. [sent-13, score-0.159]
7 Dynamics are essential to the classification of human motion (e.g. [sent-14, score-0.239]
8 gesture recognition) as well as to the synthesis of realistic figure motion for computer graphics. [sent-16, score-0.23]
9 In visual tracking applications, dynamics can provide a powerful cue in the presence of occlusions and measurement noise. [sent-17, score-0.153]
10 Although the use of kinematic models in figure motion analysis is now commonplace, dynamic models have received relatively little attention. [sent-18, score-0.406]
11 A stochastic dynamic model imposes additional structure on the state space by specifying a probability distribution over state trajectories. [sent-22, score-0.242]
12 We are interested in learning dynamic models from motion capture data, which provides a training corpus of observed state space trajectories. [sent-23, score-0.43]
13 More recently, switching linear dynamic system (SLDS) models have been studied in [5, 12]. [sent-25, score-0.523]
14 In SLDS models, the Markov process controls an underlying linear dynamic system, rather than a fixed Gaussian measurement model. [sent-26, score-0.133]
15 Offsetting this advantage is the fact that exact inference in SLDS is intractable. [sent-28, score-0.138]
16 Approximate inference algorithms are required, which in turn complicates SLDS learning. [sent-29, score-0.138]
17 In this paper we present a framework for SLDS learning and apply it to figure motion modeling. [sent-30, score-0.179]
18 We derive three different approximate inference schemes: Viterbi [13], variational, and GPB2 [1]. [sent-31, score-0.196]
19 We apply learned motion models to three tasks: classification, motion synthesis, and visual tracking. [sent-32, score-0.478]
20 The SLDS model class consistently outperforms standard HMMs even on fairly simple motion sequences. [sent-40, score-0.179]
21 Our results suggest that SLDS models are a promising tool for figure motion analysis, and could play a key role in applications such as gesture recognition, visual surveillance, and computer animation. [sent-41, score-0.27]
22 In addition, this paper provides a summary of approximate inference techniques which is lacking in the previous literature on SLDS. [sent-42, score-0.174]
23 Furthermore, our variational inference algorithm is novel, and it provides another example of the benefit of interpreting classical statistical models as (mixed-state) graphical models. [sent-43, score-0.306]
24 2 Switching Linear Dynamic System Model A switching linear dynamic system (SLDS) model describes the dynamics of a complex, nonlinear physical process by switching among a set of linear dynamic models over time. [sent-44, score-0.983]
25 The system can be described using the following set of state-space equations: x_{t+1} = A(s_{t+1}) x_t + v_{t+1}(s_{t+1}), y_t = C x_t + w_t, Pr(s_{t+1} = i | s_t = j) = Π(i, j), for the plant and the switching model. [sent-45, score-0.384]
26 The meaning of the variables is as follows: x_t ∈ R^N denotes the hidden state of the LDS, and v_t is the state noise process. [sent-46, score-0.168]
27 The switching model is a discrete first order Markov process with state variables St from a set of S states. [sent-53, score-0.444]
28 The switching model is defined with the state transition matrix Π and an initial state distribution π_0. [sent-54, score-0.519]
29 The LDS and switching process are coupled due to the dependence of the LDS parameters A and Q on the switching state s_t: A(s_t = i) = A_i, Q(s_t = i) = Q_i. [sent-55, score-0.806]
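The generative model defined by these equations can be sampled directly. The sketch below assumes a hypothetical two-regime, two-dimensional SLDS; every parameter value here is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-regime SLDS. Pi[i, j] = Pr(s_{t+1} = i | s_t = j).
S, N = 2, 2                                   # switching states, LDS state dim
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # regime 0 plant dynamics
     np.array([[0.9, 0.0], [0.0, 0.9]])]      # regime 1 plant dynamics
Q = [0.01 * np.eye(N) for _ in range(S)]      # state noise covariance per regime
C = np.eye(N)                                 # measurement matrix
R = 0.05 * np.eye(N)                          # measurement noise covariance
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])                 # columns sum to one
pi0 = np.array([1.0, 0.0])                    # initial switching distribution

def sample_slds(T):
    """Draw (s, x, y) trajectories from the generative SLDS model."""
    s = np.empty(T, dtype=int)
    x = np.empty((T, N))
    y = np.empty((T, N))
    s[0] = rng.choice(S, p=pi0)
    x[0] = rng.multivariate_normal(np.zeros(N), Q[s[0]])
    y[0] = rng.multivariate_normal(C @ x[0], R)
    for t in range(1, T):
        s[t] = rng.choice(S, p=Pi[:, s[t - 1]])               # switch regime
        x[t] = rng.multivariate_normal(A[s[t]] @ x[t - 1], Q[s[t]])
        y[t] = rng.multivariate_normal(C @ x[t], R)
    return s, x, y

s, x, y = sample_slds(100)
```

Note how the Markov chain selects which A_i and Q_i drive the continuous state at each step, which is exactly the coupling described above.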
30 Pr(Y_T, X_T, S_T) = Pr(s_0) Pr(x_0 | s_0) ∏_{t=1}^{T-1} Pr(s_t | s_{t-1}) Pr(x_t | x_{t-1}, s_t) ∏_{t=0}^{T-1} Pr(y_t | x_t), (1) where Y_T, X_T, and S_T denote the sequences (of length T) of observations and hidden state variables. [sent-58, score-0.128]
31 From the Gauss-Markov assumption on the LDS and the Markov switching assumption, we can expand Equation 1 into the parameterized joint pdf of the SLDS of duration T. [sent-59, score-0.361]
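As a concrete check, the factorized joint of Equation 1 can be evaluated term by term for a given trajectory. A minimal sketch for a scalar-state SLDS; all parameter names and values (A, Q, C, R, Pi, pi0, P0) are illustrative assumptions:

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log density of a scalar Gaussian N(mean, var)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def slds_log_joint(s, x, y, A, Q, C, R, Pi, pi0, P0=1.0):
    """Log of the factorized joint Pr(Y_T, X_T, S_T) for a scalar SLDS."""
    lp = np.log(pi0[s[0]]) + log_gauss(x[0], 0.0, P0)        # Pr(s_0) Pr(x_0|s_0)
    for t in range(1, len(y)):
        lp += np.log(Pi[s[t]][s[t - 1]])                     # Pr(s_t|s_{t-1})
        lp += log_gauss(x[t], A[s[t]] * x[t - 1], Q[s[t]])   # Pr(x_t|x_{t-1},s_t)
    for t in range(len(y)):
        lp += log_gauss(y[t], C * x[t], R)                   # Pr(y_t|x_t)
    return lp
```

Each loop corresponds to one product in the factorization: switching transitions, plant transitions, and measurements.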
32 Given the sufficient statistics from the inference phase, the parameter update equations in the maximization (M) step are easily obtained by maximizing the expected log of Equation 1 with respect to the LDS and MC parameters (see [13]). [sent-63, score-0.16]
33 3 Inference in SLDS The goal of inference in complex DBNs is to estimate the posterior P(XT, STIYT). [sent-64, score-0.165]
34 If there were no switching dynamics, the inference would be straightforward - we could infer X_T from Y_T using LDS inference. [sent-65, score-0.483]
35 However, the presence of switching dynamics makes exact inference exponentially hard, as the distribution of the system state at time t is a mixture of Gaussians. [sent-66, score-0.629]
36 1 Approximate Viterbi Inference The Viterbi approximation finds the most likely sequence of switching states S*_T for a given observation sequence Y_T. [sent-70, score-0.42]
37 It is well known how to apply Viterbi inference to discrete state hidden Markov models and continuous state Gauss-Markov models. [sent-72, score-0.38]
38 Here we review an algorithm for approximate Viterbi inference in SLDSs presented in [13]. [sent-73, score-0.174]
39 We have shown in [13] that one can use a recursive procedure to find the best switching sequence S*_T = argmax_{S_T} Pr(S_T | Y_T). [sent-74, score-0.394]
40 (2) The two scaling components are the likelihood associated with the transition j → i from t - 1 to t, and the probability of the discrete SLDS switching from j to i. [sent-76, score-0.396]
41 The Viterbi inference algorithm can now be written: Initialize LDS state estimates x_{0|-1,i} and Σ_{0|-1,i}; initialize J_{0,i}; for t = 1:T-1, for i = 1:S, for j = 1:S: predict and filter LDS state estimates x_{t|t,i,j} and Σ_{t|t,i,j}; find the j → i "transition probability" J_{t|t-1,i,j}. [sent-79, score-0.39]
42 Collapse the estimate of the best predecessor j*_{t-1,i} into state i; update sequence probabilities J_{t,i} and LDS state estimates x_{t|t,i} and Σ_{t|t,i}; end; end; find the "best" final switching state i*_{T-1} and backtrack the best switching sequence S*_T; do RTS smoothing for S = S*. [sent-81, score-1.039]
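The recursion above can be sketched in runnable form. The version below is restricted to a scalar LDS state for brevity; it keeps one filtered Gaussian per switching state, hard-assigned from the best predecessor j, as in the pseudo-code. All parameter names are our own, not the paper's:

```python
import numpy as np

def approx_viterbi_slds(y, A, Q, C, R, Pi, pi0):
    """Approximate Viterbi inference for a scalar-state SLDS (sketch).

    m, P hold filtered means/variances per switching state; J holds the
    accumulated negative log joint cost; back holds backpointers.
    """
    T, S = len(y), len(A)
    m = np.zeros((T, S))
    P = np.zeros((T, S))
    J = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)

    def kalman_step(m_prev, P_prev, i, yt):
        # Predict under regime i, then filter the observation yt.
        m_pred = A[i] * m_prev
        P_pred = A[i] * P_prev * A[i] + Q[i]
        Sv = C * P_pred * C + R                 # innovation variance
        e = yt - C * m_pred                     # innovation
        K = P_pred * C / Sv                     # Kalman gain
        nll = 0.5 * (np.log(2 * np.pi * Sv) + e * e / Sv)
        return m_pred + K * e, (1 - K * C) * P_pred, nll

    for i in range(S):                          # t = 0: filter from the prior
        m[0, i], P[0, i], nll = kalman_step(0.0, 1.0, i, y[0])
        J[0, i] = nll - np.log(pi0[i])
    for t in range(1, T):
        for i in range(S):
            costs = np.empty(S)
            cand = []
            for j in range(S):                  # predict-and-filter per j -> i
                mf, Pf, nll = kalman_step(m[t - 1, j], P[t - 1, j], i, y[t])
                costs[j] = J[t - 1, j] + nll - np.log(Pi[i, j])
                cand.append((mf, Pf))
            j_best = int(np.argmin(costs))      # hard assignment to best j
            back[t, i] = j_best
            J[t, i] = costs[j_best]
            m[t, i], P[t, i] = cand[j_best]
    s = np.empty(T, dtype=int)                  # backtrack best sequence
    s[-1] = int(np.argmin(J[-1]))
    for t in range(T - 1, 0, -1):
        s[t - 1] = back[t, s[t]]
    return s
```

On a sequence generated by one regime, the recovered switching sequence should stay in that regime throughout.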
43 2 Approximate Variational Inference A general structured variational inference technique for Bayesian networks is described in [8]. [sent-85, score-0.257]
44 In our case we define Q by decoupling the switching and LDS portions of SLDS as shown in Figure 1(b). [sent-87, score-0.345]
45 The original distribution is factorized into two independent distributions, a Hidden Markov Model (HMM) Q_S with variational parameters {q_0, ..., q_{T-1}} [sent-88, score-0.141]
46 and a time-varying LDS Q_X with variational parameters {x_0, A_0, ..., A_{T-1}}. [sent-91, score-0.141]
47 The optimal values of the variational parameters η are obtained by minimizing the KL divergence w.r.t. Q. [sent-98, score-0.141]
48 To obtain (q_0, ..., q_{T-1}) we use the inference in the HMM with output "probabilities" q_t. [sent-105, score-0.193]
49 Similarly, to obtain ⟨x_t⟩ = E[x_t | Y_T] we perform LDS inference in the decoupled time-varying LDS via RTS smoothing. [sent-106, score-0.178]
50 Equation 3 together with the inference solutions in the decoupled models form a set of fixed-point equations. [sent-107, score-0.227]
51 Solution of this fixed-point set is a tractable approximation to the intractable inference of the fully coupled SLDS. [sent-108, score-0.158]
52 The variational inference algorithm for fully coupled SLDSs can now be summarized as: error = ∞; initialize Pr(s_t); while (KL divergence > maxError): find q_t, A_t, x_0 from Pr(s_t) (Eq. 3); estimate Pr(s_t) from q_t using HMM inference; end. [sent-109, score-0.277]
53 LDS parameters A_t and Q_t define the best unimodal representation of the corresponding switching system and are, roughly, averages of the original parameters weighted by the best estimates of the switching states P(s_t). [sent-113, score-0.847]
54 Instead of picking the most likely previous switching state j, we collapse the S Gaussians (one for each possible value of j) down into a single Gaussian. [sent-120, score-0.446]
55 Together with filtering this results in the following GPB2 algorithm pseudo-code: Initialize LDS state estimates x_{0|-1,i} and Σ_{0|-1,i}; initialize Pr(s_0 = i) = π_0(i). [sent-125, score-0.139]
56 Unlike Viterbi, GPB2 provides soft estimates of switching states at each time t. [sent-128, score-0.396]
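The collapse step underlying GPB2 is plain moment matching of a Gaussian mixture: the merged Gaussian takes the mixture's mean and covariance. A sketch (the function name and array shapes are our own):

```python
import numpy as np

def collapse_gaussians(weights, means, covs):
    """Moment-match a mixture of Gaussians to a single Gaussian.

    This is the collapse used by GPB2: the S filtered Gaussians per
    switching state (one per predecessor j) are merged into one.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalize mixture weights
    means = np.asarray(means, dtype=float)        # shape (S, N)
    covs = np.asarray(covs, dtype=float)          # shape (S, N, N)
    m = w @ means                                 # mixture mean
    d = means - m                                 # component deviations
    # mixture covariance: weighted covs plus spread of the component means
    cov = np.einsum('j,jab->ab', w, covs) + np.einsum('j,ja,jb->ab', w, d, d)
    return m, cov
```

For two equally weighted unit-variance components at 0 and 2, the collapsed Gaussian has mean 1 and variance 2 (the within-component variance 1 plus the between-means spread 1).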
57 4 Previous Work SLDS models and their equivalents have been studied in statistics, time-series modeling, and target tracking since early 1970's. [sent-131, score-0.121]
58 Ghahramani [6] introduced a DBN-framework for learning and approximate inference in one class of SLDS models. [sent-133, score-0.174]
59 His underlying model differs from ours in assuming the presence of S independent, white noise-driven LDSs whose measurements are selected by the Markov switching process. [sent-134, score-0.345]
60 A switching framework for particle filters applied to dynamics learning is described in [2]. [sent-135, score-0.383]
61 5 Experimental Results The data set for our experiments is a corpus of 18 sequences of six individuals performing walking and jogging. [sent-138, score-0.129]
62 The first task we addressed was learning HMM and SLDS models for walking and running. [sent-146, score-0.156]
63 Each of the two motion types was modeled as a one-, two-, or four-state HMM and SLDS model and then combined into a single complex jog-walk model. [sent-147, score-0.255]
64 In addition, each SLDS motion model was assumed to be of either first or second order. [sent-148, score-0.179]
65 Hence, a total of three models (HMM, first order SLDS, and second order SLDS) were considered for each cardinality (one, two, or four) of the switching state. [sent-149, score-0.416]
66 Learned HMM models were used to initialize the switching state segmentations for the SLDS models. [sent-152, score-0.535]
67 The inference step in SLDS learning was accomplished using the three approximate methods outlined in Section 3: Viterbi, GPB2, and variational inference. [sent-154, score-0.157]
68 Results of SLDS learning using any of the three approximate inference methods did not produce significantly different models. [sent-155, score-0.196]
69 (a) One switching state, second order SLDS. [sent-161, score-0.345]
70 (b) Four switching states, second order SLDS. (c) KF, frame 7. (d) SLDS, frame 7. (e) SLDS, frame 20. [sent-162, score-0.468]
71 (f) Synthesized walking motion. Figure 2: (a)-(d) show an example of classification results on mixed walk-jog sequences using models of different order. [sent-163, score-0.362]
72 (e)-(g) compare constant velocity and SLDS trackers, and (h) shows motion synthesis. [sent-164, score-0.179]
73 locally optimal solution and all three inference schemes indeed converged to the same or similar posteriors. [sent-165, score-0.181]
74 We next addressed the classification of unknown motion sequences in order to test the relative performance of inference in HMM and SLDS. [sent-166, score-0.402]
75 Test sequences of walking and jogging motion were selected randomly and spliced together using B-spline smoothing. [sent-167, score-0.291]
76 Segmentation of the resulting sequences into "walk" and "jog" regimes was accomplished using Viterbi inference in the HMM model and approximate Viterbi, GPB2, and variational inference under the SLDS model. [sent-168, score-0.481]
77 Estimates of "best" switching states Pr(St) indicated which of the two models were considered to be the source of the corresponding motion segment. [sent-169, score-0.594]
78 Figure 2(a)-(b) shows results for two representative combinations of switching state and linear model orders. [sent-170, score-0.419]
79 Each motion type (jog and walk) is modeled using one switching state and a second order LDS. [sent-172, score-0.598]
80 Figure 2(b) shows the result when the switching state is increased to four. [sent-173, score-0.419]
81 The accuracy of classification increases with the number of switching states and the LDS model order. [sent-174, score-0.388]
82 More interesting, however, is that the HMM model consistently yields lower segmentation accuracy than all of the SLDS inference schemes. [sent-175, score-0.138]
83 This is not surprising since the HMM model does not impose continuity across time in the plant state space (x), which does indeed exist in natural figure motion (joint angles evolve continuously in time). [sent-176, score-0.274]
84 Quantitatively, the three SLDS inference schemes produce very similar results. [sent-177, score-0.181]
85 Our next experiment addressed the use of learned dynamic models in visual tracking. [sent-181, score-0.221]
86 A conventional extended Kalman filter using a constant velocity dynamic model performs poorly on simple walking motion, due to pixel noise and self-occlusions, and fails by frame 7 as shown in Figure 2(c). [sent-184, score-0.246]
87 We employ approximate Viterbi inference in SLDS as a multi-hypothesis predictor that initializes multiple local template searches in the image space. [sent-185, score-0.193]
88 From the S^2 multiple hypotheses x_{t|t-1,i,j} at each time step, we pick the best S hypotheses with the smallest switching cost, as determined by Equation 2. [sent-186, score-0.367]
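Selecting the best S of the S^2 hypotheses by switching cost amounts to a partial sort over the (i, j) cost matrix. A small sketch; the cost matrix J below is randomly generated purely for illustration:

```python
import numpy as np

# Hypothetical costs J[i, j] for the S^2 hypotheses x_{t|t-1,i,j}:
# the switching cost of arriving in state i from predecessor j.
S = 3
rng = np.random.default_rng(1)
J = rng.random((S, S))

# Keep the best S of the S^2 hypotheses: argpartition finds the flat
# indices of the S smallest costs, which we map back to (i, j) pairs.
flat = np.argpartition(J.ravel(), S - 1)[:S]
best = [np.unravel_index(k, J.shape) for k in np.sort(flat)]
```

Each surviving (i, j) pair then seeds one local template search in the image, as described above.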
89 The tracker is well-aligned at frame 7 and only starts to drift off by frame 20. [sent-188, score-0.149]
90 The final experiment simulated walking motion by sampling from a learned SLDS walking model. [sent-190, score-0.363]
91 The discrete states used to generate the motion are plotted at the bottom of the figure. [sent-192, score-0.225]
92 6 Conclusions Dynamic models for human motion can be learned within a Switching Linear Dynamic System (SLDS) framework. [sent-194, score-0.294]
93 We have derived three approximate inference algorithms for SLDS: Viterbi, GPB2, and variational. [sent-195, score-0.196]
94 We show synthesis of natural walking motion by sampling. [sent-199, score-0.287]
95 In future work we will build more complex motion models using a much larger motion capture dataset, which we are currently building. [sent-200, score-0.451]
96 We will also extend the SLDS tracker to more complex measurement models and complex discrete state processes (see [10] for a recent approach). [sent-201, score-0.308]
97 Koller, "Discovering the hidden structure of complex dynamic systems," in Proc. [sent-211, score-0.141]
98 Freeman, "Bayesian reconstruction of 3d human motion from single-camera video," in NIPS'99, 1999. [sent-227, score-0.217]
99 Murphy, "Learning switching kalman-filter models," TR 98-10, Compaq CRL. [sent-252, score-0.345]
100 Murphy, "A dynamic bayesian network approach to figure tracking using learned dynamic models," in Proc. [sent-261, score-0.247]
wordName wordTfidf (topN-words)
[('slds', 0.725), ('switching', 0.345), ('lds', 0.295), ('viterbi', 0.184), ('motion', 0.179), ('inference', 0.138), ('hmm', 0.122), ('variational', 0.119), ('st', 0.11), ('pr', 0.098), ('dynamic', 0.094), ('yt', 0.078), ('walking', 0.078), ('state', 0.074), ('xt', 0.068), ('tracker', 0.067), ('tracking', 0.055), ('qt', 0.055), ('models', 0.049), ('initialize', 0.049), ('lj', 0.048), ('frame', 0.041), ('compaq', 0.04), ('decoupled', 0.04), ('rehg', 0.04), ('rts', 0.04), ('measurement', 0.039), ('smoothing', 0.039), ('human', 0.038), ('dynamics', 0.038), ('approximate', 0.036), ('dbn', 0.035), ('pseudo', 0.035), ('kinematic', 0.035), ('sequences', 0.034), ('walk', 0.033), ('kalman', 0.033), ('bayesian', 0.031), ('estimates', 0.03), ('synthesis', 0.03), ('addressed', 0.029), ('learned', 0.028), ('end', 0.028), ('collapse', 0.027), ('forj', 0.027), ('jist', 0.027), ('pavlovic', 0.027), ('sldss', 0.027), ('stiyt', 0.027), ('tlt', 0.027), ('xtlt', 0.027), ('sequence', 0.027), ('complex', 0.027), ('transition', 0.026), ('discrete', 0.025), ('vt', 0.024), ('dbns', 0.023), ('fori', 0.023), ('econometrics', 0.023), ('xtl', 0.023), ('markov', 0.023), ('parameters', 0.022), ('best', 0.022), ('gaussians', 0.022), ('classification', 0.022), ('xo', 0.022), ('three', 0.022), ('states', 0.021), ('schemes', 0.021), ('ghahramani', 0.021), ('fort', 0.021), ('synthesized', 0.021), ('gesture', 0.021), ('plant', 0.021), ('sf', 0.021), ('visual', 0.021), ('coupled', 0.02), ('video', 0.02), ('hidden', 0.02), ('mc', 0.019), ('murphy', 0.019), ('template', 0.019), ('ti', 0.018), ('segmentations', 0.018), ('cvpr', 0.018), ('ot', 0.018), ('namely', 0.018), ('system', 0.018), ('generalized', 0.018), ('corpus', 0.017), ('koller', 0.017), ('filter', 0.017), ('studied', 0.017), ('capture', 0.017), ('con', 0.016), ('joint', 0.016), ('mixture', 0.016), ('conventional', 0.016), ('accomplished', 0.016), ('hmms', 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 80 nips-2000-Learning Switching Linear Models of Human Motion
Author: Vladimir Pavlovic, James M. Rehg, John MacCormick
Abstract: The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMM's for our problem domain.
2 0.15261358 82 nips-2000-Learning and Tracking Cyclic Human Motion
Author: Dirk Ormoneit, Hedvig Sidenbladh, Michael J. Black, Trevor Hastie
Abstract: We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into
3 0.134799 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
Author: Zoubin Ghahramani, Matthew J. Beal
Abstract: Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set. 1
4 0.10334257 125 nips-2000-Stability and Noise in Biochemical Switches
Author: William Bialek
Abstract: Many processes in biology, from the regulation of gene expression in bacteria to memory in the brain, involve switches constructed from networks of biochemical reactions. Crucial molecules are present in small numbers, raising questions about noise and stability. Analysis of noise in simple reaction schemes indicates that switches stable for years and switchable in milliseconds can be built from fewer than one hundred molecules. Prospects for direct tests of this prediction, as well as implications, are discussed. 1
5 0.10167533 83 nips-2000-Machine Learning for Video-Based Rendering
Author: Arno Schödl, Irfan A. Essa
Abstract: We present techniques for rendering and animation of realistic scenes by analyzing and training on short video sequences. This work extends the new paradigm for computer animation, video textures, which uses recorded video to generate novel animations by replaying the video samples in a new order. Here we concentrate on video sprites, which are a special type of video texture. In video sprites, instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. We present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path. To estimate visual smoothness, we train a linear classifier to estimate visual similarity between video samples. If the motion path is known in advance, we use beam search to find a good sample sequence. We can specify the motion interactively by precomputing the sequence cost function using Q-Iearning.
6 0.098526776 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
7 0.08481025 90 nips-2000-New Approaches Towards Robust and Adaptive Speech Recognition
8 0.065933987 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks
9 0.062270533 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
10 0.061091401 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
11 0.057471942 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
12 0.056188565 96 nips-2000-One Microphone Source Separation
13 0.054993201 137 nips-2000-The Unscented Particle Filter
14 0.054713801 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
15 0.051432863 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
16 0.050192211 122 nips-2000-Sparse Representation for Gaussian Process Models
17 0.047754664 138 nips-2000-The Use of Classifiers in Sequential Inference
18 0.046535209 14 nips-2000-A Variational Mean-Field Theory for Sigmoidal Belief Networks
19 0.046032149 103 nips-2000-Probabilistic Semantic Video Indexing
20 0.042730827 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
topicId topicWeight
[(0, 0.153), (1, -0.084), (2, 0.106), (3, 0.001), (4, 0.029), (5, -0.033), (6, 0.059), (7, 0.113), (8, -0.085), (9, -0.041), (10, 0.164), (11, 0.066), (12, 0.063), (13, 0.021), (14, -0.15), (15, -0.205), (16, 0.135), (17, 0.029), (18, 0.021), (19, -0.041), (20, 0.088), (21, 0.038), (22, 0.112), (23, -0.158), (24, -0.202), (25, -0.023), (26, -0.058), (27, -0.28), (28, -0.044), (29, -0.16), (30, -0.057), (31, 0.083), (32, -0.004), (33, 0.137), (34, -0.052), (35, 0.017), (36, 0.035), (37, -0.074), (38, 0.144), (39, 0.054), (40, 0.139), (41, -0.037), (42, 0.052), (43, 0.058), (44, 0.018), (45, -0.033), (46, -0.129), (47, -0.097), (48, 0.092), (49, 0.024)]
simIndex simValue paperId paperTitle
same-paper 1 0.95113403 80 nips-2000-Learning Switching Linear Models of Human Motion
Author: Vladimir Pavlovic, James M. Rehg, John MacCormick
Abstract: The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMM's for our problem domain.
2 0.59674829 82 nips-2000-Learning and Tracking Cyclic Human Motion
Author: Dirk Ormoneit, Hedvig Sidenbladh, Michael J. Black, Trevor Hastie
Abstract: We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into
3 0.50989735 83 nips-2000-Machine Learning for Video-Based Rendering
Author: Arno Schödl, Irfan A. Essa
Abstract: We present techniques for rendering and animation of realistic scenes by analyzing and training on short video sequences. This work extends the new paradigm for computer animation, video textures, which uses recorded video to generate novel animations by replaying the video samples in a new order. Here we concentrate on video sprites, which are a special type of video texture. In video sprites, instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. We present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path. To estimate visual smoothness, we train a linear classifier to estimate visual similarity between video samples. If the motion path is known in advance, we use beam search to find a good sample sequence. We can specify the motion interactively by precomputing the sequence cost function using Q-Iearning.
4 0.50938869 125 nips-2000-Stability and Noise in Biochemical Switches
Author: William Bialek
Abstract: Many processes in biology, from the regulation of gene expression in bacteria to memory in the brain, involve switches constructed from networks of biochemical reactions. Crucial molecules are present in small numbers, raising questions about noise and stability. Analysis of noise in simple reaction schemes indicates that switches stable for years and switchable in milliseconds can be built from fewer than one hundred molecules. Prospects for direct tests of this prediction, as well as implications, are discussed. 1
5 0.38253522 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
Author: Zoubin Ghahramani, Matthew J. Beal
Abstract: Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set. 1
6 0.33772382 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks
7 0.33658749 138 nips-2000-The Use of Classifiers in Sequential Inference
8 0.31099012 90 nips-2000-New Approaches Towards Robust and Adaptive Speech Recognition
9 0.27696577 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
10 0.25906491 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
11 0.245206 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
12 0.23900244 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice
13 0.22843441 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
14 0.22309195 14 nips-2000-A Variational Mean-Field Theory for Sigmoidal Belief Networks
15 0.21446593 137 nips-2000-The Unscented Particle Filter
16 0.20709974 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
17 0.17606623 96 nips-2000-One Microphone Source Separation
18 0.16712624 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
19 0.15605463 30 nips-2000-Bayesian Video Shot Segmentation
20 0.15213396 38 nips-2000-Data Clustering by Markovian Relaxation and the Information Bottleneck Method
topicId topicWeight
[(6, 0.268), (10, 0.023), (17, 0.09), (26, 0.026), (32, 0.027), (33, 0.023), (45, 0.01), (55, 0.04), (62, 0.072), (65, 0.038), (67, 0.034), (75, 0.016), (76, 0.022), (79, 0.023), (81, 0.065), (90, 0.023), (91, 0.011), (94, 0.055), (97, 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 0.7880246 80 nips-2000-Learning Switching Linear Models of Human Motion
Author: Vladimir Pavlovic, James M. Rehg, John MacCormick
Abstract: The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMM's for our problem domain.
2 0.5888651 2 nips-2000-A Comparison of Image Processing Techniques for Visual Speech Recognition Applications
Author: Michael S. Gray, Terrence J. Sejnowski, Javier R. Movellan
Abstract: We examine eight different techniques for developing visual representations in machine vision tasks. In particular we compare different versions of principal component and independent component analysis in combination with stepwise regression methods for variable selection. We found that local methods, based on the statistics of image patches, consistently outperformed global methods based on the statistics of entire images. This result is consistent with previous work on emotion and facial expression recognition. In addition, the use of a stepwise regression technique for selecting variables and regions of interest substantially boosted performance. 1
3 0.49736476 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
Author: Javier R. Movellan, Paul Mineiro, Ruth J. Williams
Abstract: This paper explores a framework for recognition of image sequences using partially observable stochastic differential equation (SDE) models. Monte-Carlo importance sampling techniques are used for efficient estimation of sequence likelihoods and sequence likelihood gradients. Once the network dynamics are learned, we apply the SDE models to sequence recognition tasks in a manner similar to the way Hidden Markov models (HMMs) are commonly applied. The potential advantage of SDEs over HMMS is the use of continuous state dynamics. We present encouraging results for a video sequence recognition task in which SDE models provided excellent performance when compared to hidden Markov models. 1
4 0.47174621 82 nips-2000-Learning and Tracking Cyclic Human Motion
Author: Dirk Ormoneit, Hedvig Sidenbladh, Michael J. Black, Trevor Hastie
Abstract: We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into
5 0.44471204 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
6 0.44362888 103 nips-2000-Probabilistic Semantic Video Indexing
7 0.44219685 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
8 0.43074632 6 nips-2000-A Neural Probabilistic Language Model
9 0.4296675 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
10 0.42718041 146 nips-2000-What Can a Single Neuron Compute?
11 0.42255074 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
12 0.4174419 28 nips-2000-Balancing Multiple Sources of Reward in Reinforcement Learning
13 0.41640425 26 nips-2000-Automated State Abstraction for Options using the U-Tree Algorithm
14 0.41570896 138 nips-2000-The Use of Classifiers in Sequential Inference
15 0.41566223 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
16 0.41493455 49 nips-2000-Explaining Away in Weight Space
17 0.4130111 1 nips-2000-APRICODD: Approximate Policy Construction Using Decision Diagrams
18 0.41114205 122 nips-2000-Sparse Representation for Gaussian Process Models
19 0.4103696 142 nips-2000-Using Free Energies to Represent Q-values in a Multiagent Reinforcement Learning Task
20 0.410182 127 nips-2000-Structure Learning in Human Causal Induction