nips nips2000 nips2000-137 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
Reference: text
sentIndex sentText sentNum sentScore
1 The Unscented Particle Filter Rudolph van der Merwe Oregon Graduate Institute Electrical and Computer Engineering P. [sent-1, score-0.12]
2 edu Nando de Freitas UC Berkeley, Computer Science 387 Soda Hall, Berkeley CA 94720-1776 USA jfgf@cs. [sent-5, score-0.117]
3 edu Abstract In this paper, we propose a new particle filter based on sequential importance sampling. [sent-14, score-0.846]
4 The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. [sent-15, score-0.701]
5 Firstly, it makes efficient use of the latest available information and, secondly, it can have heavy tails. [sent-17, score-0.064]
6 As a result, we find that the algorithm outperforms standard particle filtering and other nonlinear filtering methods very substantially. [sent-18, score-0.689]
7 This experimental finding is in agreement with the theoretical convergence proof for the algorithm. [sent-19, score-0.049]
8 The algorithm also includes resampling and (possibly) Markov chain Monte Carlo (MCMC) steps. [sent-20, score-0.051]
9 1 Introduction Filtering is the problem of estimating the states (parameters or hidden variables) of a system as a set of observations becomes available on-line. [sent-21, score-0.057]
10 This problem is of paramount importance in many fields of science, engineering and finance. [sent-22, score-0.287]
11 To solve it, one begins by modelling the evolution of the system and the noise in the measurements. [sent-23, score-0.035]
12 The resulting models typically exhibit complex nonlinearities and non-Gaussian distributions, thus precluding analytical solution. [sent-24, score-0.03]
13 The best known algorithm to solve the problem of non-Gaussian, nonlinear filtering (filtering for short) is the extended Kalman filter (EKF) (Anderson and Moore 1979). [sent-25, score-0.422]
14 This filter is based upon the principle of linearising the measurements and evolution models using Taylor series expansions. [sent-26, score-0.277]
15 The series approximations in the EKF algorithm can, however, lead to poor representations of the nonlinear functions and probability distributions of interest. [sent-27, score-0.076]
16 Recently, Julier and Uhlmann (Julier and Uhlmann 1997) have introduced a filter founded on the intuition that it is easier to approximate a Gaussian distribution than it is to approximate arbitrary nonlinear functions. [sent-29, score-0.318]
17 They named this filter the unscented Kalman filter (UKF). [sent-30, score-0.724]
18 They have shown that the UKF leads to more accurate results than the EKF and that in particular it generates much better estimates of the covariance of the states (the EKF seems to underestimate this quantity). [sent-31, score-0.164]
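To make the unscented intuition concrete, here is a minimal editorial sketch (not code from the paper) of the unscented transform that underlies the UKF: a small set of deterministically chosen sigma points is pushed through the nonlinearity, and the transformed points are used to estimate the mean and covariance of the output. The (alpha, beta, kappa) scaling parameters and their defaults are common illustrative choices, not values quoted from the paper.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinearity f via sigma points."""
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n

    # 2n+1 deterministic sigma points: the mean plus/minus scaled square-root columns.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    # Standard weights for the mean and covariance estimates.
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Push every sigma point through the nonlinearity and re-estimate the moments.
    y = np.array([f(s) for s in sigma])
    y_mean = w_m @ y
    diff = y - y_mean
    y_cov = (w_c[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: a 2-D Gaussian pushed through a mildly nonlinear map.
f = lambda x: np.array([np.sin(x[0]) + x[1], 0.5 * x[1] ** 2])
m, P = unscented_transform(f, np.zeros(2), np.eye(2))
```

A UKF results from applying this transform, rather than a first-order Taylor linearisation, to the process and measurement models inside the usual Kalman prediction and update steps, which is one reason its covariance estimates tend to be less optimistic than the EKF's.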
19 Another popular solution strategy for the general filtering problem is to use sequential Monte Carlo methods, also known as particle filters (PFs): see for example (Doucet, Godsill and Andrieu 2000, Doucet, de Freitas and Gordon 2001, Gordon, Salmond and Smith 1993). [sent-33, score-0.758]
20 These methods allow for a complete representation of the posterior distribution of the states, so that any statistical estimates, such as the mean, modes, kurtosis and variance, can be easily computed. [sent-34, score-0.028]
21 They can, therefore, deal with any nonlinearities or distributions. [sent-35, score-0.03]
22 PFs rely on importance sampling and, as a result, require the design of proposal distributions that can approximate the posterior distribution reasonably well. In general, it is hard to design such proposals. [sent-36, score-0.448]
23 The most common strategy is to sample from the probabilistic model of the states' evolution (transition prior). [sent-37, score-0.065]
24 This strategy can, however, fail if the new measurements appear in the tail of the prior or if the likelihood is too peaked in comparison to the prior. [sent-38, score-0.061]
25 This situation does indeed arise in several areas of engineering and finance, where one can encounter sensors that are very accurate (peaked likelihoods) or data that undergoes sudden changes (nonstationarities): see for example (Pitt and Shephard 1999, Thrun 2000). [sent-39, score-0.122]
26 To overcome this problem, several techniques based on linearisation have been proposed in the literature (de Freitas 1999, de Freitas, Niranjan, Gee and Doucet 2000, Doucet et al. 2000, Pitt and Shephard 1999). [sent-40, score-0.117]
27 For example, in (de Freitas et al. 2000), the EKF Gaussian approximation is used as the proposal distribution for a PF. [sent-41, score-0.228]
28 In this paper, we follow the same approach, but replace the EKF proposal by a UKF proposal. [sent-42, score-0.2]
29 The resulting filter should perform better not only because the UKF is more accurate, but because it also allows one to control the rate at which the tails of the proposal distribution go to zero. [sent-43, score-0.559]
30 It thus becomes possible to adopt heavier-tailed distributions as proposals and, consequently, obtain better importance samplers (Gelman, Carlin, Stern and Rubin 1995). [sent-44, score-0.345]
31 Readers are encouraged to consult our technical report for further results and implementation details (van der Merwe, Doucet, de Freitas and Wan 2000)1. [sent-45, score-0.187]
32 2 Dynamic State Space Model We apply our algorithm to general state space models consisting of a transition equation p(x_t | x_{t-1}) and a measurement equation p(y_t | x_t). [sent-46, score-0.1]
33 That is, the states follow a Markov process and the observations are assumed to be independent given the states. [sent-47, score-0.057]
34 The mappings f : R^{n_x} × R^{n_v} → R^{n_x} and h : (R^{n_x} × R^{n_u}) × R^{n_n} → R^{n_y} represent the deterministic process and measurement models. [sent-49, score-0.042]
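Written out, the state-space form these mappings define is the standard one sketched below; the symbols v_t (process noise), u_t (a known exogenous input) and n_t (measurement noise) are inferred from the definitions of f and h above rather than quoted from the paper:

```latex
x_t = f(x_{t-1}, v_{t-1}), \qquad y_t = h(x_t, u_t, n_t).
```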
35 To complete the specification of the model, the prior distribution (at t = 0), p(x_0), must also be specified. [1] The TR and software are available at http://www. [sent-50, score-0.059]
36 Our goal will be to approximate the posterior distribution p(x_{0:t} | y_{1:t}) and one of its marginals, the filtering density p(x_t | y_{1:t}), where y_{1:t} = {y_1, y_2, ..., y_t}. [sent-55, score-0.16]
37 By computing the filtering density recursively, we do not need to keep track of the complete history of the states. [sent-58, score-0.132]
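The recursion being referred to is the standard two-step Bayesian filtering recursion (prediction followed by a measurement update), written generically as:

```latex
p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1},
\qquad
p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1}).
```

Particle filters approximate exactly this recursion by Monte Carlo when the integrals cannot be computed in closed form.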
38 3 Particle Filtering Particle filters allow us to approximate the posterior distribution p(x_{0:t} | y_{1:t}) using a set of N weighted samples (particles) {x_{0:t}^{(i)}, w_t^{(i)}}, i = 1, ..., N [sent-59, score-0.124]
39 , which are drawn from an importance proposal distribution q(x_{0:t} | y_{1:t}). [sent-62, score-0.393]
40 These samples are propagated in time as shown in Figure 1. [sent-63, score-0.026]
41 This is done in a rigorous setting that ensures convergence according to the strong law of large numbers, where →_{a.s.} denotes almost sure convergence and f_t : R^{(t+1) n_x} → R^{n_{f_t}} is some function of interest. [sent-65, score-0.098]
42 For example, it could be the conditional mean, in which case f_t(x_{0:t}) = x_{0:t}, or the conditional covariance of x_t, with f_t(x_{0:t}) = x_t x_t^T − E_{p(x_t|y_{1:t})}[x_t] E_{p(x_t|y_{1:t})}[x_t]^T. [sent-66, score-0.026]
44 Figure 1: In this example (N = 10 particles), a particle filter starts at time t−1 with an unweighted measure {x_{t−1}^{(i)}, N^{−1}}, which provides an approximation of p(x_{t−1} | y_{1:t−2}). [sent-70, score-0.684]
45 For each particle we compute the importance weights using the information at time t - 1. [sent-71, score-0.568]
46 Subsequently, a resampling step selects only the "fittest" particles to obtain the unweighted measure {x_{t-1}^{(i)}, N^{-1}}, which is still an approximation of p(x_{t-1} | y_{1:t-1}). [sent-74, score-0.301]
47 Finally, the sampling (prediction) step introduces variety, resulting in the measure {x_t^{(i)}, N^{-1}}. [sent-75, score-0.068]
48 Sequential importance sampling step • For i = 1, ..., N, sample x_t^{(i)} ~ q(x_t | x_{0:t-1}^{(i)}, y_{1:t}) and [sent-79, score-0.233]
49 evaluate the importance weights up to a normalizing constant: w_t^{(i)} \propto \frac{p(x_{0:t}^{(i)} \mid y_{1:t})}{q(x_t^{(i)} \mid x_{0:t-1}^{(i)}, y_{1:t})\, p(x_{0:t-1}^{(i)} \mid y_{1:t-1})}. [sent-88, score-0.41]
50 Selection step • Multiply/suppress samples x_{0:t}^{(i)} with high/low importance weights w_t^{(i)}, respectively, [sent-96, score-0.258]
51 to obtain N random samples x_{0:t}^{(i)} approximately distributed according to p(x_{0:t}^{(i)} | y_{1:t}). [sent-98, score-0.052]
52 MCMC step • Apply a Markov transition kernel with invariant distribution given by p(x_{0:t} | y_{1:t}) to obtain new samples x_{0:t}^{(i)}. [sent-100, score-0.123]
53 The simplest choice is to just sample from the prior, p(x_t | x_{t-1}), in which case the importance weight is equal to the likelihood, p(y_t | y_{1:t-1}, x_{0:t}). [sent-102, score-0.165]
54 The selection (resampling) step is used to eliminate the particles having low importance weights and to multiply particles having high importance weights (Gordon et al. [sent-104, score-0.659]
55 This is done by mapping the weighted measure {x_t^{(i)}, w_t^{(i)}} to an unweighted measure {x_t^{(i)}, N^{-1}} that provides an approximation of p(x_t | y_{1:t}). [sent-106, score-0.065]
56 After the selection scheme at time t, we obtain N particles distributed approximately (marginally) according to p(x_{0:t} | y_{1:t}). [sent-107, score-0.144]
57 One can, therefore, apply a Markov kernel (for example, a Metropolis or Gibbs kernel) to each particle and the resulting distribution will still be p(x_{0:t} | y_{1:t}). [sent-108, score-0.405]
58 This step usually allows us to obtain better results and to treat more complex models (de Freitas 1999). [sent-109, score-0.093]
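As a concrete illustration of the generic algorithm above, the sketch below implements one step of a bootstrap-style particle filter in which the transition prior is used as the proposal (so the incremental weight reduces to the likelihood) and selection is done by multinomial resampling; the MCMC step is omitted, and the `sample_transition` and `likelihood` callables are model-specific placeholders, not functions from the paper.

```python
import numpy as np

def particle_filter_step(particles, weights, y_t, sample_transition, likelihood, rng):
    """One sequential-importance-sampling + selection step of a generic particle filter.

    particles : (N, n_x) array of samples x_{t-1}^{(i)}
    weights   : (N,) normalised importance weights from the previous step
    y_t       : current observation
    sample_transition(x_prev, rng) -> a draw from p(x_t | x_{t-1})  (prior proposal)
    likelihood(y_t, x_t)           -> p(y_t | x_t)
    """
    N = particles.shape[0]

    # Importance sampling: propose from the transition prior, so the
    # incremental weight is simply the likelihood p(y_t | x_t).
    proposed = np.array([sample_transition(x, rng) for x in particles])
    weights = weights * np.array([likelihood(y_t, x) for x in proposed])
    weights = weights / weights.sum()

    # Selection: multiply/suppress particles according to their weights,
    # returning to an unweighted (uniform-weight) measure.
    idx = rng.choice(N, size=N, p=weights)
    return proposed[idx], np.full(N, 1.0 / N)
```

A full run would draw N initial samples from the prior p(x_0) with uniform weights and then call this function once per observation.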
59 4 The Unscented Particle Filter As mentioned earlier, using the transition prior as proposal distribution can be inefficient. [sent-110, score-0.287]
60 As illustrated in Figure 2, if we fail to use the latest available information to propose new values for the states, only a few particles might survive. [sent-111, score-0.148]
61 It is therefore of paramount importance to move the particles towards the regions of high likelihood. [sent-112, score-0.398]
62 To achieve this, we propose to use the unscented filter as proposal distribution. [sent-113, score-0.682]
63 For exact details, please refer to our technical report (van der Merwe et al. [sent-115, score-0.101]
64 [Figure 2 illustration; panel labels: Prior, Likelihood.] Figure 2: The UKF proposal distribution allows us to move the samples in the prior to regions of high likelihood. [sent-117, score-0.335]
65 This is of paramount importance if the likelihood happens to lie in one of the tails of the prior distribution, or if it is too narrow (low measurement error). [sent-118, score-0.366]
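For orientation, a standard statement of the kind of convergence bound being discussed (the constant and norm here are generic, not quoted from the paper) is: for a bounded test function f_t there exists a constant C_t, independent of the number of particles N, such that

```latex
\mathbb{E}\!\left[\left(\frac{1}{N}\sum_{i=1}^{N} f_t\!\left(x_{0:t}^{(i)}\right)
 - \mathbb{E}_{p(x_{0:t}\mid y_{1:t})}\!\left[f_t(x_{0:t})\right]\right)^{2}\right]
 \;\le\; C_t\,\frac{\|f_t\|_{\infty}^{2}}{N}.
```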
66 This convergence result shows that, under very loose assumptions, convergence of the (unscented) particle filter is ensured and that the convergence rate of the method is independent of the dimension of the state-space. [sent-123, score-0.766]
67 The only crucial assumption is to ensure that w_t is upper bounded, that is, that the proposal distribution q(x_t | x_{0:t-1}, y_{1:t}) has heavier tails than p(y_t | x_t) p(x_t | x_{t-1}). [sent-124, score-0.347]
68 Considering this theoretical result, it is not surprising that the UKF (which has heavier tails than the EKF) can yield better estimates. [sent-125, score-0.145]
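To make the construction concrete, here is a hedged sketch of one step of an unscented particle filter: each particle carries its own mean and covariance, a UKF update that uses the newest observation y_t turns these into a Gaussian proposal, and the importance weight corrects for the mismatch between that proposal and p(y_t | x_t) p(x_t | x_{t-1}). The helper `ukf_predict_update` and the two log-density callables are placeholders (the UKF update could be built from the unscented transform sketched earlier); none of these names come from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def upf_step(particles, means, covs, weights, y_t,
             ukf_predict_update, trans_logpdf, lik_logpdf, rng):
    """One unscented-particle-filter step (sketch).

    For each particle i:
      1. run a UKF prediction + measurement update (using y_t) to get a
         Gaussian proposal N(m_i, P_i) that incorporates the newest observation;
      2. sample the new state from that proposal;
      3. weight by p(y_t | x) p(x | x_prev) / q(x) to correct for the proposal.
    """
    N = particles.shape[0]
    log_w = np.empty(N)
    new_particles = np.empty_like(particles)

    for i in range(N):
        m_i, P_i = ukf_predict_update(means[i], covs[i], y_t)   # proposal moments
        x_new = rng.multivariate_normal(m_i, P_i)                # draw from the proposal
        log_q = mvn.logpdf(x_new, mean=m_i, cov=P_i)
        log_w[i] = (np.log(weights[i]) + lik_logpdf(y_t, x_new)
                    + trans_logpdf(x_new, particles[i]) - log_q)
        new_particles[i] = x_new
        means[i], covs[i] = m_i, P_i

    # Normalise in log space for numerical stability, then resample (selection step).
    w = np.exp(log_w - log_w.max())
    w = w / w.sum()
    idx = rng.choice(N, size=N, p=w)
    return new_particles[idx], means[idx], covs[idx], np.full(N, 1.0 / N)
```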
69 Given only the noisy observations, y_t, a few different filters were used to estimate the underlying clean state sequence x_t for t = 1, ... [sent-131, score-0.1]
70 The table shows the means and variances of the mean-square error (MSE) of the state estimates. [sent-155, score-0.03]
71 Figure 3 compares the estimates generated from a single run of the different particle filters. [sent-157, score-0.421]
72 The superior performance of the unscented particle filter is clearly evident. [sent-158, score-0.859]
73 Figure 3: Plot of the state estimates generated by different filters (x-axis: time). [sent-159, score-0.074]
74 Figure 4 shows the estimates of the state covariance generated by a stand-alone EKF and UKF for this problem. [sent-160, score-0.1]
75 Notice how the EKF's estimates are consistently smaller than those generated by the UKF. [sent-161, score-0.044]
76 This property makes the UKF better suited than the EKF for proposal distribution generation within the particle filter framework. [sent-162, score-0.903]
77 [Figure 4 plot: "Estimates of state covariance", comparing EKF and UKF over time.] [sent-163, score-0.056]
78 Figure 4: EKF and UKF estimates of state covariance. [sent-177, score-0.074]
79 7 Conclusions We proposed a new particle filter that uses unscented filters as proposal distributions. [sent-178, score-1.129]
80 The convergence proof and empirical evidence clearly demonstrate that this algorithm can lead to substantial improvements over other nonlinear filtering algorithms. [sent-179, score-0.229]
81 The algorithm is well suited to engineering applications in which the sensors are very accurate but nonlinear, and to financial time series, where outliers and heavy-tailed distributions play a significant role in the analysis of the data. [sent-180, score-0.258]
82 For further details and experiments, please refer to our report (van der Merwe et al. [sent-181, score-0.101]
83 Crisan, D. and Doucet, A. (2000). Convergence of generalized particle filters, Technical Report CUED/F-INFENG/TR 381, Cambridge University Engineering Department. [sent-192, score-0.377]
84 de Freitas, J. F. G. (1999). Bayesian Methods for Neural Networks, PhD thesis, Department of Engineering, Cambridge University, Cambridge, UK. [sent-197, score-0.117]
85 Doucet, A., Godsill, S. and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering, Statistics and Computing 10(3): 197-208. [sent-218, score-0.089]
86 Gordon, N. J., Salmond, D. J. and Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proceedings-F 140(2): 107-113. [sent-236, score-0.03]
87 Julier, S. J. and Uhlmann, J. K. (1997). A new extension of the Kalman filter to nonlinear systems, in Proceedings of AeroSense: The 11th International Symposium on Simulation and Controls, Orlando, Florida. [sent-242, score-0.242]
88 Pitt, M. K. and Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters, Journal of the American Statistical Association 94(446): 590-599. [sent-248, score-0.377]
89 van der Merwe, R., Doucet, A., de Freitas, J. F. G. and Wan, E. (2000). The unscented particle filter, Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department. [sent-263, score-0.617]
wordName wordTfidf (topN-words)
[('particle', 0.377), ('ukf', 0.327), ('doucet', 0.261), ('ekf', 0.244), ('filter', 0.242), ('freitas', 0.24), ('unscented', 0.24), ('xo', 0.223), ('proposal', 0.2), ('importance', 0.165), ('xt', 0.145), ('filtering', 0.132), ('particles', 0.118), ('de', 0.117), ('merwe', 0.109), ('tiyl', 0.109), ('mcmc', 0.094), ('yt', 0.089), ('gordon', 0.079), ('iyl', 0.075), ('wt', 0.074), ('der', 0.07), ('filters', 0.07), ('kalman', 0.067), ('julier', 0.065), ('paramount', 0.065), ('shephard', 0.065), ('uhlmann', 0.065), ('unweighted', 0.065), ('yl', 0.064), ('tails', 0.063), ('sequential', 0.062), ('carlo', 0.058), ('monte', 0.058), ('engineering', 0.057), ('heavier', 0.056), ('wan', 0.056), ('pitt', 0.056), ('resampling', 0.051), ('mse', 0.051), ('van', 0.05), ('move', 0.05), ('convergence', 0.049), ('nonlinear', 0.048), ('estimates', 0.044), ('andrieu', 0.044), ('crisan', 0.044), ('gee', 0.044), ('godsill', 0.044), ('niranjan', 0.044), ('pfs', 0.044), ('rnn', 0.044), ('rnu', 0.044), ('rnv', 0.044), ('rnz', 0.044), ('salmond', 0.044), ('stern', 0.044), ('tailed', 0.044), ('xtiyl', 0.044), ('measurement', 0.042), ('generic', 0.042), ('step', 0.041), ('vt', 0.038), ('accurate', 0.038), ('carlin', 0.038), ('gelman', 0.038), ('il', 0.037), ('evolution', 0.035), ('heavy', 0.034), ('cx', 0.034), ('oregon', 0.034), ('rn', 0.033), ('please', 0.031), ('rubin', 0.031), ('thrun', 0.031), ('smith', 0.031), ('prior', 0.031), ('states', 0.03), ('state', 0.03), ('suited', 0.03), ('latest', 0.03), ('peaked', 0.03), ('berkeley', 0.03), ('ixt', 0.03), ('nonlinearities', 0.03), ('distribution', 0.028), ('graduate', 0.028), ('distributions', 0.028), ('transition', 0.028), ('observations', 0.027), ('sampling', 0.027), ('moore', 0.027), ('sensors', 0.027), ('covariance', 0.026), ('samples', 0.026), ('better', 0.026), ('weights', 0.026), ('obtain', 0.026), ('nt', 0.025), ('eds', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 137 nips-2000-The Unscented Particle Filter
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
2 0.26623157 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
Author: Trausti T. Kristjansson, Brendan J. Frey
Abstract: Condensation, a form of likelihood-weighted particle filtering, has been successfully used to infer the shapes of highly constrained
3 0.11075812 49 nips-2000-Explaining Away in Weight Space
Author: Peter Dayan, Sham Kakade
Abstract: Explaining away has mostly been considered in terms of inference of states in belief networks. We show how it can also arise in a Bayesian context in inference about the weights governing relationships such as those between stimuli and reinforcers in conditioning experiments such as bacA, 'Ward blocking. We show how explaining away in weight space can be accounted for using an extension of a Kalman filter model; provide a new approximate way of looking at the Kalman gain matrix as a whitener for the correlation matrix of the observation process; suggest a network implementation of this whitener using an architecture due to Goodall; and show that the resulting model exhibits backward blocking.
4 0.09863627 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
5 0.095446043 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
Author: Frank Dellaert, Steven M. Seitz, Sebastian Thrun, Charles E. Thorpe
Abstract: When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one
6 0.090810992 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
7 0.084624454 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
8 0.080876783 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
9 0.079969704 134 nips-2000-The Kernel Trick for Distances
10 0.072158203 31 nips-2000-Beyond Maximum Likelihood and Density Estimation: A Sample-Based Criterion for Unsupervised Learning of Complex Models
11 0.071269035 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
12 0.070679806 122 nips-2000-Sparse Representation for Gaussian Process Models
13 0.070412792 82 nips-2000-Learning and Tracking Cyclic Human Motion
14 0.059319735 96 nips-2000-One Microphone Source Separation
15 0.057592906 112 nips-2000-Reinforcement Learning with Function Approximation Converges to a Region
16 0.056940131 54 nips-2000-Feature Selection for SVMs
17 0.055582624 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice
18 0.054993201 80 nips-2000-Learning Switching Linear Models of Human Motion
19 0.053721644 2 nips-2000-A Comparison of Image Processing Techniques for Visual Speech Recognition Applications
20 0.048501495 140 nips-2000-Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles
topicId topicWeight
[(0, 0.174), (1, -0.044), (2, 0.054), (3, 0.025), (4, -0.001), (5, 0.049), (6, 0.012), (7, 0.099), (8, -0.058), (9, -0.158), (10, 0.073), (11, 0.075), (12, 0.141), (13, -0.042), (14, -0.104), (15, -0.505), (16, 0.107), (17, 0.091), (18, 0.02), (19, -0.07), (20, 0.026), (21, -0.095), (22, -0.076), (23, 0.131), (24, 0.195), (25, -0.098), (26, 0.108), (27, 0.186), (28, 0.11), (29, 0.077), (30, -0.167), (31, -0.092), (32, -0.007), (33, 0.025), (34, 0.054), (35, -0.022), (36, 0.047), (37, -0.017), (38, -0.026), (39, 0.037), (40, -0.034), (41, 0.004), (42, -0.007), (43, 0.16), (44, -0.032), (45, 0.019), (46, 0.013), (47, 0.064), (48, 0.044), (49, -0.033)]
simIndex simValue paperId paperTitle
same-paper 1 0.96854055 137 nips-2000-The Unscented Particle Filter
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
2 0.7674647 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
Author: Trausti T. Kristjansson, Brendan J. Frey
Abstract: Condensation, a form of likelihood-weighted particle filtering, has been successfully used to infer the shapes of highly constrained
3 0.36179659 49 nips-2000-Explaining Away in Weight Space
Author: Peter Dayan, Sham Kakade
Abstract: Explaining away has mostly been considered in terms of inference of states in belief networks. We show how it can also arise in a Bayesian context in inference about the weights governing relationships such as those between stimuli and reinforcers in conditioning experiments such as bacA, 'Ward blocking. We show how explaining away in weight space can be accounted for using an extension of a Kalman filter model; provide a new approximate way of looking at the Kalman gain matrix as a whitener for the correlation matrix of the observation process; suggest a network implementation of this whitener using an architecture due to Goodall; and show that the resulting model exhibits backward blocking.
4 0.29851565 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
Author: Carlotta Domeniconi, Jing Peng, Dimitrios Gunopulos
Abstract: Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with finite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classification method to try to minimize bias. We use a Chi-squared distance analysis to compute a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be smoother in the modified neighborhoods, whereby better classification performance can be achieved. The efficacy of our method is validated and compared against other techniques using a variety of real world data. 1
Author: Sepp Hochreiter, Michael Mozer
Abstract: The goal of many unsupervised learning procedures is to bring two probability distributions into alignment. Generative models such as Gaussian mixtures and Boltzmann machines can be cast in this light, as can recoding models such as ICA and projection pursuit. We propose a novel sample-based error measure for these classes of models, which applies even in situations where maximum likelihood (ML) and probability density estimation-based formulations cannot be applied, e.g., models that are nonlinear or have intractable posteriors. Furthermore, our sample-based error measure avoids the difficulties of approximating a density function. We prove that with an unconstrained model, (1) our approach converges on the correct solution as the number of samples goes to infinity, and (2) the expected solution of our approach in the generative framework is the ML solution. Finally, we evaluate our approach via simulations of linear and nonlinear models on mixture of Gaussians and ICA problems. The experiments show the broad applicability and generality of our approach. 1
6 0.28471851 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
7 0.26974088 53 nips-2000-Feature Correspondence: A Markov Chain Monte Carlo Approach
8 0.25105342 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
9 0.21642306 112 nips-2000-Reinforcement Learning with Function Approximation Converges to a Region
10 0.21366625 80 nips-2000-Learning Switching Linear Models of Human Motion
11 0.20387597 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks
12 0.18675007 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
13 0.17307113 82 nips-2000-Learning and Tracking Cyclic Human Motion
14 0.1729276 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice
15 0.17089853 134 nips-2000-The Kernel Trick for Distances
16 0.1704887 122 nips-2000-Sparse Representation for Gaussian Process Models
17 0.16739269 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
18 0.16587171 12 nips-2000-A Support Vector Method for Clustering
19 0.15953603 54 nips-2000-Feature Selection for SVMs
20 0.15775786 96 nips-2000-One Microphone Source Separation
topicId topicWeight
[(10, 0.021), (17, 0.066), (32, 0.044), (33, 0.033), (55, 0.017), (62, 0.052), (65, 0.019), (67, 0.03), (75, 0.01), (76, 0.035), (79, 0.021), (81, 0.507), (90, 0.025), (97, 0.029)]
simIndex simValue paperId paperTitle
1 0.96370226 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Author: Szabolcs KĂĄli, Peter Dayan
Abstract: In memory consolidation, declarative memories which initially require the hippocampus for their recall, ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, that initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.
same-paper 2 0.93776858 137 nips-2000-The Unscented Particle Filter
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
3 0.93612981 141 nips-2000-Universality and Individuality in a Neural Code
Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek
Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1
4 0.86016899 103 nips-2000-Probabilistic Semantic Video Indexing
Author: Milind R. Naphade, Igor Kozintsev, Thomas S. Huang
Abstract: We propose a novel probabilistic framework for semantic video indexing. We define probabilistic multimedia objects (multijects) to map low-level media features to high-level semantic labels. A graphical network of such multijects (multinet) captures scene context by discovering intra-frame as well as inter-frame dependency relations between the concepts. The main contribution is a novel application of a factor graph framework to model this network. We model relations between semantic concepts in terms of their co-occurrence as well as the temporal dependencies between these concepts within video shots. Using the sum-product algorithm [1] for approximate or exact inference in these factor graph multinets, we attempt to correct errors made during isolated concept detection by forcing high-level constraints. This results in a significant improvement in the overall detection performance. 1
5 0.57537669 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
6 0.53897387 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
7 0.48380116 131 nips-2000-The Early Word Catches the Weights
8 0.47089177 146 nips-2000-What Can a Single Neuron Compute?
9 0.45976606 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
10 0.44482642 43 nips-2000-Dopamine Bonuses
11 0.41071099 80 nips-2000-Learning Switching Linear Models of Human Motion
12 0.38697207 49 nips-2000-Explaining Away in Weight Space
13 0.38431439 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
14 0.38335878 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
15 0.37855819 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
16 0.37535736 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
17 0.3679328 30 nips-2000-Bayesian Video Shot Segmentation
18 0.36764035 90 nips-2000-New Approaches Towards Robust and Adaptive Speech Recognition
19 0.36605948 72 nips-2000-Keeping Flexible Active Contours on Track using Metropolis Updates
20 0.3642619 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks