nips nips2001 nips2001-102 knowledge-graph by maker-knowledge-mining

102 nips-2001-KLD-Sampling: Adaptive Particle Filters


Source: pdf

Author: Dieter Fox

Abstract: Over the last few years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on-the-fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and the computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Over the last few years, particle filters have been applied with great success to a variety of state estimation problems. [sent-3, score-0.586]

2 We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on-the-fly. [sent-4, score-0.726]

3 The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. [sent-5, score-0.568]

4 Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. [sent-7, score-0.847]

5 Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique. [sent-9, score-1.867]

6 1 Introduction Estimating the state of a dynamic system based on noisy sensor measurements is extremely important in areas as different as speech recognition, target tracking, mobile robot navigation, and computer vision. [sent-10, score-0.823]

7 Over the last few years, particle filters have been applied with great success to a variety of state estimation problems (see [3] for a recent overview). [sent-11, score-0.586]

8 It is due to this representation that particle filters combine efficiency with the ability to represent a wide range of probability densities. [sent-14, score-0.485]

9 The efficiency of particle filters lies in the way they place computational resources. [sent-15, score-0.485]

10 By sampling in proportion to likelihood, particle filters focus the computational resources on regions with high likelihood, where things really matter. [sent-16, score-0.551]

11 So far, however, an important source for increasing the efficiency of particle filters has only rarely been studied: Adapting the number of samples over time. [sent-17, score-0.738]

12 While variable sample sizes have been discussed in the context of genetic algorithms [10] and interacting particle filters [2], most existing approaches to particle filters use a fixed number of samples during the whole state estimation process. [sent-18, score-1.497]

13 An adaptive approach for particle filters has been applied by [8] and [5]. [sent-20, score-0.554]

14 This approach adjusts the number of samples based on the likelihood of observations, which has some important shortcomings, as we will show. [sent-21, score-0.345]

15 In this paper we introduce a novel approach to adapting the number of samples over time. [sent-22, score-0.342]

16 Our technique determines the number of samples based on statistical bounds on the sample-based approximation quality. [sent-23, score-0.277]

17 Extensive experiments using a mobile robot indicate that our approach yields significant improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique. [sent-24, score-1.505]

18 The remainder of this paper is organized as follows: In the next section we will outline the basics of particle filters and their application to mobile robot localization. [sent-25, score-1.115]

19 In Section 3, we will introduce our novel technique for adaptive particle filters. [sent-26, score-0.514]

20 2 Particle filters for Bayesian filtering and robot localization: Particle filters address the problem of estimating the state of a dynamical system from sensor measurements. [sent-28, score-0.937]

21 The goal of particle filters is to estimate a posterior probability density over the state space conditioned on the data collected so far. [sent-29, score-0.67]

22 Particle filters represent this belief by a set $S_t$ of weighted samples distributed according to $Bel(x_t)$: $S_t = \{\langle x_t^{(i)}, w_t^{(i)} \rangle \mid i = 1, \dots, n\}$, where each $x_t^{(i)}$ is a state and the $w_t^{(i)}$ are non-negative importance weights. [sent-33, score-0.33]

23 Importance sampling: weight each sample $x_t^{(i)}$ by the importance weight $w_t^{(i)}$, the likelihood of the sample given the measurement $z_t$. [sent-41, score-0.386]

24 Afterwards, the importance weights of the samples are normalized so that they sum up to 1. [sent-43, score-0.322]
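The update loop summarized in sentences 22-24 can be made concrete. Below is a minimal Python sketch of one generic sampling/importance-resampling iteration, not the paper's code: `motion_model` and `measurement_likelihood` are hypothetical callables standing in for the models of Section 2, and `rng` is a numpy Generator (e.g. `np.random.default_rng(0)`).

```python
import numpy as np

def particle_filter_update(particles, weights, control, measurement,
                           motion_model, measurement_likelihood, rng):
    """One generic SIR particle filter update: resample, predict, weight,
    normalize. `motion_model` and `measurement_likelihood` are hypothetical
    callables standing in for p(x_t | x_{t-1}, u_t) and p(z_t | x_t)."""
    n = len(particles)
    # Resampling: draw predecessors in proportion to the previous weights.
    idx = rng.choice(n, size=n, p=weights)
    # Prediction: propagate each drawn sample through the motion model.
    predicted = np.array([motion_model(particles[i], control, rng) for i in idx])
    # Importance sampling: weight each sample by the measurement likelihood.
    w = np.array([measurement_likelihood(measurement, x) for x in predicted])
    # Normalize so the importance weights sum up to 1 (cf. sentence 24).
    return predicted, w / w.sum()
```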

25 Particle filters for mobile robot localization: We use the problem of mobile robot localization to illustrate and test our approach to adaptive particle filters. [sent-45, score-2.452]

26 Robot localization is the problem of estimating a robot’s pose relative to a map of its environment. [sent-46, score-0.367]

27 The mobile robot localization problem comes in different flavors. [sent-48, score-0.949]

28 In the simplest flavor, position tracking, the initial robot pose is known, and localization seeks to correct small, incremental errors in a robot’s odometry. [sent-50, score-0.812]

29 More challenging is the global localization problem, where a robot is not told its initial pose, but instead has to determine it from scratch. [sent-51, score-0.847]

30 [Figure 1 residue: panel labels (a)-(d) with markers “Robot position” and “Start”; the caption follows in 31.] [sent-52, score-0.303]

31 b)-d) Map of an office environment along with a series of sample sets representing the robot’s belief during global localization using sonar sensors (samples are projected into 2D). [sent-54, score-0.651]

32 b) After moving 5m, the robot is still highly uncertain about its position and the samples are spread through major parts of the free-space. [sent-56, score-0.868]

33 c) Even as the robot reaches the upper left corner of the map, its belief is still concentrated around four possible locations. [sent-57, score-0.566]

34 d) Finally, after moving approximately 55m, the ambiguity is resolved and the robot knows where it is. [sent-58, score-0.494]

35 In the context of robot localization, the state of the system is the robot’s pose, typically represented by a position in two-dimensional Cartesian space together with the robot’s heading direction. [sent-60, score-0.52]

36 The state transition probability $p(x_t \mid x_{t-1}, u_{t-1})$ describes how the position of the robot changes based on information collected by the robot’s wheel encoders. [sent-61, score-0.649]

37 The perceptual model $p(z_t \mid x_t)$ describes the likelihood of making the observation $z_t$ given that the robot is at location $x_t$. [sent-62, score-0.519]
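To make sentences 36-37 concrete for the localization setting, here is a toy Python sketch of the two models; the noise levels and the `expected_range` map function are illustrative assumptions, not the calibrated models used in the paper.

```python
import numpy as np

def sample_motion_model(pose, odometry, rng, noise=(0.1, 0.05)):
    """Toy odometry model: propagate a pose (x, y, theta) by a reading
    (translation d, rotation dtheta) with Gaussian noise. The noise
    levels are illustrative assumptions, not the paper's calibration."""
    x, y, theta = pose
    d, dtheta = odometry
    theta_new = theta + dtheta + rng.normal(0.0, noise[1])
    d_noisy = d + rng.normal(0.0, noise[0])
    return (x + d_noisy * np.cos(theta_new),
            y + d_noisy * np.sin(theta_new),
            theta_new)

def range_likelihood(z, pose, expected_range, sigma=0.2):
    """Toy perceptual model p(z | x): Gaussian likelihood of a range
    reading z around the range the map predicts at `pose`.
    `expected_range` is a hypothetical map ray-casting function."""
    mu = expected_range(pose)
    return float(np.exp(-0.5 * ((z - mu) / sigma) ** 2)
                 / (sigma * np.sqrt(2.0 * np.pi)))
```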

38 Figure 1 illustrates particle filters for mobile robot localization. [sent-64, score-1.115]

39 Such a high number of samples might be needed to accurately represent the belief during early stages of localization (cf. Fig. 1(a)). [sent-67, score-0.732]

40 However, only a small fraction of this number suffices to track the position of the robot once it knows where it is (cf. Fig. 1(d)). [sent-68, score-0.654]

41 3 Adaptive particle filters with variable sample set sizes: The localization example in the previous section illustrates that the efficiency of particle filters can be greatly increased by changing the number of samples over time. [sent-71, score-1.689]

42 Before we introduce our approach to adaptive particle filters, let us first discuss an existing technique. [sent-72, score-0.554]

43 3.1 Likelihood-based adaptation: We call this approach likelihood-based adaptation since it determines the number of samples such that the sum of non-normalized likelihoods (importance weights) exceeds a pre-specified threshold. [sent-74, score-0.499]
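A hedged Python sketch of this likelihood-based scheme, reusing the hypothetical `motion_model` and `measurement_likelihood` callables from the sketches above; the threshold value and the `n_max` safeguard are illustrative assumptions.

```python
import numpy as np

def likelihood_adaptation_update(prev_particles, prev_weights, control,
                                 measurement, motion_model,
                                 measurement_likelihood, rng,
                                 weight_sum_threshold=50.0,
                                 n_max=100_000):
    """Likelihood-based adaptation in the style of [8] and [5]: keep
    generating samples until the sum of non-normalized importance
    weights exceeds a pre-specified threshold."""
    particles, weights, total = [], [], 0.0
    while total < weight_sum_threshold and len(particles) < n_max:
        # Draw a predecessor in proportion to the previous (normalized) weights.
        i = rng.choice(len(prev_particles), p=prev_weights)
        x = motion_model(prev_particles[i], control, rng)
        w = measurement_likelihood(measurement, x)
        particles.append(x)
        weights.append(w)
        total += w
    w = np.asarray(weights)
    return particles, w / w.sum()
```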

44 This approach has been applied to dynamic Bayesian networks [8] and mobile robot localization [5]. [sent-75, score-0.989]

45 The intuition behind this approach can be illustrated in the robot localization context: If the sample set is well in tune with the sensor reading, each individual importance weight is large and the sample set remains small. [sent-76, score-1.233]

46 If, however, the sensor reading carries a lot of surprise, as is the case when the robot is globally uncertain or when it lost track of its position, the individual sample weights are small and the sample set becomes large. [sent-79, score-0.911]

47 The likelihood-based adaptation directly relates to the property that the variance of the importance sampler is a function of the mismatch between the proposal distribution and the distribution that is being approximated. [sent-80, score-0.262]

48 Consider, for example, the ambiguous belief state consisting of four distinct sample clusters shown in Fig. 1(c). [sent-82, score-0.272]

49 Due to the symmetry of the environment, the average likelihood of a sensor measurement observed in this situation is approximately the same as if the robot knew its position unambiguously (cf. Fig. 1(d)). [sent-84, score-0.772]

50 Likelihood-based adaptation would therefore use the same number of samples in both situations. [sent-86, score-0.345]

51 The belief shown in Fig. 1(b) requires a multiple of the samples needed to represent the belief in Fig. 1(d). [sent-88, score-0.366]

52 3.2 KLD-sampling: The key idea of our approach is to bound the error introduced by the sample-based representation of the particle filter. [sent-91, score-0.584]

53 For such a representation we can determine the number of samples so that the distance between the maximum likelihood estimate (MLE) based on the samples and the true posterior does not exceed a pre-specified threshold $\epsilon$. [sent-93, score-0.775]

54 In what follows, we will first derive the equation for determining the number of samples needed to approximate a discrete probability distribution (see also [12, 7]). [sent-95, score-0.346]

55 Then we will show how to modify the basic particle filter algorithm so that it realizes our adaptation approach. [sent-96, score-0.577]

56 To see this, suppose that $n$ samples are drawn from a discrete distribution with $k$ different bins. [sent-97, score-0.288]

57 Now we can determine the probability that this distance is smaller than $\epsilon$, given that $n$ samples are drawn from the true distribution: since the likelihood ratio statistic $2nK(\hat p, p)$ converges to a $\chi^2_{k-1}$ distribution, $P_p(K(\hat p, p) \le \epsilon) = P_p(2nK(\hat p, p) \le 2n\epsilon) \doteq P(\chi^2_{k-1} \le 2n\epsilon)$ (3). [sent-112, score-0.35]

58 From (7) we see that the required number of samples is proportional to the inverse of the error bound $\epsilon$, and to first order linear in the number $k$ of bins with support. [sent-124, score-0.347]
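Sentence 58 paraphrases Eq. (7), which in the paper takes the Wilson-Hilferty form $n = \frac{k-1}{2\epsilon}\{1 - \frac{2}{9(k-1)} + \sqrt{\frac{2}{9(k-1)}}\, z_{1-\delta}\}^3$, a closed-form approximation of the quantile bound $\chi^2_{k-1,1-\delta}/(2\epsilon)$. A minimal Python sketch, assuming SciPy for the standard-normal quantile; the default $\epsilon$ and $\delta$ are illustrative, not the paper's experimental settings.

```python
from math import sqrt
from scipy.stats import norm

def kld_sample_bound(k, epsilon=0.05, delta=0.01):
    """Samples needed so that, with probability 1 - delta, the K-L
    distance between the MLE over k bins and the true distribution
    stays below epsilon: n = chi2_{k-1, 1-delta} / (2 * epsilon),
    here via the Wilson-Hilferty approximation of the chi-square
    quantile. Default epsilon/delta are illustrative assumptions."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)      # upper 1 - delta quantile of N(0, 1)
    c = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - c + sqrt(c) * z) ** 3
    return int(n) + 1
```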

59 It remains to be shown how to apply this result to particle filters. [sent-127, score-0.485]

60 The problem is that we do not know the true posterior distribution (the estimation of this posterior is the main goal of the particle filter). [sent-128, score-0.709]

61 To be more specific, we estimate $k$ for the proposal distribution resulting from the first two steps of the particle filter update. [sent-132, score-0.536]

62 Sampling is stopped as soon as the number of samples exceeds the threshold specified in (7). [sent-134, score-0.299]

63 An update step of the resulting KLD-sampling particle filter is given in Table 1. [sent-135, score-0.512]

64 The implementation of this modified particle filter is straightforward. [sent-136, score-0.507]

65 We compared it to the likelihood-based adaptation of Section 3.1, and to particle filters with fixed sample set sizes. [sent-143, score-0.605]

66 For the fixed approach we varied the number of samples, for the likelihood-based approach we varied the threshold used to determine the number of samples, and for our approach we varied $\epsilon$, the bound on the K-L distance. [sent-145, score-0.335]

67-69 Table 1: KLD-sampling algorithm (the pseudocode itself was garbled by text extraction; the structure recoverable from the surviving comments is: Inputs: previous sample set, control, and measurement; initialize; then repeat: draw a sample, compute its importance weight, update the normalization factor, insert the sample into the sample set, and update the number of non-empty bins whenever a sample falls into an empty bin; update the number of generated samples, until the K-L bound is reached; return the sample set). [sent-162 to sent-168]
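Since the pseudocode of Table 1 did not survive extraction, the following is a hedged Python reconstruction of the update loop from the surrounding prose. It reuses `kld_sample_bound` from the sketch above and the hypothetical model callables from the earlier sketches; `bin_of` (the grid discretization of the state space) and the `n_min`/`n_max` safeguards are the caller's assumptions.

```python
import numpy as np

def kld_sampling_update(prev_particles, prev_weights, control, measurement,
                        motion_model, measurement_likelihood, bin_of, rng,
                        epsilon=0.05, delta=0.01, n_min=20, n_max=100_000):
    """KLD-sampling update reconstructed from the prose around Table 1:
    sample, weight, grow k when a sample lands in an empty bin, and stop
    once the number of samples reaches the bound of Eq. (7)."""
    particles, weights = [], []
    occupied = set()
    n, bound = 0, n_min
    while n < bound and n < n_max:
        # Resample a predecessor and propagate it through the motion model.
        i = rng.choice(len(prev_particles), p=prev_weights)
        x = motion_model(prev_particles[i], control, rng)
        particles.append(x)
        # Compute the (non-normalized) importance weight.
        weights.append(measurement_likelihood(measurement, x))
        # If the sample falls into an empty bin, update k and the bound.
        b = bin_of(x)
        if b not in occupied:
            occupied.add(b)
            bound = max(n_min, kld_sample_bound(len(occupied), epsilon, delta))
        n += 1
    w = np.asarray(weights)
    return particles, w / w.sum()   # normalized weights
```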

70 Since the ground truth for these posteriors is not available, we compared the sample sets generated by the different approaches with reference sample sets. [sent-170, score-0.328]

71 These reference sets were generated using a particle filter with a fixed number of 200,000 samples (far more than actually needed for position estimation). [sent-171, score-0.963]
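A hedged sketch of how such a comparison could be computed: bin both sample sets on a shared grid and evaluate the K-L distance between the histogram estimates. The grid resolution and the smoothing constant are assumptions, not the paper's exact evaluation procedure.

```python
import numpy as np

def kl_between_sample_sets(samples, reference, bins, smoothing=1e-9):
    """K-L distance between histogram density estimates of two sample
    sets over a shared grid. `bins` is an int or per-dimension bin
    count; the smoothing constant avoids empty-bin divisions."""
    p, edges = np.histogramdd(np.asarray(reference), bins=bins)
    q, _ = np.histogramdd(np.asarray(samples), bins=edges)
    p = (p + smoothing) / (p + smoothing).sum()
    q = (q + smoothing) / (q + smoothing).sum()
    return float(np.sum(p * np.log(p / q)))
```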

72 2(a) plots the average K-L distance along with 95% confidence intervals against the average number of samples for the different algorithms (for clarity, we omitted the large error bars for KL distances above 1.0). [sent-175, score-0.362]

73 Each data point represents the average of 16 global localization runs with different start positions of the robot (each run itself consists of approximately 150 sample set comparisons at the different points in time). [sent-177, score-0.959]

74 The curves also illustrate the superior performance of our approach: the fixed approach requires about 50,000 samples before it converges to a KL distance below 0.25. [sent-179, score-0.322]

75 Our approach converges to the same level using only 3,000 samples on average. [sent-180, score-0.271]

76 This is also an improvement by a factor of 12 compared to the approximately 36,000 samples needed by the likelihood-based approach. [sent-181, score-0.267]

77 In essence, these experiments indicate that our approach, even though based on several approximations, is able to accurately track the true posterior using significantly smaller sample sets on average than the other approaches. [sent-182, score-0.337]

78 To test the performance of our approach under realistic conditions, we performed multiple global localization experiments under real-time considerations using the timestamps in the data sets. [sent-184, score-0.44]

79 [Figure 2 plot residue: panels (a) and (b), x-axis “Average number of samples” from 0 to 100,000; the caption follows in 80-81.] [sent-190]

80 a) The y-axis plots the K-L distance between the reference densities and the sample sets generated by the different approaches (real-time constraints were not considered in this experiment). [sent-192, score-0.285]

81 b) The y-axis represents the average localization error measured by the distance between estimated positions and reference positions. [sent-193, score-0.455]

82 The U-shape in b) is due to the fact that under real-time conditions, an increasing number of samples results in higher update times and therefore loss of sensor data. [sent-194, score-0.378]

83 As a natural measure of the performance of the different algorithms, we determined the distance between the estimated robot position and the corresponding reference position after each iteration. [sent-199, score-0.776]

84 The U-shape of all three graphs nicely illustrates the trade-off involved in choosing the number of samples under real-time constraints: choosing too few samples results in a poor approximation of the underlying posterior, and the robot frequently fails to localize itself. [sent-202, score-1.046]

85 The smallest average localization error is 44cm in contrast to an average error of 79cm and 114cm for the likelihood-based and the fixed approach, respectively. [sent-206, score-0.377]

86 This result is due to the fact that our approach is able to determine the best mix between more samples during early stages of localization and fewer samples during position tracking. [sent-207, score-0.959]

87 5 Conclusions and Future Research: We presented a statistical approach to adapting the sample set size of particle filters on-the-fly. [sent-209, score-0.694]

88 The key idea of the KLD-sampling approach is to bound the error introduced by the sample-based belief representation of the particle filter. [sent-210, score-0.683]

89 At each iteration, our approach generates samples until their number is large enough to guarantee that the K-L distance between the maximum likelihood estimate and the underlying posterior does not exceed a prespecified bound. [sent-211, score-0.492]

90 Thereby, our approach chooses a small number of samples if the density is focused on a small subspace of the state space, and chooses a large number of samples if the samples have to cover a major part of the state space. [sent-212, score-0.986]

91 Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample sets and over a previously introduced adaptation approach [8, 5]. [sent-214, score-1.912]

92 Our approach yields better approximations using only 6% of the samples required by the fixed approach, and using less than 9% of the samples required by the likelihood adaptation approach. [sent-216, score-0.606]

93 So far, KLDsampling has been tested using robot localization only. [sent-217, score-0.786]

94 We conjecture, however, that many other applications of particle filters can benefit from this method. [sent-218, score-0.485]

95 As a result, more samples are used in “important” parts of the state space, while fewer samples are used in other parts. [sent-224, score-0.537]

96 Another area of future research is the thorough investigation of particle filters under real-time conditions. [sent-225, score-0.485]

97 In many applications the rate of incoming sensor data is higher than the update rate of the particle filter. [sent-226, score-0.61]

98 This introduces a trade-off between the number of samples and the amount of sensor data that can be processed (cf. [sent-227, score-0.351]

99 Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. [sent-243, score-0.507]

100 Monte Carlo Localization: Efficient position estimation for mobile robots. [sent-264, score-0.289]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('particle', 0.485), ('robot', 0.467), ('lters', 0.364), ('localization', 0.319), ('samples', 0.231), ('mobile', 0.163), ('sample', 0.12), ('position', 0.101), ('belief', 0.099), ('sensor', 0.098), ('adaptation', 0.092), ('bin', 0.08), ('bins', 0.072), ('posterior', 0.071), ('lter', 0.07), ('importance', 0.069), ('gr', 0.067), ('sampling', 0.066), ('mle', 0.057), ('reference', 0.056), ('ef', 0.055), ('fox', 0.053), ('state', 0.053), ('xed', 0.052), ('likelihood', 0.052), ('distance', 0.051), ('adapting', 0.049), ('pp', 0.048), ('qv', 0.047), ('drastic', 0.043), ('doucet', 0.043), ('measurements', 0.042), ('ciency', 0.042), ('approach', 0.04), ('overhead', 0.038), ('track', 0.037), ('determine', 0.037), ('kld', 0.036), ('kldsampling', 0.036), ('prespeci', 0.036), ('timestamps', 0.036), ('needed', 0.036), ('chooses', 0.035), ('density', 0.033), ('bound', 0.032), ('sets', 0.032), ('yields', 0.032), ('discretizations', 0.031), ('burgard', 0.031), ('dellaert', 0.031), ('improvements', 0.031), ('true', 0.031), ('discrete', 0.031), ('extensive', 0.03), ('adaptive', 0.029), ('average', 0.029), ('multinomial', 0.029), ('sonar', 0.029), ('quantiles', 0.029), ('environment', 0.028), ('collected', 0.028), ('sizes', 0.027), ('genetic', 0.027), ('knows', 0.027), ('introduced', 0.027), ('update', 0.027), ('densities', 0.026), ('varied', 0.026), ('pose', 0.026), ('distribution', 0.026), ('estimation', 0.025), ('accurate', 0.025), ('kl', 0.025), ('measurement', 0.025), ('uncertain', 0.025), ('exceed', 0.025), ('carlo', 0.025), ('monte', 0.025), ('proposal', 0.025), ('accurately', 0.025), ('threshold', 0.024), ('approximation', 0.024), ('mismatch', 0.024), ('discretization', 0.024), ('global', 0.024), ('ces', 0.023), ('great', 0.023), ('please', 0.023), ('highly', 0.022), ('implementation', 0.022), ('parts', 0.022), ('weights', 0.022), ('map', 0.022), ('reading', 0.022), ('exceeds', 0.022), ('interacting', 0.022), ('thrun', 0.022), ('number', 0.022), ('experiments', 0.021), ('statistic', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

Author: Dieter Fox

Abstract: Over the last few years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on-the-fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and the computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.

2 0.65782845 163 nips-2001-Risk Sensitive Particle Filters

Author: Sebastian Thrun, John Langford, Vandi Verma

Abstract: We propose a new particle filter that incorporates a model of costs when generating particles. The approach is motivated by the observation that the costs of accidentally not tracking hypotheses might be significant in some areas of state space, and next to irrelevant in others. By incorporating a cost model into particle filtering, states that are more critical to the system performance are more likely to be tracked. Automatic calculation of the cost model is implemented using an MDP value function calculation that estimates the value of tracking a particular state. Experiments in two mobile robot domains illustrate the appropriateness of the approach.

3 0.16860043 156 nips-2001-Rao-Blackwellised Particle Filtering via Data Augmentation

Author: Christophe Andrieu, Nando D. Freitas, Arnaud Doucet

Abstract: In this paper, we extend the Rao-Blackwellised particle filtering method to more complex hybrid models consisting of Gaussian latent variables and discrete observations. This is accomplished by augmenting the models with artificial variables that enable us to apply Rao-Blackwellisation. Other improvements include the design of an optimal importance proposal distribution and being able to swap the sampling and selection steps to handle outliers. We focus on sequential binary classifiers that consist of linear combinations of basis functions, whose coefficients evolve according to a Gaussian smoothness prior. Our results show significant improvements. 1

4 0.1279964 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

Author: Yun Gao, Michael J. Black, Elie Bienenstock, Shy Shoham, John P. Donoghue

Abstract: Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model and rigorously compare it with previous models using cross-validation. Second, we infer a posterior probability distribution over hand motion conditioned on a sequence of neural test data using Bayesian inference. The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for the kinematics. A particle filtering method is used to represent, update, and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it may be appropriate for neural prosthetic applications.

5 0.10565155 179 nips-2001-Tempo tracking and rhythm quantization by sequential Monte Carlo

Author: Ali Taylan Cemgil, Bert Kappen

Abstract: We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. The inferences are carried out using sequential Monte Carlo integration (particle filtering) techniques. For this purpose, we have derived a novel Viterbi algorithm for Rao-Blackwellized particle filters, where a subset of the hidden variables is integrated out. The resulting model is suitable for realtime tempo tracking and transcription and hence useful in a number of music applications such as adaptive automatic accompaniment and score typesetting. 1

6 0.081149928 183 nips-2001-The Infinite Hidden Markov Model

7 0.072471641 116 nips-2001-Linking Motor Learning to Function Approximation: Learning in an Unlearnable Force Field

8 0.066245906 43 nips-2001-Bayesian time series classification

9 0.065403104 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

10 0.065359257 109 nips-2001-Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions

11 0.058236677 1 nips-2001-(Not) Bounding the True Error

12 0.055676959 178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

13 0.055632684 168 nips-2001-Sequential Noise Compensation by Sequential Monte Carlo Method

14 0.053205226 140 nips-2001-Optimising Synchronisation Times for Mobile Devices

15 0.053196032 121 nips-2001-Model-Free Least-Squares Policy Iteration

16 0.050723262 115 nips-2001-Linear-time inference in Hierarchical HMMs

17 0.0505744 46 nips-2001-Categorization by Learning and Combining Object Parts

18 0.04875575 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

19 0.047169615 21 nips-2001-A Variational Approach to Learning Curves

20 0.045774162 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.183), (1, -0.049), (2, 0.05), (3, -0.024), (4, -0.179), (5, -0.172), (6, 0.152), (7, 0.533), (8, 0.148), (9, 0.273), (10, 0.072), (11, -0.182), (12, 0.197), (13, 0.12), (14, 0.078), (15, -0.088), (16, 0.127), (17, -0.148), (18, -0.158), (19, 0.019), (20, -0.066), (21, -0.087), (22, -0.053), (23, 0.009), (24, 0.001), (25, 0.114), (26, -0.041), (27, 0.068), (28, 0.086), (29, 0.04), (30, -0.019), (31, -0.0), (32, 0.004), (33, -0.051), (34, -0.014), (35, -0.048), (36, 0.057), (37, 0.002), (38, -0.082), (39, 0.003), (40, -0.009), (41, -0.019), (42, 0.089), (43, 0.021), (44, -0.023), (45, 0.023), (46, -0.036), (47, 0.088), (48, -0.01), (49, -0.003)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96864408 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

Author: Dieter Fox

Abstract: Over the last years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on-the-fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and computation overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.

2 0.94076478 163 nips-2001-Risk Sensitive Particle Filters

Author: Sebastian Thrun, John Langford, Vandi Verma

Abstract: We propose a new particle filter that incorporates a model of costs when generating particles. The approach is motivated by the observation that the costs of accidentally not tracking hypotheses might be significant in some areas of state space, and next to irrelevant in others. By incorporating a cost model into particle filtering, states that are more critical to the system performance are more likely to be tracked. Automatic calculation of the cost model is implemented using an MDP value function calculation that estimates the value of tracking a particular state. Experiments in two mobile robot domains illustrate the appropriateness of the approach.

3 0.53207386 156 nips-2001-Rao-Blackwellised Particle Filtering via Data Augmentation

Author: Christophe Andrieu, Nando D. Freitas, Arnaud Doucet

Abstract: In this paper, we extend the Rao-Blackwellised particle filtering method to more complex hybrid models consisting of Gaussian latent variables and discrete observations. This is accomplished by augmenting the models with artificial variables that enable us to apply Rao-Blackwellisation. Other improvements include the design of an optimal importance proposal distribution and being able to swap the sampling and selection steps to handle outliers. We focus on sequential binary classifiers that consist of linear combinations of basis functions, whose coefficients evolve according to a Gaussian smoothness prior. Our results show significant improvements. 1

4 0.34222949 179 nips-2001-Tempo tracking and rhythm quantization by sequential Monte Carlo

Author: Ali Taylan Cemgil, Bert Kappen

Abstract: We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. The inferences are carried out using sequential Monte Carlo integration (particle filtering) techniques. For this purpose, we have derived a novel Viterbi algorithm for Rao-Blackwellized particle filters, where a subset of the hidden variables is integrated out. The resulting model is suitable for realtime tempo tracking and transcription and hence useful in a number of music applications such as adaptive automatic accompaniment and score typesetting. 1

5 0.30355617 26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks

Author: Hans-Georg Zimmermann, Ralph Neuneier, Ralph Grothmann

Abstract: This paper deals with a neural network architecture which establishes a portfolio management system similar to the Black / Litterman approach. This allocation scheme distributes funds across various securities or financial markets while simultaneously complying with specific allocation constraints which meet the requirements of an investor. The portfolio optimization algorithm is modeled by a feedforward neural network. The underlying expected return forecasts are based on error correction neural networks (ECNN), which utilize the last model error as an auxiliary input to evaluate their own misspecification. The portfolio optimization is implemented such that (i.) the allocations comply with the investor's constraints and (ii.) the risk of the portfolio can be controlled. We demonstrate the profitability of our approach by constructing internationally diversified portfolios across different financial markets of the G7 countries. It turns out that our approach is superior to a preset benchmark portfolio. 1 Introduction: Portfolio-Management We integrate the portfolio optimization algorithm suggested by Black / Litterman [1] into a neural network architecture. Combining the mean-variance theory [5] with the capital asset pricing model (CAPM) [7], this approach utilizes excess returns of the CAPM equilibrium to define a neutral, well balanced benchmark portfolio. Deviations from the benchmark allocation are only allowed within preset boundaries. Hence, as an advantage, there are no unrealistic solutions (e.g. large short positions, huge portfolio changes). Moreover, there is no need to formulate return expectations for all assets. In contrast to Black / Litterman, excess return forecasts are estimated by time-delay recurrent error correction neural networks [8]. Investment decisions which comply with given allocation constraints are derived from these predictions. The risk exposure of the portfolio is implicitly controlled by a parameter-optimizing task over time (sec. 3 and 5). Our approach consists of the following three steps: (i.) Construction of forecast models on the basis of error correction neural networks (ECNN) for all assets (sec. 2). (ii.) Computation of excess returns by a higher-level feedforward network (sec. 3 and 4); by this, the profitability of an asset with respect to all others is measured. (iii.) Optimization of the investment proportions on the basis of the excess returns; allocation constraints ensure that the investment proportions may deviate from a given benchmark only within predefined intervals (sec. 3 and 4). Finally, we apply our neural network based portfolio management system to an asset allocation problem concerning the G7 countries (sec. 6). 2 Forecasting by Error Correction Neural Networks: Most dynamical systems are driven by a superposition of autonomous development and external influences [8]. For discrete time grids, such a dynamics can be described by a recurrent state transition and an output equation (Eq. 1).

6 0.29685909 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

7 0.23649202 168 nips-2001-Sequential Noise Compensation by Sequential Monte Carlo Method

8 0.23202729 109 nips-2001-Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions

9 0.22103494 183 nips-2001-The Infinite Hidden Markov Model

10 0.21249352 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

11 0.20159347 108 nips-2001-Learning Body Pose via Specialized Maps

12 0.18980397 155 nips-2001-Quantizing Density Estimators

13 0.18844675 178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

14 0.18426085 121 nips-2001-Model-Free Least-Squares Policy Iteration

15 0.17998756 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

16 0.17891608 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

17 0.17753667 99 nips-2001-Intransitive Likelihood-Ratio Classifiers

18 0.1734951 3 nips-2001-ACh, Uncertainty, and Cortical Inference

19 0.17272034 140 nips-2001-Optimising Synchronisation Times for Mobile Devices

20 0.16899855 177 nips-2001-Switch Packet Arbitration via Queue-Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.025), (17, 0.046), (19, 0.039), (27, 0.103), (30, 0.277), (38, 0.013), (59, 0.071), (72, 0.084), (79, 0.037), (80, 0.061), (83, 0.03), (88, 0.011), (91, 0.11)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.9680686 163 nips-2001-Risk Sensitive Particle Filters

Author: Sebastian Thrun, John Langford, Vandi Verma

Abstract: We propose a new particle filter that incorporates a model of costs when generating particles. The approach is motivated by the observation that the costs of accidentally not tracking hypotheses might be significant in some areas of state space, and next to irrelevant in others. By incorporating a cost model into particle filtering, states that are more critical to the system performance are more likely to be tracked. Automatic calculation of the cost model is implemented using an MDP value function calculation that estimates the value of tracking a particular state. Experiments in two mobile robot domains illustrate the appropriateness of the approach.

2 0.95328552 173 nips-2001-Speech Recognition with Missing Data using Recurrent Neural Nets

Author: S. Parveen, P. Green

Abstract: In the ‘missing data’ approach to improving the robustness of automatic speech recognition to added noise, an initial process identifies spectraltemporal regions which are dominated by the speech source. The remaining regions are considered to be ‘missing’. In this paper we develop a connectionist approach to the problem of adapting speech recognition to the missing data case, using Recurrent Neural Networks. In contrast to methods based on Hidden Markov Models, RNNs allow us to make use of long-term time constraints and to make the problems of classification with incomplete data and imputing missing values interact. We report encouraging results on an isolated digit recognition task.

3 0.94306773 159 nips-2001-Reducing multiclass to binary by coupling probability estimates

Author: B. Zadrozny

Abstract: This paper presents a method for obtaining class membership probability estimates for multiclass classification problems by coupling the probability estimates produced by binary classifiers. This is an extension for arbitrary code matrices of a method due to Hastie and Tibshirani for pairwise coupling of probability estimates. Experimental results with Boosted Naive Bayes show that our method produces calibrated class membership probability estimates, while having similar classification accuracy as loss-based decoding, a method for obtaining the most likely class that does not generate probability estimates.

4 0.94063276 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

Author: Xiaohui Xie, Martin A. Giese

Abstract: Asymmetric lateral connections are one possible mechanism that can account for the direction selectivity of cortical neurons. We present a mathematical analysis for a class of these models. Contrasting with earlier theoretical work that has relied on methods from linear systems theory, we study the network’s nonlinear dynamic properties that arise when the threshold nonlinearity of the neurons is taken into account. We show that such networks have stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction selective cortical neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of this solutions breaks down giving rise to another class of solutions that are characterized by specific spatiotemporal periodicity. This predicts that if direction selectivity in the cortex is mainly achieved by asymmetric lateral connections lurching activity waves might be observable in ensembles of direction selective cortical neurons within appropriate regimes of the stimulus speed.

5 0.9400627 33 nips-2001-An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures

Author: Takashi Morie, Tomohiro Matsuura, Makoto Nagata, Atsushi Iwata

Abstract: This paper describes a clustering algorithm for vector quantizers using a “stochastic association model”. It offers a new simple and powerful softmax adaptation rule. The adaptation process is the same as the on-line K-means clustering method except for adding random fluctuation in the distortion error evaluation process. Simulation results demonstrate that the new algorithm can achieve efficient adaptation as high as the “neural gas” algorithm, which is reported as one of the most efficient clustering methods. It is a key to add uncorrelated random fluctuation in the similarity evaluation process for each reference vector. For hardware implementation of this process, we propose a nanostructure, whose operation is described by a single-electron circuit. It positively uses fluctuation in quantum mechanical tunneling processes.

6 0.93836606 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model

same-paper 7 0.91911668 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

8 0.91502947 149 nips-2001-Probabilistic Abstraction Hierarchies

9 0.87639874 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

10 0.86264408 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

11 0.85628259 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

12 0.83754355 46 nips-2001-Categorization by Learning and Combining Object Parts

13 0.82198703 60 nips-2001-Discriminative Direction for Kernel Classifiers

14 0.8197332 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

15 0.81695896 4 nips-2001-ALGONQUIN - Learning Dynamic Noise Models From Noisy Speech for Robust Speech Recognition

16 0.81187665 20 nips-2001-A Sequence Kernel and its Application to Speaker Recognition

17 0.80638319 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

18 0.80181134 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

19 0.79305542 168 nips-2001-Sequential Noise Compensation by Sequential Monte Carlo Method

20 0.79068196 116 nips-2001-Linking Motor Learning to Function Approximation: Learning in an Unlearnable Force Field