nips2002-169: Real-time Particle Filters
Source: pdf
Authors: Cody Kwok, Dieter Fox, Marina Meila
Abstract: Particle filters estimate the state of dynamical systems from sensor information. In many real-time applications of particle filters, however, sensor information arrives at a significantly higher rate than the update rate of the filter. The prevalent approach to dealing with such situations is to update the particle filter as often as possible and to discard sensor information that cannot be processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update rate is below the update rate of the sensors. This is achieved by representing posteriors as mixtures of sample sets, where each mixture component integrates one observation arriving during a filter update. The weights of the mixture components are set so as to minimize the approximation error introduced by the mixture representation. In this way, our approach focuses computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach yields significant improvements over other approaches.
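To make the mixture-of-sample-sets idea concrete, here is a minimal Python sketch, not taken from the paper: each observation arriving within one filter update window gets its own sample set (mixture component), and the components are combined with mixture weights. The function name rtpf_update, the Gaussian motion and observation models, and the likelihood-based mixture weights are all illustrative assumptions; the paper instead chooses the weights so as to minimize the approximation error of the mixture representation.

```python
# Minimal sketch of a real-time particle filter update as a mixture of
# sample sets. Assumptions (not from the paper): 1-D state, Gaussian
# process and observation noise, and a normalized-likelihood heuristic
# for the mixture weights in place of the paper's error-minimizing choice.
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, dt, process_std=0.1):
    """Advance each particle through a trivial random-walk motion model."""
    return particles + rng.normal(0.0, process_std * np.sqrt(dt), size=particles.shape)

def likelihood(particles, z, obs_std=0.2):
    """Gaussian observation likelihood of measurement z for each particle."""
    return np.exp(-0.5 * ((particles - z) / obs_std) ** 2)

def rtpf_update(particles, observations, dt):
    """One filter update over a window containing several observations.

    Each observation arriving within the window produces its own sample
    set (mixture component); the returned weights combine the components.
    """
    components, weights = [], []
    for z in observations:
        particles = propagate(particles, dt)   # advance to the observation time
        w = likelihood(particles, z)
        w_sum = w.sum()
        if w_sum == 0.0:                       # guard against degenerate weights
            w = np.full_like(w, 1.0 / len(w))
            w_sum = 1.0
        # Resample one mixture component conditioned on this observation.
        idx = rng.choice(len(particles), size=len(particles), p=w / w_sum)
        components.append(particles[idx])
        # Heuristic mixture weight: the average observation likelihood.
        weights.append(w_sum / len(w))
    weights = np.asarray(weights)
    weights /= weights.sum()
    return components, weights

# Usage: three observations arrive during a single filter update window.
particles = rng.normal(0.0, 1.0, size=1000)
components, mix_w = rtpf_update(particles, observations=[0.1, 0.15, 0.05], dt=0.05)
estimate = sum(wk * comp.mean() for wk, comp in zip(mix_w, components))
print("mixture weights:", mix_w, "state estimate:", estimate)
```

The sketch only shows how per-observation sample sets are generated and combined; in the actual algorithm the mixture weights are optimized to minimize the approximation error, not set by the likelihood heuristic used here.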
[1] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.
[2] D. Fox. KLD-sampling: Adaptive particle filters and mobile robot localization. In Advances in Neural Information Processing Systems (NIPS), 2001.
[3] D. Fox, S. Thrun, F. Dellaert, and W. Burgard. Particle filters for mobile robot localization. In Doucet et al. [1].
[4] P. Del Moral and L. Miclo. Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. In Séminaire de Probabilités XXXIV, number 1729 in Lecture Notes in Mathematics. Springer-Verlag, 2000.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. Wiley, New York, 1991.
[6] W. Poland and R. Shachter. Mixtures of Gaussians and minimum relative entropy techniques for modeling continuous uncertainties. In Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 1993.
[7] T. Jaakkola and M. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in Graphical Models. Kluwer, 1997.
[8] P. R. Cohen. Empirical Methods for Artificial Intelligence. MIT Press, 1995.
[9] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 1998.