
43 nips-2001-Bayesian time series classification


Source: pdf

Author: Peter Sykacek, Stephen J. Roberts

Abstract: This paper proposes an approach to classifying adjacent segments of a time series as belonging to one of two classes. We use a hierarchical model that consists of a feature extraction stage and a generative classifier built on top of these features. Such two-stage approaches are common in signal and image processing. The novel part of our work is that we link the two stages probabilistically through a latent feature space. Using one joint model is a Bayesian requirement and has the advantage of fusing information according to its certainty. The classifier is implemented as a hidden Markov model with Gaussian and multinomial observation distributions defined on a suitably chosen representation of autoregressive models. The Markov dependency is motivated by the assumption that successive classifications will be correlated. Inference is done with Markov chain Monte Carlo (MCMC) techniques. We apply the proposed approach to synthetic data and to classification of EEG recorded while subjects performed different cognitive tasks. All experiments show that using a latent feature space results in a significant improvement in generalization accuracy. Hence we expect this idea to generalize well to other hierarchical models.
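The abstract describes a two-stage architecture: autoregressive (AR) coefficients extracted from each segment act as features, and a Markov model over the sequence of segment labels classifies successive segments. The following is a minimal NumPy sketch of that idea on synthetic data, not the paper's method: it uses point estimates of the AR coefficients, class-conditional Gaussians fitted by maximum likelihood, and Viterbi decoding in place of the joint Bayesian model with a latent feature space and MCMC inference. All function names, parameter values, and the train/test split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar2(a1, a2, n, noise=0.5):
    """Simulate an AR(2) process x[t] = a1*x[t-1] + a2*x[t-2] + e[t]."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal(scale=noise)
    return x

def ar_features(segment, order=2):
    """Feature extraction stage: least-squares AR coefficient estimate for one segment."""
    n = len(segment)
    lagged = np.column_stack([segment[order - k - 1: n - k - 1] for k in range(order)])
    coef, *_ = np.linalg.lstsq(lagged, segment[order:], rcond=None)
    return coef

def gauss_logpdf(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Two synthetic classes, each an AR(2) process with different coefficients.
class_params = [(1.2, -0.6), (0.4, 0.3)]
seg_len, n_segments = 200, 100

# Markov label sequence: successive segment labels tend to persist.
trans = np.array([[0.8, 0.2], [0.2, 0.8]])
labels = np.zeros(n_segments, dtype=int)
for t in range(1, n_segments):
    labels[t] = rng.choice(2, p=trans[labels[t - 1]])

segments = [simulate_ar2(*class_params[c], seg_len) for c in labels]
feats = np.array([ar_features(s) for s in segments])

# "Training": class-conditional Gaussians over AR coefficients, first half of the segments.
split = n_segments // 2
means = np.array([feats[:split][labels[:split] == c].mean(axis=0) for c in range(2)])
vars_ = np.array([feats[:split][labels[:split] == c].var(axis=0) + 1e-6 for c in range(2)])

# Classify the held-out half with Viterbi decoding over the segment-label Markov chain.
test_feats, test_labels = feats[split:], labels[split:]
log_lik = np.array([[gauss_logpdf(f, means[c], vars_[c]) for c in range(2)] for f in test_feats])
log_trans, log_prior = np.log(trans), np.log([0.5, 0.5])

delta = log_prior + log_lik[0]
back = np.zeros((len(test_feats), 2), dtype=int)
for t in range(1, len(test_feats)):
    scores = delta[:, None] + log_trans   # scores[i, j]: best path ending in i, then moving to j
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_lik[t]
path = np.empty(len(test_feats), dtype=int)
path[-1] = delta.argmax()
for t in range(len(test_feats) - 2, -1, -1):
    path[t] = back[t + 1, path[t + 1]]

print("held-out segment accuracy:", (path == test_labels).mean())
```

The Markov smoothing step reflects the abstract's assumption that successive classifications are correlated; dropping it reduces the model to independent segment-wise classification on the AR features.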


reference text

[BS94] J. M. Bernardo and A. F. M. Smith. Bayesian Theory. Wiley, Chichester, 1994.

[GG84] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.

[Gre95] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732, 1995.

[Mac92] D. J. C. MacKay. The evidence framework applied to classification networks. Neural Computation, 4:720–736, 1992.

[Nea96] R. M. Neal. Bayesian Learning for Neural Networks. Springer, New York, 1996.

[RF95] J. J. K. Ó Ruanaidh and W. J. Fitzgerald. Numerical Bayesian Methods Applied to Signal Processing. Springer-Verlag, New York, 1995.

[RG97] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society, Series B, 59:731–792, 1997.

[RJ86] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4–16, 1986.

[RL96] A. E. Raftery and S. M. Lewis. Implementing MCMC. In W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors, Markov Chain Monte Carlo in Practice, chapter 7, pages 115–130. Chapman & Hall, London, 1996.

[Rob96] C. P. Robert. Mixtures of distributions: inference and estimation. In W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors, Markov Chain Monte Carlo in Practice, pages 441–464. Chapman & Hall, London, 1996.

[Syk00] P. Sykacek. On input selection with reversible jump Markov chain Monte Carlo sampling. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 638–644. MIT Press, 2000.