nips nips2010 nips2010-2 nips2010-2-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Stephen Bach, Mark Maloof
Abstract: To cope with concept drift, we placed a probability distribution over the location of the most-recent drift point. We used Bayesian model comparison to update this distribution from the predictions of models trained on blocks of consecutive observations, and we pruned potential drift points with low probability. We compared our approach to a non-probabilistic method for drift detection and a probabilistic method for change-point detection. In our experiments, our approach generally yielded improved accuracy, speed, or both, relative to these other methods.
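The abstract's idea — maintain a posterior over the most-recent drift point, update it with the predictive probability of a model trained on the observations since each candidate point, and prune candidates with low probability — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `DriftPointTracker`, the `hazard` and `prune_threshold` parameters, and the use of a simple Beta–Bernoulli model (in place of the paper's block-trained learners) are all assumptions made for the example.

```python
class DriftPointTracker:
    """Illustrative sketch: posterior over the most-recent drift point,
    updated by Bayesian model comparison and pruned of unlikely candidates.
    A Beta-Bernoulli model stands in for an arbitrary per-block learner."""

    def __init__(self, hazard=0.01, prune_threshold=1e-4):
        self.hazard = hazard              # prior probability of a drift at each step
        self.prune_threshold = prune_threshold
        self.t = 0
        # candidate drift time -> (alpha, beta, posterior weight)
        self.candidates = {0: (1.0, 1.0, 1.0)}

    def update(self, x):
        """Incorporate one binary observation x (0 or 1)."""
        joint, drift_mass = {}, 0.0
        for r, (a, b, w) in self.candidates.items():
            # predictive probability of x under the model trained since r
            pred = (a if x else b) / (a + b)
            # hypothesis: no new drift, so the model since r absorbs x
            joint[r] = (a + x, b + 1 - x, w * pred * (1.0 - self.hazard))
            drift_mass += w * pred * self.hazard
        self.t += 1
        # hypothesis: a drift just occurred; start a fresh model at time t
        joint[self.t] = (1.0 + x, 2.0 - x, drift_mass)
        z = sum(w for _, _, w in joint.values())
        # normalize, then prune drift points with low posterior probability
        self.candidates = {r: (a, b, w / z)
                           for r, (a, b, w) in joint.items()
                           if w / z > self.prune_threshold}

    def map_drift_point(self):
        """Most probable location of the most-recent drift point."""
        return max(self.candidates, key=lambda r: self.candidates[r][2])
```

Feeding the tracker a stream whose statistics change (e.g. a run of 0s followed by a run of 1s) concentrates the posterior on a candidate near the switch, while pruning keeps the number of live candidates small.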
[1] J. C. Schlimmer and R. H. Granger. Beyond incremental processing: Tracking concept drift. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 502–507, Menlo Park, CA, 1986. AAAI Press.
[2] G. Hulten, L. Spencer, and P. Domingos. Mining time-changing data streams. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 97–106, New York, NY, 2001. ACM Press.
[3] G. Widmer and M. Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23:69–101, 1996.
[4] S. H. Bach and M. A. Maloof. Paired learners for concept drift. In Proceedings of the Eighth IEEE International Conference on Data Mining, pages 23–32, Los Alamitos, CA, 2008. IEEE Press.
[5] J. Z. Kolter and M. A. Maloof. Dynamic weighted majority: An ensemble method for drifting concepts. Journal of Machine Learning Research, 8:2755–2790, December 2007.
[6] J. Z. Kolter and M. A. Maloof. Using additive expert ensembles to cope with concept drift. In Proceedings of the Twenty-second International Conference on Machine Learning, pages 449–456, New York, NY, 2005. ACM Press.
[7] H. Wang, W. Fan, P. S. Yu, and J. Han. Mining concept-drifting data streams using ensemble classifiers. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 226–235, New York, NY, 2003. ACM Press.
[8] W. N. Street and Y. Kim. A streaming ensemble algorithm (SEA) for large-scale classification. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 377–382, New York, NY, 2001. ACM Press.
[9] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, Berlin-Heidelberg, 2006.
[10] D. Barry and J. A. Hartigan. A Bayesian analysis for change point problems. Journal of the American Statistical Association, 88(421):309–319, 1993.
[11] D. Barry and J. A. Hartigan. Product partition models for change point problems. The Annals of Statistics, 20(1):260–279, 1992.
[12] P. Fearnhead. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16(2):203–213, 2006.
[13] P. Fearnhead and Z. Liu. On-line inference for multiple changepoint problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(4):589–605, September 2007.
[14] R. P. Adams and D. J. C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, 2007. http://www.inference.phy.cam.ac.uk/rpa23/papers/rpachangepoint.pdf.
[15] A. Blum. Empirical support for winnow and weighted-majority algorithms: Results on a calendar scheduling domain. Machine Learning, 26:5–23, 1997.
[16] T. M. Mitchell, R. Caruana, D. Freitag, J. McDermott, and D. Zabowski. Experience with a learning personal assistant. Communications of the ACM, 37(7):80–91, July 1994.
[17] M. Harries, C. Sammut, and K. Horn. Extracting hidden context. Machine Learning, 32(2):101–126, 1998.