
46 nips-2010-Causal discovery in multiple models from different experiments


Source: pdf

Author: Tom Claassen, Tom Heskes

Abstract: A long-standing open research problem is how to use information from different experiments, including background knowledge, to infer causal relations. Recent developments have shown ways to use multiple data sets, provided they originate from identical experiments. We present the MCI-algorithm as the first method that can infer provably valid causal relations in the large sample limit from different experiments. It is fast, reliable, and produces very clear and easily interpretable output. It is based on a result that shows that constraint-based causal discovery is decomposable into a candidate pair identification and subsequent elimination step that can be applied separately from different models. We test the algorithm on a variety of synthetic input model sets to assess its behavior and the quality of the output. The method shows promising signs that it can be adapted to suit causal discovery in real-world application areas as well, including large databases.
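
The abstract describes the approach as a two-stage decomposition: candidate causal pairs are first identified from each model, and then eliminated using information from the other models. The toy Python sketch below is only a rough illustration of that decomposition under assumed data structures; it is not the authors' MCI-algorithm, and all names (identify_candidates, eliminate_candidates, suggests_cause, rules_out_cause) are hypothetical.

    # Toy sketch of a two-stage candidate identification / elimination scheme.
    # Each "model" is assumed to summarize one experiment as two lists of
    # ordered pairs: causal pairs it suggests and causal pairs it rules out.

    def identify_candidates(model):
        # Stage 1: collect candidate causal pairs (x, y) proposed by one model.
        return set(model["suggests_cause"])

    def eliminate_candidates(candidates, models):
        # Stage 2: drop any candidate that at least one model rules out.
        ruled_out = set()
        for model in models:
            ruled_out |= set(model["rules_out_cause"])
        return {pair for pair in candidates if pair not in ruled_out}

    def combine_models(models):
        # Pool candidates from all models, then eliminate across all of them.
        candidates = set()
        for model in models:
            candidates |= identify_candidates(model)
        return eliminate_candidates(candidates, models)

    if __name__ == "__main__":
        # Two hypothetical experiments summarized as toy "models".
        model_a = {"suggests_cause": [("X", "Y"), ("Y", "Z")],
                   "rules_out_cause": []}
        model_b = {"suggests_cause": [("X", "Z")],
                   "rules_out_cause": [("Y", "Z")]}  # e.g. Y was intervened on here
        print(sorted(combine_models([model_a, model_b])))  # [('X', 'Y'), ('X', 'Z')]
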


reference text

[1] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. Cambridge, Massachusetts: The MIT Press, 2nd ed., 2000.

[2] D. Chickering, “Optimal structure identification with greedy search,” Journal of Machine Learning Research, vol. 3, no. 3, pp. 507–554, 2002.

[3] R. Tillman, D. Danks, and C. Glymour, “Integrating locally learned causal structures with overlapping variables,” in Advances in Neural Information Processing Systems 21, 2008.

[4] S. Mani, G. Cooper, and P. Spirtes, “A theoretical study of Y structures for causal discovery,” in Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, pp. 314–323, 2006.

[5] J. Pearl, Causality: models, reasoning and inference. Cambridge University Press, 2000.

[6] J. Zhang, “Causal reasoning with ancestral graphs,” Journal of Machine Learning Research, vol. 9, pp. 1437–1474, 2008.

[7] T. Richardson and P. Spirtes, “Ancestral graph Markov models,” The Annals of Statistics, vol. 30, no. 4, pp. 962–1030, 2002.

[8] J. Zhang, “On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias,” Artificial Intelligence, vol. 172, no. 16–17, pp. 1873–1896, 2008.

[9] J. Zhang and P. Spirtes, “Detection of unfaithfulness and robust causal inference,” Minds and Machines, vol. 18, no. 2, pp. 239–271, 2008.

[10] P. Spirtes, C. Meek, and T. Richardson, “An algorithm for causal inference in the presence of latent variables and selection bias,” in Computation, Causation, and Discovery, pp. 211–252, 1999.

[11] S. Shimizu, P. Hoyer, A. Hyvärinen, and A. Kerminen, “A linear non-Gaussian acyclic model for causal discovery,” Journal of Machine Learning Research, vol. 7, pp. 2003–2030, 2006.

[12] P. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf, “Nonlinear causal discovery with additive noise models,” in Advances in Neural Information Processing Systems 21 (NIPS*2008), pp. 689–696, 2009.

[13] J. Ide and F. Cozman, “Random generation of Bayesian networks,” in Advances in Artificial Intelligence, pp. 366–376, Springer Berlin, 2002.

[14] T. Claassen and T. Heskes, “Learning causal network structure from multiple (in)dependence models,” in Proceedings of the Fifth European Workshop on Probabilistic Graphical Models, 2010.

[15] C. Meek, “Causal inference and causal explanation with background knowledge,” in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI), pp. 403–410, Morgan Kaufmann, 1995.