nips nips2013 nips2013-62 knowledge-graph by maker-knowledge-mining

62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models


Source: pdf

Author: Jonas Peters, Dominik Janzing, Bernhard Schölkopf

Abstract: Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Causal inference uses observational data to infer the causal structure of the data generating system. [sent-7, score-0.351]

2 We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). [sent-8, score-0.162]

3 These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. [sent-9, score-0.222]

4 They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. [sent-13, score-0.357]

5 (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. [sent-14, score-0.226]

6 We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. [sent-16, score-0.212]

7 1 Introduction We first introduce the problem of causal inference on iid data, that is, the case with no time structure. [sent-18, score-0.428]

8 Let therefore X^i, i ∈ V, be a set of random variables and let G be a directed acyclic graph (DAG) on V describing the causal relationships between the variables. [sent-19, score-0.375]

9 Given iid samples from P((X^i)_{i∈V}), we aim at estimating the underlying causal structure of the variables X^i, i ∈ V. [sent-20, score-0.378]

10 X_t denotes the vector of time series values at time t. [sent-39, score-0.212]

11 The summary time graph contains the components of the time series as vertices and an arrow between X^i and X^j, i ≠ j, if there is an arrow from X^i_{t−k} to X^j_t in the full time graph for some k. [sent-42, score-0.344]

12 We are given a sample (X_1, . . . , X_T) of a multivariate time series and estimate the true summary time graph. [sent-46, score-0.251]
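To make the construction above concrete, here is a minimal sketch of collapsing a full time graph into a summary time graph (the edge encoding and function name are illustrative, not from the paper's code):

```python
# Sketch: collapse a full time graph into a summary time graph.
# An edge ((i, k), j) encodes an arrow X^i_{t-k} -> X^j_t.

def summary_time_graph(full_edges):
    """Summary graph: an arrow i -> j (i != j) whenever some lagged or
    instantaneous arrow X^i_{t-k} -> X^j_t exists in the full time graph."""
    return {(i, j) for (i, k), j in full_edges if i != j}

# Example: X^1_{t-2} -> X^2_t, an instantaneous X^1_t -> X^3_t,
# and a self-loop X^2_{t-1} -> X^2_t (which the summary graph drops).
edges = [((1, 2), 2), ((1, 0), 3), ((2, 1), 2)]
print(summary_time_graph(edges))  # {(1, 2), (1, 3)}
```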

13 Nevertheless, several methods dealing with time series data are motivated by the iid setting (Section 2). [sent-51, score-0.219]

14 If the model assumptions are violated (e.g., in the presence of a confounder), the methods draw false causal conclusions. [sent-54, score-0.321]

15 In this work, we extend the Structural Equation Model framework to time series data and call this approach time series models with independent noise (TiMINo). [sent-56, score-0.357]

16 This model formulation comes with substantial benefits: In Section 3 we prove that for TiMINo models the full causal structure can be recovered from the distribution. [sent-59, score-0.321]

17 If the data do not satisfy the model assumptions, TiMINo causality remains mostly undecided instead of drawing wrong causal conclusions. [sent-62, score-0.731]

18 Section 5 deals with time series that have been shifted by different (unknown) time delays. [sent-63, score-0.257]

19 2 Existing methods Granger causality [Granger, 1969] (G-causality for the remainder of the article) is based on the following idea: X^i does not Granger cause X^j if including the past of X^i does not help in predicting X^j_t given the past of all other time series X^k, k ≠ i. [sent-65, score-0.374]

20 The phrase “does not help” is translated into a significance test assuming a multivariate time series model. [sent-68, score-0.162]
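As a concrete illustration of this idea, the following is a minimal sketch of the bivariate linear G-causality test (a restricted-vs-full regression F-test; the function name, lag order p, and design are illustrative assumptions, not the implementation used in the paper):

```python
import numpy as np
from scipy import stats

def granger_pvalue(x, y, p=2):
    """Sketch: p-value of an F-test for whether the past of x improves
    the linear prediction of y_t beyond y's own past (lag order p)."""
    T = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    cross = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(T - p), own])          # y's past only
    X_f = np.column_stack([np.ones(T - p), own, cross])   # plus x's past
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    df1, df2 = p, (T - p) - X_f.shape[1]
    F = ((rss(X_r) - rss(X_f)) / df1) / (rss(X_f) / df2)
    return stats.f.sf(F, df1, df2)
```

Note that the test is built on residual variances; this is exactly the property that TiMINo replaces by residual independence.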

21 Bell et al. [1996] and Chu and Glymour [2008] introduce additive nonlinear time series models (ANLTSM for short) for performing relaxed conditional independence tests: if including one variable, e.g. [sent-88, score-0.395]

22 These tests are combined with constraint-based causal inference [Spirtes et al., 2000] in order to infer the causal structure exploiting those conditional independence statements. [sent-92, score-0.435]

23 The instantaneous effects are assumed to be linear and the confounders linear and instantaneous. [sent-93, score-0.302]

24 It allows for instantaneous effects and assumes all relationships to be linear. [sent-97, score-0.232]

25 E.g., when X_t is causing Y_t, including either of the two time series helps in predicting the other, and G-causality infers X → Y and Y → X. [sent-102, score-0.229]

26 ANLTSM and TS-LiNGAM only allow for linear instantaneous effects. [sent-103, score-0.171]

27 Theorem 1 shows that the summary time graph may still be identifiable when the instantaneous effects are linear and the variables are jointly Gaussian. [sent-104, score-0.375]

28 We will see empirically that TiMINo remains undecided instead; Entner and Hoyer [2010] and Janzing et al. [sent-108, score-0.172]

29 ANLTSM does not allow for nonlinear confounders or confounders with time structure and TS-LiNGAM may fail, too (Exp. [sent-110, score-0.265]

30 3 Structural Equation models for time series: TiMINo Definition 1 Consider a time series X_t = (X^i_t)_{i∈V} whose finite dimensional distributions are absolutely continuous w.r.t. [sent-127, score-0.212]

31 The time series satisfies a TiMINo if there is a p > 0 and ∀i ∈ V there are sets PA^i_0 ⊆ X^{V\{i}}, PA^i_k ⊆ X^V, s.t. [sent-132, score-0.162]

32 This means that (I) causal minimality holds, a weak form of faithfulness that assumes a statistical dependence between cause and effect given all other parents [Spirtes et al. [sent-142, score-0.497]

33 Important examples include nonlinear functions with additive Gaussian noise and linear functions with additive non-Gaussian noise. [sent-147, score-0.256]

34 (e.g., nonlinear functions f_i with additive Gaussian noise N^i_t or linear functions f_i with additive non-Gaussian noise N^i_t). [sent-153, score-0.421]
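A minimal sketch of sampling from such a restricted model (a bivariate additive-noise TiMINo with a nonlinear lagged effect X → Y; all coefficients are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
X, Y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    # Each component is a function of its parents' past plus noise N^i_t;
    # the noise series are jointly independent, as TiMINo requires.
    X[t] = 0.7 * X[t - 1] + 0.5 * rng.normal()
    Y[t] = 0.5 * Y[t - 1] + np.tanh(X[t - 1]) + 0.5 * rng.normal()
# With this restricted class (nonlinear functions, additive Gaussian
# noise), the summary time graph X -> Y is identifiable from the joint law.
```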

35 Assume the model is faithful w.r.t. the full time graph, and the summary time graph is acyclic. [sent-158, score-0.193]

36 In particular, the true causal summary time graph is identifiable. [sent-160, score-0.464]

37 Condition (ii) shows how time structure simplifies the causal inference problem. [sent-170, score-0.371]

38 For iid data the true graph is not identifiable in the linear Gaussian case; with time structure it is. [sent-171, score-0.161]

39 4 A practical method: TiMINo causality The algorithm for TiMINo causality is based on the theoretical finding in Theorem 1. [sent-174, score-0.344]

40 It takes the time series data as input and outputs either a DAG that estimates the summary time graph or remains undecided. [sent-175, score-0.305]

41 This becomes intractable for a time series with many components; for time series without feedback loops, we adapt a method for additive noise models without time structure suggested by Mooij et al. [sent-178, score-0.527]

42 [2009]; the time complexity is O(d^2 · f(n, d) · t(n, d)), where d is the number of time series, n the sample size, and f(n, d) and t(n, d) the complexity of the user-specified regression method and independence test, respectively. [sent-182, score-0.184]

43 Algorithm 1 TiMINo causality 1: Input: Samples from a d-dimensional time series of length T: (X_1, . . . [sent-186, score-0.372]

44 Output: (pa(1), . . . , pa(d)). Depending on the assumed model class, TiMINo causality has to be provided with a fitting method. [sent-206, score-0.172]

45 To test for independence between a residual time series N^k_t and another time series X^i_t, i ∈ S, we shift the latter time series up to the maximal order ±p (but at least up to ±4); for each of those combinations we perform HSIC [Gretton et al. [sent-236, score-0.616]
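Putting the pieces together, here is a schematic sketch of the sink-search loop of Algorithm 1. The callables fit_residuals and indep_pvalue stand in for the user-supplied regression method and HSIC test (they are placeholders, not the paper's API), and the wrap-around shift via np.roll is a simplification:

```python
import numpy as np

def timino_order(X, p, fit_residuals, indep_pvalue, alpha=0.05):
    """Sketch of TiMINo causality for series without feedback loops.
    X: (T, d) array.  fit_residuals(X, k, others, p) regresses X[:, k]
    on the past of all remaining series (plus instantaneous others) and
    returns residuals; indep_pvalue tests residuals vs. a shifted series."""
    remaining = list(range(X.shape[1]))
    order = []
    max_shift = max(p, 4)
    while remaining:
        best, best_p = None, -1.0
        for k in remaining:
            others = [i for i in remaining if i != k]
            res = fit_residuals(X, k, others, p)
            pvals = [indep_pvalue(res, np.roll(X[:, i], s))
                     for i in others
                     for s in range(-max_shift, max_shift + 1)]
            # Bonferroni correction over all shifted independence tests
            corrected = min(1.0, min(pvals) * len(pvals)) if pvals else 1.0
            if corrected > best_p:
                best, best_p = k, corrected
        if best_p < alpha:        # no variable fits as a sink: undecided
            return None
        order.append(best)
        remaining.remove(best)
    return order[::-1]            # causal order, sources first
```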

46 Alternatively, one can check for vanishing cross-correlations [Brockwell and Davis, 1991], which is restricted to two time series and linear functions. [sent-245, score-0.162]

47 As opposed to the iid setting, testing for cross-correlation is often enough in order to reject a wrong model. [sent-246, score-0.169]

48 If the model assumptions only hold in some parts of the summary time graph, we can still try to discover parts of the causal structure. [sent-252, score-0.41]

49 This, however, requires an “unnatural” fine tuning of the functions [Janzing and Steudel, 2010] and is relevant only when there are time series without time structure or the data are non-faithful (see Theorem 1). [sent-257, score-0.212]

50 The null hypothesis of the independence test represents independence, although the scientific discovery of a causal relationship should rather be the alternative hypothesis. [sent-258, score-0.405]

51 This fact may lead to wrong causal conclusions (instead of “I do not know”) on small data sets. [sent-259, score-0.433]

52 The effect is strengthened by the Bonferroni correction of the HSIC-based independence test; one may require modifications for a high number of time series components. [sent-260, score-0.246]

53 For large sample sizes, even the smallest differences between the true data generating process and the model may lead to rejected independence tests [discussed by Peters et al. [sent-261, score-0.172]

54 5 TiMINo for Shifted Time Series In some applications, we observe the components of the time series with varying time delay. [sent-263, score-0.212]

55 Given the (shifted) measurements X_t, we therefore have to cope with causal relationships that go backward in time. [sent-273, score-0.321]

56 Measures like Granger causality will fail in these situations. [sent-275, score-0.172]

57 It may not be possible, for example, to distinguish between a lag-two effect from X^1 to X^2 and a corresponding lag-one effect with a shifted time series X^2. [sent-281, score-0.271]
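A tiny numeric illustration of this ambiguity (all series are synthetic; np.roll is used as a shorthand for shifting):

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=1000)
x2 = np.zeros(1000)
x2[2:] = x1[:-2]              # lag-two effect X^1 -> X^2
x2_shifted = np.roll(x2, -1)  # observing X^2 one time step early
# Now x2_shifted[t] == x1[t-1]: indistinguishable from a lag-one effect.
print(np.allclose(x2_shifted[1:-2], x1[:-3]))  # True
```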

58 While TiMINo exploits an asymmetry between cause and effect emerging from restricted structural equations, G-causality exploits the asymmetry of time. [sent-284, score-0.162]

59 1 Artificial Data We always included instantaneous effects, fitted models up to order p = 2 or p = 6 and set α = 0. [sent-287, score-0.171]

60 Fig. 1: Part of the causal full time graph with hidden common cause Z (top left). [sent-372, score-0.465]

61 TiMINo causality does not decide (top right), whereas G-causality and TS-LiNGAM wrongly infer causal connections between X and Y (bottom). [sent-373, score-0.569]

62 [Figure axis: length of time series, 100–900.] Table 1: Exp. [sent-379, score-0.2]

63 Nonlinear G-causality fails because it analyzes the causal structure between pairs of time series. [sent-390, score-0.371]

64 Bad model assumptions do not always lead to incorrect causal conclusions. [sent-402, score-0.348]

65 In Experiment 1, TiMINo equipped with cross-correlation inferred a causal edge, although there was none. [sent-404, score-0.321]

66 TiMINo-gam with cross-correlation infers no causal link between X and Y , whereas TiMINo-gam with HSIC correctly identifies X → Y . [sent-409, score-0.388]

67 We sample time series (length 1000) from Gaussian noise and observe the sink node time series with a time delay of three. [sent-420, score-0.437]

68 Although it is possible to adjust for time delays, the procedure enlarges the model space, which makes rejecting all wrong models more difficult. [sent-431, score-0.162]

69 5 (92 undecided cases) and TiMINo-gam with time delay adjustment: 2. [sent-434, score-0.206]

70 2 Real Data We fitted models up to order 6 and included instantaneous effects. [sent-437, score-0.171]

71 Disregarding time information leads to a wrong causal conclusion: The method described by Hoyer et al. [sent-447, score-0.529]

72 TS-LiNGAM and TiMINo give correct answers, whereas linear G-causality infers X → Y , and nonlinear G-causality infers Y → X. [sent-455, score-0.244]

73 If we model 1000 randomly chosen samples as time series, G-causality (both linear and nonlinear) infers no causal relation as expected. [sent-458, score-0.438]

74 TS-LiNGAM wrongly infers Y → X, which is probably due to the nonlinear relationship. [sent-459, score-0.188]

75 TiMINo causality then outputs a causal ordering of the variables. [sent-474, score-0.493]

76 This procedure reveals a sensible causal structure (we arbitrarily refer to entries larger than 0. [sent-476, score-0.321]

77 Reading 4 (living room) does not cause any other reading, but every other reading does cause it (the living room is the only room without any heating). [sent-493, score-0.172]

78 7 Conclusions and Future Work This paper shows how causal inference on time series benefits from the framework of Structural Equation Models. [sent-503, score-0.483]

79 The algorithm is based on unconditional independence tests and is applicable to multivariate, linear, nonlinear and instantaneous interactions. [sent-505, score-0.372]

80 While methods like Granger causality are built on the asymmetry of time direction, TiMINo additionally takes into account identifiability emerging from restricted structural equation models. [sent-507, score-0.305]

81 Although an extensive evaluation on real data sets is still required, we believe that our results emphasize the potential use of causal inference methods. [sent-509, score-0.321]

82 This may reduce the number of cases where TiMINo causality is undecided. [sent-514, score-0.172]

83 TiMINo causality evaluates a model fit by checking independence of the residuals. [sent-515, score-0.256]

84 (i) Assume that G and G′ do not differ in the instantaneous effects: PA^i_0 (in G) = PA^i_0 (in G′) ∀i. [sent-525, score-0.171]

85 From G′ and Lemma 1 we have that X^1_{t−k} ⊥⊥ X^2_t | S, where S = ({X^i_{t−l}, 1 ≤ l ≤ p, i ∈ V} ∪ ND_t) \ {X^1_{t−k}, X^2_t}, and ND_t are all X_t that are non-descendants (w.r.t. instantaneous effects) of X^2_t. [sent-527, score-0.171]

86 Applied to G, causal minimality leads to a contradiction: X^1_{t−k} is dependent on X^2_t given S. [sent-528, score-0.346]

87 Now, let G and G′ differ in the instantaneous effects and choose S = {X^i_{t−l}, 1 ≤ l ≤ p, i ∈ V}. [sent-529, score-0.171]

88 For each s and i we have: X^i_t | S=s = f̃_i(s, (PA^i_0)_t), where PA^i_0 are all instantaneous parents of X^i_t conditioned on S = s. [sent-530, score-0.268]

89 All X_t | S=s with the instantaneous effects describe two different structures of an IFMOC. [sent-531, score-0.171]

90 (ii) Because of Lemma 1 and faithfulness, G and G′ only differ in the instantaneous effects. [sent-534, score-0.205]

91 But each instantaneous arrow X^i_t → X^j_t forms a v-structure together with X^j_{t−k} → X^j_t; X^j_{t−k} cannot be connected with X^i_t since this introduces a cycle in the summary time graph. [sent-535, score-0.299]

92 Radial basis function approach to nonlinear Granger causality of time series. [sent-545, score-0.297]

93 Analyzing multiple nonlinear time series with extended Granger causality. [sent-602, score-0.237]

94 Investigating causal relations by econometric models and cross-spectral methods. [sent-634, score-0.321]

95 Causal modelling combining instantaneous and lagged effects: an identifiable model based on non-Gaussianity. [sent-665, score-0.171]

96 Justifying additive-noise-model based causal discovery via algorithmic information theory. [sent-670, score-0.321]

97 Regression by dependence minimization and its application to causal inference. [sent-685, score-0.321]

98 Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. [sent-738, score-0.351]

99 A linear non-Gaussian acyclic model for causal discovery. [sent-747, score-0.321]

100 Dependence minimizing regression with model selection for non-linear causal inference under non-Gaussian noise. [sent-775, score-0.321]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('timino', 0.519), ('xt', 0.464), ('causal', 0.321), ('causality', 0.172), ('instantaneous', 0.171), ('yt', 0.133), ('peters', 0.132), ('undecided', 0.126), ('pai', 0.126), ('janzing', 0.122), ('wrong', 0.112), ('series', 0.112), ('granger', 0.099), ('mooij', 0.099), ('nti', 0.098), ('confounder', 0.087), ('independence', 0.084), ('hoyer', 0.08), ('zt', 0.076), ('nonlinear', 0.075), ('additive', 0.074), ('confounders', 0.07), ('infers', 0.067), ('fi', 0.066), ('effects', 0.061), ('iid', 0.057), ('anltsm', 0.056), ('pfull', 0.056), ('shd', 0.056), ('graph', 0.054), ('residuals', 0.053), ('glymour', 0.051), ('hsic', 0.051), ('time', 0.05), ('shimizu', 0.05), ('temperature', 0.049), ('et', 0.046), ('wrongly', 0.046), ('sch', 0.046), ('shifted', 0.045), ('structural', 0.044), ('identi', 0.043), ('brockwell', 0.042), ('entner', 0.042), ('gam', 0.042), ('prestr', 0.042), ('timinogp', 0.042), ('tests', 0.042), ('delays', 0.041), ('bell', 0.041), ('cause', 0.04), ('arrow', 0.039), ('asymmetry', 0.039), ('faithful', 0.039), ('summary', 0.039), ('dag', 0.039), ('spirtes', 0.038), ('length', 0.038), ('chu', 0.037), ('var', 0.037), ('bivariate', 0.035), ('correct', 0.035), ('faithfulness', 0.034), ('noise', 0.033), ('gretton', 0.032), ('lag', 0.032), ('room', 0.032), ('pa', 0.031), ('parents', 0.031), ('answers', 0.031), ('infer', 0.03), ('delay', 0.03), ('experiment', 0.03), ('living', 0.028), ('ancona', 0.028), ('azzalini', 0.028), ('bathroom', 0.028), ('boiler', 0.028), ('buxton', 0.028), ('cheese', 0.028), ('eruption', 0.028), ('florens', 0.028), ('heating', 0.028), ('ifmoc', 0.028), ('independences', 0.028), ('lingam', 0.028), ('ndt', 0.028), ('nolte', 0.028), ('rssfull', 0.028), ('causes', 0.027), ('rinen', 0.027), ('incorrect', 0.027), ('shed', 0.026), ('mpi', 0.026), ('tting', 0.026), ('pr', 0.026), ('minimality', 0.025), ('abalone', 0.025), ('quinn', 0.025), ('ii', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models

Author: Jonas Peters, Dominik Janzing, Bernhard Schölkopf

Abstract: Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided.

2 0.20569523 337 nips-2013-Transportability from Multiple Environments with Limited Experiments

Author: Elias Bareinboim, Sanghack Lee, Vasant Honavar, Judea Pearl

Abstract: This paper considers the problem of transferring experimental findings learned from multiple heterogeneous domains to a target domain, in which only limited experiments can be performed. We reduce questions of transportability from multiple domains and with limited scope to symbolic derivations in the causal calculus, thus extending the original setting of transportability introduced in [1], which assumes only one domain with full experimental information available. We further provide different graphical and algorithmic conditions for computing the transport formula in this setting, that is, a way of fusing the observational and experimental information scattered throughout different domains to synthesize a consistent estimate of the desired effects in the target domain. We also consider the issue of minimizing the variance of the produced estimand in order to increase power. 1 Motivation Transporting and synthesizing experimental knowledge from heterogeneous settings are central to scientific discovery. Conclusions that are obtained in a laboratory setting are transported and applied elsewhere in an environment that differs in many aspects from that of the laboratory. In data-driven sciences, experiments are conducted on disparate domains, but the intention is almost invariably to fuse the acquired knowledge, and translate it into some meaningful claim about a target domain, which is usually different than any of the individual study domains. However, the conditions under which this extrapolation can be legitimized have not been formally articulated until very recently. Although the problem has been discussed in many areas of statistics, economics, and the health sciences, under rubrics such as “external validity” [2, 3], “meta-analysis” [4], “quasi-experiments” [5], “heterogeneity” [6], these discussions are limited to verbal narratives in the form of heuristic guidelines for experimental researchers – no formal treatment of the problem has been attempted to answer the practical challenge of generalizing causal knowledge across multiple heterogeneous domains with disparate experimental data posed in this paper. The fields of artificial intelligence and statistics provide the theoretical underpinnings necessary for tackling transportability. First, the distinction between statistical and causal knowledge has received syntactic representation through causal diagrams [7, 8, 9], which became a popular tool for causal inference in data-driven fields. Second, the inferential machinery provided by the causal calculus (do-calculus) [7, 9, 10] is particularly suitable for handling knowledge transfer across domains. Armed with these techniques, [1] introduced a formal language for encoding differences and commonalities between domains accompanied with necessary or sufficient conditions under which transportability of empirical findings is feasible between two domains, a source and a target; then, these conditions were extended for a complete characterization for transportability in one domain with unrestricted experimental data [11]. Subsequently, these results were generalized for the settings when ∗ These authors contributed equally to this paper. The authors’ addresses are respectively eb@cs.ucla.edu, sxl439@ist.psu.edu, vhonavar@ist.psu.edu, judea@cs.ucla.edu. 1 only limited experiments are available in the source domain [12, 13], and further for when multiple source domains with unrestricted experimental information are available [14, 15]. 
This paper broadens these discussions introducing a more general setting in which multiple heterogeneous sources with limited and distinct experiments are available, a task that we call here “mz-transportability”.1 More formally, the mz-transportability problem concerns the transfer of causal knowledge from a heterogeneous collection of source domains Π = {π1 , ..., πn } to a target domain π ∗ . In each domain πi ∈ Π, experiments over a set of variables Zi can be performed, and causal knowledge gathered. In π ∗ , potentially different from πi , only passive observations can be collected (this constraint is weakened later on). The problem is to infer a causal relationship R in π ∗ using knowledge obtained in Π. Clearly, if nothing is known about the relationship between Π and π ∗ , the problem is trivial; no transfer can be justified. Yet the fact that all scientific experiments are conducted with the intent of being used elsewhere (e.g., outside the lab) implies that scientific progress relies on the assumption that certain domains share common characteristics and that, owed to these commonalities, causal claims would be valid in new settings even where experiments cannot be conducted. The problem stated in this paper generalizes the one-dimensional version of transportability with limited scope and the multi-dimensional version with unlimited scope. Remarkably, while the effects of interest might not be individually transportable to the target domain from the experiments in any of the available sources, combining different pieces from the various sources may enable the estimation of the desired effects (to be shown later on). The goal of this paper is to formally understand under which conditions the target quantity is (non-parametrically) estimable from the available data. 2 Previous work and our contributions Consider Fig. 1(a), in which the node S represents factors that produce differences between source and target populations. Assume that we conduct a randomized trial in Los Angeles (LA) and estimate the causal effect of treatment X on outcome Y for every age group Z = z, denoted by P(y|do(x), z). We now wish to generalize the results to the population of the United States (U.S.), but we find the distribution P(x, y, z) in LA to be different from the one in the U.S. (call the latter P∗(x, y, z)). In particular, the average age in the U.S. is significantly higher than that in LA. How are we to estimate the causal effect of X on Y in the U.S., denoted R = P∗(y|do(x))? The selection diagram for this example (Fig. 1(a)) conveys the assumption that the only differences between the two populations are the factors determining age distributions, shown as S → Z, while age-specific effects P∗(y|do(x), Z = z) are invariant across populations. Difference-generating factors are represented by a special set of variables called selection variables S (or simply S-variables), which are graphically depicted as square nodes. From this assumption, the overall causal effect in the U.S. can be derived as follows: R = Σ_z P∗(y|do(x), z) P∗(z) = Σ_z P(y|do(x), z) P∗(z). (1) The last expression is the transport formula for R. It combines experimental results obtained in LA, P(y|do(x), z), with observational aspects of the U.S. population, P∗(z), to obtain an experimental claim P∗(y|do(x)) about the U.S. In this trivial example, the transport formula amounts to a simple re-calibration (or re-weighting) of the age-specific effects to account for the new age distribution.
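A minimal numeric sketch of Eq. (1), with made-up distributions standing in for the LA experiment and the U.S. covariate distribution:

```python
import numpy as np

# P(y | do(x), z): z-specific experimental effects from the source (LA)
p_y_dox_z = np.array([0.30, 0.45, 0.60])
# P*(z): age-group distribution in the target (U.S.)
p_z_target = np.array([0.20, 0.30, 0.50])
# Transport formula, Eq. (1): re-weight the effects by the target P*(z)
R = float(p_y_dox_z @ p_z_target)   # = P*(y | do(x))
print(R)                            # 0.495
```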
In general, however, a more involved mixture of experimental and observational findings would be necessary to obtain a bias-free estimate of the target relation R. Fig. 1(b) depicts the smallest example in which transportability is not feasible even when experiments over X in π are available. In real world applications, it may happen that certain controlled experiments cannot be conducted in the source environment (for financial, ethical, or technical reasons), so only a limited amount 1 The machine learning literature has been concerned about discrepancies among domains in the context, almost exclusively, on predictive or classification tasks as opposed to learning causal or counterfactual measures [16, 17]. Interestingly enough, recent work on anticausal learning moves towards more general modalities of learning and also leverages knowledge about the underlying data-generating structure [18, 19]. 2 We will use Px (y | z) interchangeably with P (y | do(x), z). 3 We use the structural interpretation of causal diagrams as described in [9, pp. 205]. 2 S S Z Z1 S S X Y (a) X Y (b) Z2 X Y (c) Figure 1: The selection variables S are depicted as square nodes ( ). (a) Selection diagram illustrating when transportability between two domains is trivially solved through simple recalibration. (b) The smallest possible selection diagram in which a causal relation is not transportable. (c) Selection diagram illustrating transportability when only experiments over {Z1 } are available in the source. of experimental information can be gathered. A natural question arises whether the investigator in possession of a limited set of experiments would still be able to estimate the desired effects at the target domain. For instance, we assume in Fig. 1(c) that experiments over Z1 are available and the target quantity is R = P ∗ (y|do(x)), which can be shown to be equivalent to P (y|x, do(Z1 )), the conditional distribution of Y given X in the experimental study when Z1 is randomized. 4 One might surmise that multiple pairwise z-transportability would be sufficient to solve the mztransportability problem, but this is not the case. To witness, consider Fig. 2(a,b) which concerns the transport of experimental results from two sources ({πa , πb }) to infer the effect of X on Y in π ∗ , R = P ∗ (y|do(x)). In these diagrams, X may represent the treatment (e.g., cholesterol level), Z1 represents a pre-treatment variable (e.g., diet), Z2 represents an intermediate variable (e.g., biomarker), and Y represents the outcome (e.g., heart failure). We assume that experimental studies randomizing {Z1 , Z2 } can be conducted in both domains. A simple analysis based on [12] can show that R cannot be z-transported from either source alone, but it turns out that combining in a special way experiments from both sources allows one to determine the effect in the target. More interestingly, we consider the more stringent scenario where only certain experiments can be performed in each of the domains. For instance, assume that it is only possible to conduct experiments over {Z2 } in πa and over {Z1 } in πb . Obviously, R will not be z-transported individually from these domains, but it turns out that taking both sets of experiments into account, R = z2 P (a) (y|do(z2 ))P (b) (z2 |x, do(Z1 )), which fully uses all pieces of experimental data available. In other words, we were able to decompose R into subrelations such that each one is separately z-transportable from the source domains, and so is the desired target quantity. 
Interestingly, it is the case in this example that if the domains in which experiments were conducted were reversed (i.e., {Z1 } randomized in πa , {Z2 } in πb ), it will not be possible to transport R by any method – the target relation is simply not computable from the available data (formally shown later on). This illustrates some of the subtle issues mz-transportability entails, which cannot be immediately cast in terms of previous instances of the transportability class. In the sequel, we try to better understand some of these issues, and we develop sufficient or (specific) necessary conditions for deciding special transportability for arbitrary collection of selection diagrams and set of experiments. We further construct an algorithm for deciding mz-transportability of joint causal effects and returning the correct transport formula whenever this is possible. We also consider issues relative to the variance of the estimand aiming for improving sample efficiency and increasing statistical power. 3 Graphical conditions for mz-transportability The basic semantical framework in our analysis rests on structural causal models as defined in [9, pp. 205], also called data-generating models. In the structural causal framework [9, Ch. 7], actions are modifications of functional relationships, and each action do(x) on a causal model M produces 4 A typical example is whether we can estimate the effect of cholesterol (X) on heart failure (Y ) by experiments on diet (Z1 ) given that cholesterol levels cannot be randomized [20]. 3 Z2 Z1 Z1 X Z1 X X Z2 W Z2 Y (a) Z2 Z1 Y X Z3 W U U Y Y (b) (c) Z3 (d) Figure 2: Selection diagrams illustrating impossibility of estimating R = P ∗ (y|do(x)) through individual transportability from πa and πb even when Z = {Z1 , Z2 } (for (a, b), (c, d))). If we assume, more stringently, availability of experiments Za = {Z2 }, Zb = {Z1 }, Z∗ = {}, a more elaborated analysis can show that R can be estimated combining different pieces from both domains. a new model Mx = U, V, Fx , P (U) , where Fx is obtained after replacing fX ∈ F for every X ∈ X with a new function that outputs a constant value x given by do(x). 5 We follow the conventions given in [9]. We denote variables by capital letters and their realized values by small letters. Similarly, sets of variables will be denoted by bold capital letters, sets of realized values by bold small letters. We use the typical graph-theoretic terminology with the corresponding abbreviations P a(Y)G and An(Y)G , which will denote respectively the set of observable parents and ancestors of the node set Y in G. A graph GY will denote the induced subgraph G containing nodes in Y and all arrows between such nodes. Finally, GXZ stands for the edge subgraph of G where all incoming arrows into X and all outgoing arrows from Z are removed. Key to the analysis of transportability is the notion of “identifiability,” defined below, which expresses the requirement that causal effects are computable from a combination of data P and assumptions embodied in a causal graph G. Definition 1 (Causal Effects Identifiability (Pearl, 2000, pp. 77)). The causal effect of an action do(x) on a set of variables Y such that Y ∩ X = ∅ is said to be identifiable from P in G if Px (y) is uniquely computable from P (V) in any model that induces G. Causal models and their induced graphs are usually associated with one particular domain (also called setting, study, population, or environment). 
In ordinary transportability, this representation was extended to capture properties of two domains simultaneously. This is possible if we assume that the structural equations share the same set of arguments, though the functional forms of the equations may vary arbitrarily [11]. 6 Definition 2 (Selection Diagram). Let M, M ∗ be a pair of structural causal models [9, pp. 205] relative to domains π, π ∗ , sharing a causal diagram G. M, M ∗ is said to induce a selection diagram D if D is constructed as follows: 1. Every edge in G is also an edge in D; 2. D contains an extra edge Si → Vi whenever there might exist a discrepancy fi = fi∗ or P (Ui ) = P ∗ (Ui ) between M and M ∗ . In words, the S-variables locate the mechanisms where structural discrepancies between the two domains are suspected to take place.7 Alternatively, the absence of a selection node pointing to a variable represents the assumption that the mechanism responsible for assigning value to that variable is identical in both domains. 5 The results presented here are also valid in other formalisms for causality based on potential outcomes. As discussed in the reference, the assumption of no structural changes between domains can be relaxed, but some structural assumptions regarding the discrepancies between domains must still hold. 7 Transportability assumes that enough structural knowledge about both domains is known in order to substantiate the production of their respective causal diagrams. In the absence of such knowledge, causal discovery algorithms might be used to infer the diagrams from data [8, 9]. 6 4 Armed with the concept of identifiability and selection diagrams, mz-transportability of causal effects can be defined as follows: Definition 3 (mz-Transportability). Let D = {D(1) , ..., D(n) } be a collection of selection diagrams relative to source domains Π = {π1 , ..., πn }, and target domain π ∗ , respectively, and Zi (and Z∗ ) i be the variables in which experiments can be conducted in domain πi (and π ∗ ). Let P i , Iz be i i the pair of observational and interventional distributions of πi , where Iz = Z ⊆Zi P (v|do(z )), ∗ and in an analogous manner, P ∗ , Iz be the observational and interventional distributions of π ∗ . ∗ ∗ The causal effect R = Px (y|w) is said to be mz-transportable from Π to π ∗ in D if Px (y|w) is i ∗ uniquely computable from i=1,...,n P i , Iz ∪ P ∗ , Iz in any model that induces D. ∗ i The requirement that R is uniquely computable from P ∗ , Iz and P i , Iz from all sources has a syntactic image in the causal calculus, which is captured by the following sufficient condition. Theorem 1. Let D = {D(1) , ..., D(n) } be a collection of selection diagrams relative to source domains Π = {π1 , ..., πn }, and target domain π ∗ , respectively, and Si represents the collection of i ∗ S-variables in the selection diagram D(i) . Let { P i , Iz } and P ∗ , Iz be respectively the pairs of observational and interventional distributions in the sources Π and target π ∗ . The relation R = P ∗ (y|do(x), w) is mz-transportable from Π to π ∗ in D if the expression P (y|do(x), w, S1 , ..., Sn ) is reducible, using the rules of the causal calculus, to an expression in which (1) do-operators that ∗ i apply to subsets of Iz have no Si -variables or (2) do-operators apply only to subsets of Iz . 
This result provides a powerful way to syntactically establish mz-transportability, but it is not immediately obvious whether a sequence of applications of the rules of causal calculus that achieves the reduction required by the theorem exists, and even if such sequence exists, it is not obvious how to obtain it. For concreteness, we illustrate this result using the selection diagrams in Fig. 2(a,b). Corollary 1. P ∗ (y|do(x)) is mz-transportable in Fig. 2(a,b) with Za = {Z2 } and Zb = {Z1 }. Proof. The goal is to show that R = P ∗ (y|do(x)) is mz-transportable from {πa , πb } to π ∗ using experiments conducted over {Z2 } in πa and {Z1 } in πb . Note that naively trying to transport R from each of the domains individually is not possible, but R can be decomposed as follows: P ∗ (y|do(x)) = P ∗ (y|do(x), do(Z1 )) (2) P ∗ (y|do(x), do(Z1 ), z2 )P ∗ (z2 |do(x), do(Z1 )) (3) P ∗ (y|do(x), do(Z1 ), do(z2 ))P ∗ (z2 |do(x), do(Z1 )), = (4) z2 = z2 where Eq. (2) follows by rule 3 of the causal calculus since (Z1 ⊥ Y |X)DX,Z holds, we con⊥ 1 dition on Z2 in Eq. (3), and Eq. (4) follows by rule 2 of the causal calculus since (Z2 ⊥ ⊥ Y |X, Z1 )DX,Z ,Z , where D is the diagram in π ∗ (despite the location of the S-nodes). 1 2 Now we can rewrite the first term of Eq. (4) as indicated by the Theorem (and suggested by Def. 2): P ∗ (y|do(x), do(Z1 ), do(z2 )) = P (y|do(x), do(Z1 ), do(z2 ), Sa , Sb ) (5) = P (y|do(x), do(Z1 ), do(z2 ), Sb ) (6) = P (y|do(z2 ), Sb ) (7) = P (a) (y|do(z2 )), (8) where Eq. (5) follows from the theorem (and the definition of selection diagram), Eq. (6) follows , Eq. (7) follows from rule from rule 1 of the causal calculus since (Sa ⊥ Y |Z1 , Z2 , X)D(a) ⊥ Z1 ,Z2 ,X 3 of the causal calculus since (Z1 , X ⊥ Y |Z2 )D(a) ⊥ . Note that this equation matches with the Z1 ,Z2 ,X a syntactic goal of Theorem 1 since we have precisely do(z2 ) separated from Sa (and Z2 ∈ Iz ); so, we can rewrite the expression which results in Eq. (8) by the definition of selection diagram. Finally, we can rewrite the second term of Eq. (4) as follows: P ∗ (z2 |do(x), do(Z1 )) = P (z2 |do(x), do(Z1 ), Sa , Sb ) = P (z2 |do(x), do(Z1 ), Sa ) = P (z2 |x, do(Z1 ), Sa ) = P (b) (z2 |x, do(Z1 )), 5 (9) (10) (11) (12) where Eq. (9) follows from the theorem (and the definition of selection diagram), Eq. (10) follows from rule 1 of the causal calculus since (Sb ⊥ Z2 |Z1 , X)D(b) , Eq. (11) follows from rule 2 of ⊥ Z1 ,X the causal calculus since (X ⊥ Z2 |Z1 )D(b) . Note that this equation matches the condition of the ⊥ Z1 X theorem, separate do(Z1 ) from Sb (i.e., experiments over Z1 can be used since they are available in πb ), so we can rewrite Eq. (12) using the definition of selection diagrams, the corollary follows. The next condition for mz-transportability is more visible than Theorem 1 (albeit weaker), which also demonstrates the challenge of relating mz-transportability to other types of transportability. Corollary 2. R = P ∗ (y|do(x)) is mz-transportable in D if there exists Zi ⊆ Zi such that all paths from Zi to Y are blocked by X, (Si ⊥ Y|X, Zi )D(i) , and R is computable from do(Zi ). ⊥ X,Z i Remarkably, randomizing Z2 when applying Corollary 1 was instrumental to yield transportability in the previous example, despite the fact that the directed paths from Z2 to Y were not blocked by X, which suggests how different this transportability is from z-identifiability. 
So, it is not immediately obvious how to combine the topological relations of Zi ’s with X and Y in order to create a general condition for mz-transportability, the relationship between the distributions in the different domains can get relatively intricate, but we defer this discussion for now and consider a simpler case. It is not usually trivial to pursue a derivation of mz-transportability in causal calculus, and next we show an example in which such derivation does not even exist. Consider again the diagrams in Fig. 2(a,b), and assume that randomized experiments are available over {Z1 } in πa and {Z2 } in πb . Theorem 2. P ∗ (y|do(x)) is not mz-transportable in Fig. 2(a,b) with Za = {Z1 } and Zb = {Z2 }. Proof. Formally, we need to display two models M1 , M2 such that the following relations hold (as implied by Def. 3):  (a) (a)  PM1 (Z1 , X, Z2 , Y ) = PM2 (Z1 , X, Z2 , Y ),   (b)   P (Z1 , X, Z2 , Y ) = P (b) (Z1 , X, Z2 , Y ),  M1 M2 (a) (a) (13) PM1 (X, Z2 , Y |do(Z1 )) = PM2 (X, Z2 , Y |do(Z1 )),  (b)   P (Z , X, Y |do(Z )) = P (b) (Z , X, Y |do(Z )),  M 2 1 2  M2  ∗1 1 ∗ PM1 (Z1 , X, Z2 , Y ) = PM2 (Z1 , X, Z2 , Y ), for all values of Z1 , X, Z2 , and Y , and also, ∗ ∗ PM1 (Y |do(X)) = PM2 (Y |do(X)), (14) for some value of X and Y . Let V be the set of observable variables and U be the set of unobservable variables in D. Let us assume that all variables in U ∪ V are binary. Let U1 , U2 ∈ U be the common causes of Z1 and X and Z2 , respectively; let U3 , U4 , U5 ∈ U be a random disturbance exclusive to Z1 , Z2 , and Y , respectively, and U6 ∈ U be an extra random disturbance exclusive to Z2 , and U7 , U8 ∈ U to Y . Let Sa and Sb index the model in the following way: the tuples Sa = 1, Sb = 0 , Sa = 0, Sb = 1 , Sa = 0, Sb = 0 represent domains πa , πb , and π ∗ , respectively. Define the two models as follows:    Z1 = U1 ⊕ U2 ⊕ U3 ∧ Sa  Z1 = U1 ⊕ U2 ⊕ U3 ∧ Sa     X=U X = Z1 ⊕ U1 1 M1 = M2 =  Z2 = (X ⊕ U2 ⊕ (U4 ∧ Sa )) ∨ U6  Z2 = U4 ∧ Sa ∧ U6 ⊕ U6     Y = (Z2 ∧ U5 ) ⊕ (U5 ∧ U7 ) ⊕ (Sb ∧ U8 ) Y = (Z2 ∧ U5 ) ⊕ (U5 ∧ U7 ) ⊕ (Sb ∧ U8 ) where ⊕ represents the exclusive or function. Both models agree in respect to P (U), which is defined as P (Ui ) = 1/2, i = 1, ..., 8. It is not difficult to evaluate these models and note that the constraints given in Eqs. (13) and (14) are satisfied (including positivity), the theorem follows. 4 Algorithm for computing mz-transportability In this section, we build on previous analyses of identifiability [7, 21, 22, 23] in order to obtain a mechanic procedure in which a collection of selection diagrams and experimental data is inputted, and the procedure returns a transport formula whenever it is able to produce one. More specifically, 6 PROCEDURE TRmz (y, x, P, I, S, W, D) INPUT: x, y: value assignments; P: local distribution relative to domain S (S = 0 indexes π ∗ ) and active experiments I; W: weighting scheme; D: backbone of selection diagram; Si : selection nodes in πi (S0 = ∅ (i) relative to π ∗ ); [The following set and distributions are globally defined: Zi , P ∗ , PZi .] (i) ∗ ∗ OUTPUT: Px (y) in terms of P ∗ , PZ , PZi or F AIL(D, C0 ). P 1 if x = ∅, return V\Y P. P 2 if V \ An(Y)D = ∅, return TRmz (y, x ∩ An(Y)D , V\An(Y)D P, I, S, W, DAn(Y) ). 3 set W = (V \ X) \ An(Y)DX . if W = ∅, return TRmz (y, x ∪ w, P, I, S, W, D). Q P 4 if C(D \ X) = {C0 , C1 , ..., Ck }, return V\{Y,X} i TRmz (ci , v \ ci , P, I, S, W, D). 5 if C(D \ X) = {C0 }, 6 if C(D) = {D}, Q P P 7 if C0 ∈ C(D), return i|Vi ∈C0 V\V (i) P/ V\V (i−1) P. 
D 8 9 10 11 12 D if (∃C )C0 ⊂ C ∈ C(D), (i−1) for {i|Vi ∈ C }, set κi = κi ∪ vD \C . Q (i−1) mz return TR (y, x ∩ C , i|Vi ∈C P(Vi |VD ∩ C , κi ), I, S, W, C ). else, if I`= ∅, for i = 0, ..., |D|, ´ if (Si ⊥ Y | X)D(i) ∧ (Zi ∩ X = ∅) , Ei = TRmz (y, x \ zi , P, Zi ∩ X, i, W, D \ {Zi ∩ X}). ⊥ PX (j) if |E| > 0, return |E| wi Ei . i=1 else, FAIL(D, C0 ). Figure 3: Modified version of identification algorithm capable of recognizing mz-transportability. our algorithm is called TRmz (see Fig. 3), and is based on the C-component decomposition for identification of causal effects [22, 23] (and a version of the identification algorithm called ID). The rationale behind TRmz is to apply Tian’s factorization and decompose the target relation into smaller, more manageable sub-expressions, and then try to evaluate whether each sub-expression can be computed in the target domain. Whenever this evaluation fails, TRmz tries to use the experiments available from the target and, if possible, from the sources; this essentially implements the declarative condition delineated in Theorem 1. Next, we consider the soundness of the algorithm. ∗ Theorem 3 (soundness). Whenever TRmz returns an expression for Px (y), it is correct. In the sequel, we demonstrate how the algorithm works through the mz-transportability of Q = P ∗ (y|do(x)) in Fig. 2(c,d) with Z∗ = {Z1 }, Za = {Z2 }, and Zb = {Z1 }. Since (V \ X) \ An(Y)DX = {Z2 }, TRmz invokes line 3 with {Z2 } ∪ {X} as interventional set. The new call triggers line 4 and C(D \ {X, Z2 }) = {C0 , C1 , C2 , C3 }, where C0 = DZ1 , C1 = DZ3 , C2 = DU , and C3 = DW,Y , we invoke line 4 and try to mz-transport individ∗ ∗ ∗ ually Q0 = Px,z2 ,z3 ,u,w,y (z1 ), Q1 = Px,z1 ,z2 ,u,w,y (z3 ), Q2 = Px,z1 ,z2 ,z3 ,w,y (u), and Q3 = ∗ Px,z1 ,z2 ,z3 ,u (w, y). Thus the original problem reduces to try to evaluate the equivalent expression ∗ ∗ ∗ ∗ z1 ,z3 ,u,w Px,z2 ,z3 ,u,w,y (z1 )Px,z1 ,z2 ,u,w,y (z3 ) Px,z1 ,z2 ,z3 ,w,y (u)Px,z1 ,z2 ,z3 ,u (w, y). First, TRmz evaluates the expression Q0 and triggers line 2, noting that all nodes can be ignored ∗ since they are not ancestors of {Z1 }, which implies after line 1 that Px,z2 ,z3 ,u,w,y (z1 ) = P ∗ (z1 ). Second, TRmz evaluates the expression Q1 triggering line 2, which implies that ∗ ∗ Px,z1 ,z2 ,u,w,y (z3 ) = Px,z1 ,z2 (z3 ) with induced subgraph D1 = DX,Z1 ,Z2 ,Z3 . TRmz goes to line 5, in which in the local call C(D \ {X, Z1 , Z2 }) = {DZ3 }. Thus it proceeds to line 6 testing whether C(D \ {X, Z1 , Z2 }) is different from D1 , which is false. In this call, ordinary identifiability would fail, but TRmz proceeds to line 9. The goal of this line is to test whether some experiment can help for computing Q1 . In this case, πa fails immediately the test in line 10, but πb and π ∗ succeed, (i) which means experiments in these domains may eventually help; the new call is Px,z2 (z3 )D\Z1 , for mz i = {b, ∗} with induced graph D1 = DX,Z2 ,Z3 . Finally, TR triggers line 8 since X is not part of Z3 ’s components in D1 (or, Z3 ∈ C = {Z2 Z3 }), so line 2 is triggered since Z2 is no longer an ancestor of Z3 in D1 , and then line 1 is triggered since the interventional set is empty in (i) (i) ∗ this local call, so Px,z1 ,z2 (z3 ) = Z Pz1 (z3 |x, Z2 )Pz1 (Z2 ), for i = {b, ∗}. 2 7 ∗ Third, evaluating the expression Q2 , TRmz goes to line 2, which implies that Px,z1 ,z2 ,z3 ,w,y (u) = ∗ mz Px,z1 ,z2 ,z3 ,w (u) with induced subgraph D2 = DX,Z1 ,Z2 ,Z3 ,W,U . 
TR goes to line 5, and in this local call C(D \ {X, Z1 , Z2 , Z3 , W }) = {DU }, and the test in 6 succeed, since there are more components in D. So, it triggers line 8 since W is not part of U ’s component ∗ ∗ in D2 . The algorithm makes Px,z1 ,z2 ,z3 ,w (u) = Px,z1 ,z2 ,z3 (u)D2 |W (and update the working distribution); note that in this call, ordinary identifiability would fail since the nodes are in the same C-component and the test in line 6 fails. But TRmz proceeds to line 9 trying to find experiments that can help in Q2 ’s computation. In this case, πb cannot help but πa (a) and π ∗ perhaps can, noting that new calls are launched for computing Px,z1 ,z3 (u)D2 \Z2 |W rel∗ ative to πa , and Px,z2 ,z3 (u)D2 \Z1 |W relative to π ∗ with the corresponding data structures set. (a) (a) In πa , the algorithm triggers line 7, which yields Px,z1 ,z3 (u)D2 \Z2 |W = Pz2 (u|w, z3 , x, z1 ), ∗ and a bit more involved analysis for πb yields (after simplification) Px,z2 ,z3 (u)D2 \Z1 |W = ∗ ∗ ∗ ∗ ∗ Z Pz1 (z3 |x, Z2 )Pz1 (Z2 ) . Z Pz1 (u|w, z3 , x, Z2 )Pz1 (z3 |x, Z2 )Pz1 (Z2 ) / 2 2 mz Fourth, TR evaluates the expression Q3 and triggers line 5, C(D\{X, Z1 , Z2 , Z3 , U }) = DW,Y . ∗ In turn, both tests at lines 6 and 7 succeed, which makes the procedure to return Px,z1 ,z2 ,z3 ,u (w, y) = ∗ ∗ P (w|z3 , x, z1 , z2 )P (y|w, x, z1 , z2 , z3 , u). The composition of the return of these calls generates the following expression: ! ∗ Px (y) = X ∗ P (z1 ) z1 ,z3 ,w,u (2) w1 (1) w1 X ∗ ∗ Pz1 (z3 |x, Z2 )Pz1 (Z2 ) X (b) (b) Pz1 (z3 |x, Z2 )Pz1 (Z2 ) Z2 Z2 „X + (1) w2 « « „X ∗ ∗ ∗ ∗ ∗ Pz1 (z3 |x, Z2 )Pz1 (Z2 ) Pz1 (u|w, z3 , x, Z2 )Pz1 (z3 |x, Z2 )Pz1 (Z2 ) / Z2 Z2 ! (2) (a) + w2 Pz2 (u|w, z3 , x, z1 ) P ∗ (w|x, z1 , z2 , z3 ) P ∗ (y|x, z1 , z2 , z3 , w, u) (15) (k) where wi represents the weight for each factor in estimand k (i = 1, ..., nk ), and nk is the number of feasible estimands of k. Eq. (15) depicts a powerful way to estimate P ∗ (y|do(x)) in the target domain, and depending on weighting choice a different estimand will be entailed. For instance, one might use an analogous to inverse-variance weighting, which sets the weights for the normalized (k) nk −2 −2 2 inverse of their variances (i.e., wi = σi / j=1 σj , where σj is the variance of the jth component of estimand k). Our strategy resembles the approach taken in meta-analysis [4], albeit the latter usually disregards the intricacies of the relationships between variables, so producing a statistically less powerful estimand. Our method leverages this non-trivial and highly structured relationships, as exemplified in Eq. (15), which yields an estimand with less variance and statistically more powerful. 5 Conclusions In this paper, we treat a special type of transportability in which experiments can be conducted only over limited sets of variables in the sources and target domains, and the goal is to infer whether a certain effect can be estimated in the target using the information scattered throughout the domains. We provide a general sufficient graphical conditions for transportability based on the causal calculus along with a necessary condition for a specific scenario, which should be generalized for arbitrary structures. We further provide a procedure for computing transportability, that is, generate a formula for fusing the available observational and experimental data to synthesize an estimate of the desired causal effects. 
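A small sketch of the inverse-variance weighting suggested above, for combining the feasible estimands of one factor (the helper name is illustrative):

```python
import numpy as np

def inverse_variance_weights(variances):
    """Normalized weights w_i = sigma_i^{-2} / sum_j sigma_j^{-2}."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    return inv / inv.sum()

# Two feasible estimands of the same factor, with estimated variances:
print(inverse_variance_weights([0.04, 0.01]))  # [0.2, 0.8]
```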
Our algorithm also allows for generic weighting schemes, which generalizes standard statistical procedures and leads to the construction of statistically more powerful estimands. Acknowledgment The work of Judea Pearl and Elias Bareinboim was supported in part by grants from NSF (IIS1249822, IIS-1302448), and ONR (N00014-13-1-0153, N00014-10-1-0933). The work of Sanghack Lee and Vasant Honavar was partially completed while they were with the Department of Computer Science at Iowa State University. The work of Vasant Honavar while working at the National Science Foundation (NSF) was supported by the NSF. The work of Sanghack Lee was supported in part by the grant from NSF (IIS-0711356). Any opinions, findings, and conclusions contained in this article are those of the authors and do not necessarily reflect the views of the sponsors. 8 References [1] J. Pearl and E. Bareinboim. Transportability of causal and statistical relations: A formal approach. In W. Burgard and D. Roth, editors, Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence, pages 247–254. AAAI Press, Menlo Park, CA, 2011. [2] D. Campbell and J. Stanley. Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing, Chicago, 1963. [3] C. Manski. Identification for Prediction and Decision. Harvard University Press, Cambridge, Massachusetts, 2007. [4] L. V. Hedges and I. Olkin. Statistical Methods for Meta-Analysis. Academic Press, January 1985. [5] W.R. Shadish, T.D. Cook, and D.T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, Boston, second edition, 2002. [6] S. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). Cambridge University Press, New York, NY, 2007. [7] J. Pearl. Causal diagrams for empirical research. Biometrika, 82(4):669–710, 1995. [8] P. Spirtes, C.N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000. [9] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000. 2nd edition, 2009. [10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [11] E. Bareinboim and J. Pearl. Transportability of causal effects: Completeness results. In J. Hoffmann and B. Selman, editors, Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence, pages 698–704. AAAI Press, Menlo Park, CA, 2012. [12] E. Bareinboim and J. Pearl. Causal transportability with limited experiments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 95–101, Menlo Park, CA, 2013. AAAI Press. [13] S. Lee and V. Honavar. Causal transportability of experiments on controllable subsets of variables: ztransportability. In A. Nicholson and P. Smyth, editors, Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), pages 361–370. AUAI Press, 2013. [14] E. Bareinboim and J. Pearl. Meta-transportability of causal effects: A formal approach. In C. Carvalho and P. Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 135–143. JMLR W&CP; 31, 2013. [15] S. Lee and V. Honavar. m-transportability: Transportability of a causal effect from multiple environments. In M. desJardins and M. 
Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 583–590, Menlo Park, CA, 2013. AAAI Press. [16] H. Daume III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126, 2006. [17] A.J. Storkey. When training and test sets are different: characterising learning transfer. In J. Candela, M. Sugiyama, A. Schwaighofer, and N.D. Lawrence, editors, Dataset Shift in Machine Learning, pages 3–28. MIT Press, Cambridge, MA, 2009. [18] B. Sch¨ lkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal o learning. In J Langford and J Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, New York, NY, USA, 2012. Omnipress. [19] K. Zhang, B. Sch¨ lkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional o shift. In Proceedings of the 30th International Conference on Machine Learning (ICML). JMLR: W&CP; volume 28, 2013. [20] E. Bareinboim and J. Pearl. Causal inference by surrogate experiments: z-identifiability. In N. Freitas and K. Murphy, editors, Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI), pages 113–120. AUAI Press, 2012. [21] M. Kuroki and M. Miyakawa. Identifiability criteria for causal effects of joint interventions. Journal of the Royal Statistical Society, 29:105–117, 1999. [22] J. Tian and J. Pearl. A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 567–573. AAAI Press/The MIT Press, Menlo Park, CA, 2002. [23] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages 1219–1226. AAAI Press, Menlo Park, CA, 2006. 9

3 0.19481342 266 nips-2013-Recurrent linear models of simultaneously-recorded neural populations

Author: Marius Pachitariu, Biljana Petreska, Maneesh Sahani

Abstract: Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever-larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. We describe a new, scalable approach to discovering low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. We formulate the Recurrent Linear Model (RLM) by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical systems to incorporate a generalised-linear observation process. We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce the cascaded generalised-linear model (CGLM) to capture low-dimensional instantaneous correlations in neural populations. The CGLM describes the cortical recordings better than either Ising or Gaussian models and, like the RLM, can be fit exactly and quickly. The CGLM can also be seen as a generalisation of a lowrank Gaussian model, in this case factor analysis. The computational tractability of the RLM and CGLM allow both to scale to very high-dimensional neural data. 1
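
A minimal sketch of the recurrent-linear-model idea described above: a deterministic linear recursion driven by past spikes feeds a generalised-linear (here Poisson, log-link) observation process, so the likelihood factorizes over time and can be evaluated exactly. All dimensions, parameter values, and the log-link choice are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_latent, n_cells = 200, 3, 10
    A = 0.9 * np.eye(n_latent)                          # latent recursion weights (illustrative)
    K = 0.1 * rng.standard_normal((n_latent, n_cells))  # feedback from past spikes
    C = 0.5 * rng.standard_normal((n_cells, n_latent))  # readout weights
    d = np.full(n_cells, -1.0)                          # baseline log-rates

    h, y_prev, log_lik = np.zeros(n_latent), np.zeros(n_cells), 0.0
    for t in range(T):
        h = A @ h + K @ y_prev                          # deterministic state update (no latent noise)
        rate = np.exp(C @ h + d)                        # generalised-linear observation process
        y = rng.poisson(rate)
        log_lik += np.sum(y * np.log(rate) - rate)      # Poisson log-likelihood, up to a constant
        y_prev = y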

4 0.17114083 89 nips-2013-Dimension-Free Exponentiated Gradient

Author: Francesco Orabona

Abstract: I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(U T + 1) √T), instead of the standard O((U^2 + 1) √T) achievable without knowing U. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a log(U T) term for linear and Lipschitz losses. 1
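
For context, a sketch of the finite-dimensional exponentiated gradient baseline that the abstract generalizes: multiplicative updates on the probability simplex with a fixed learning rate, which is exactly the tuning dependence on the unknown competitor that the dimension-free analysis removes. The learning rate and the squared loss are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def eg_step(w, grad, eta):
        # classic exponentiated gradient: multiplicative update, renormalize to the simplex
        w = w * np.exp(-eta * grad)
        return w / w.sum()

    rng = np.random.default_rng(1)
    d, eta = 10, 0.1
    w = np.full(d, 1.0 / d)
    for t in range(1000):
        x, y = rng.standard_normal(d), 0.0      # toy linear-prediction round
        grad = 2.0 * (w @ x - y) * x            # gradient of the squared loss
        w = eg_step(w, grad, eta)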

5 0.1676438 269 nips-2013-Regression-tree Tuning in a Streaming Setting

Author: Samory Kpotufe, Francesco Orabona

Abstract: We consider the problem of maintaining the data-structures of a partition-based regression procedure in a setting where the training data arrives sequentially over time. We prove that it is possible to maintain such a structure in time Õ(log n) at any time step n while achieving a nearly-optimal regression rate of O(n^(-2/(2+d))) in terms of the unknown metric dimension d. Finally we prove a new regression lower-bound which is independent of a given data size, and hence is more appropriate for the streaming setting. 1
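
A sketch of the kind of data structure the abstract refers to: a binary partition tree where each arriving point is routed to a leaf and leaves split once they hold enough points, so per-step maintenance is proportional to tree depth (O(log n) when splits stay balanced). The split rule (cycled coordinates, median thresholds) and the leaf size are illustrative assumptions, not the paper's tuning procedure.

    import numpy as np

    class Node:
        def __init__(self, depth=0):
            self.depth, self.xs, self.ys = depth, [], []
            self.left = self.right = None
            self.dim = self.thr = None

        def insert(self, x, y, max_leaf=10):
            node = self
            while node.left is not None:             # route to a leaf: O(depth) per point
                node = node.left if x[node.dim] <= node.thr else node.right
            node.xs.append(x); node.ys.append(y)
            if len(node.xs) > max_leaf:              # refine the partition locally
                node.dim = node.depth % len(x)       # cycle through coordinates
                node.thr = float(np.median([p[node.dim] for p in node.xs]))
                node.left, node.right = Node(node.depth + 1), Node(node.depth + 1)
                for p, t in zip(node.xs, node.ys):
                    child = node.left if p[node.dim] <= node.thr else node.right
                    child.xs.append(p); child.ys.append(t)
                node.xs, node.ys = [], []

        def predict(self, x):
            node = self
            while node.left is not None:
                node = node.left if x[node.dim] <= node.thr else node.right
            return float(np.mean(node.ys)) if node.ys else 0.0

    rng = np.random.default_rng(2)
    tree = Node()
    for _ in range(1000):                            # data arrives sequentially
        x = rng.random(2)
        tree.insert(x, np.sin(4 * x[0]) + 0.1 * rng.standard_normal())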

6 0.15596405 39 nips-2013-Approximate Gaussian process inference for the drift function in stochastic differential equations

7 0.15007807 48 nips-2013-Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

8 0.14652872 348 nips-2013-Variational Policy Search via Trajectory Optimization

9 0.13724013 17 nips-2013-A multi-agent control framework for co-adaptation in brain-computer interfaces

10 0.12926386 24 nips-2013-Actor-Critic Algorithms for Risk-Sensitive MDPs

11 0.11697283 230 nips-2013-Online Learning with Costly Features and Labels

12 0.11218809 127 nips-2013-Generalized Denoising Auto-Encoders as Generative Models

13 0.11175736 310 nips-2013-Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.

14 0.10579733 240 nips-2013-Optimization, Learning, and Games with Predictable Sequences

15 0.093681812 188 nips-2013-Memory Limited, Streaming PCA

16 0.091612823 146 nips-2013-Large Scale Distributed Sparse Precision Estimation

17 0.089218281 41 nips-2013-Approximate inference in latent Gaussian-Markov models from continuous time observations

18 0.088244356 68 nips-2013-Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models

19 0.087135501 298 nips-2013-Small-Variance Asymptotics for Hidden Markov Models

20 0.080764085 145 nips-2013-It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.172), (1, -0.016), (2, 0.076), (3, -0.062), (4, -0.096), (5, 0.048), (6, 0.026), (7, 0.146), (8, -0.004), (9, -0.15), (10, -0.123), (11, -0.319), (12, -0.066), (13, 0.032), (14, -0.098), (15, 0.025), (16, 0.037), (17, 0.133), (18, 0.007), (19, 0.087), (20, 0.106), (21, 0.02), (22, 0.118), (23, -0.042), (24, -0.044), (25, -0.06), (26, 0.063), (27, -0.013), (28, -0.04), (29, -0.026), (30, 0.083), (31, -0.064), (32, -0.037), (33, 0.059), (34, -0.017), (35, -0.053), (36, 0.019), (37, 0.109), (38, 0.066), (39, 0.053), (40, -0.098), (41, 0.035), (42, -0.034), (43, 0.094), (44, 0.096), (45, -0.027), (46, 0.022), (47, 0.063), (48, -0.068), (49, 0.001)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97697073 62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models

Author: Jonas Peters, Dominik Janzing, Bernhard Schölkopf

Abstract: Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided. 1
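
A rough sketch of the fitting-plus-independence-testing recipe the abstract describes, for two series and lag one: fit each candidate direction by regressing the effect on lagged values of both series, then check whether the residuals look independent of the candidate cause. The regression below is linear and the dependence score is a crude max-correlation proxy; the paper's method allows more general model classes and uses a proper non-linear independence test, so everything here is an illustrative assumption.

    import numpy as np

    def fit_residuals(effect, cause, p=1):
        # regress effect_t on lags 1..p of both series (linear stand-in for the model fit)
        T = len(effect)
        cols = [effect[p - k - 1:T - k - 1] for k in range(p)]
        cols += [cause[p - k - 1:T - k - 1] for k in range(p)]
        cols.append(np.ones(T - p))
        Z, y = np.column_stack(cols), effect[p:]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return y - Z @ beta

    def dep_score(res, series, p=1):
        # crude stand-in for an independence test: residuals vs. past/present/future of series
        T = len(series)
        pairs = [(res, series[p:]), (res, series[p - 1:T - 1]), (res[:-1], series[p + 1:])]
        return max(abs(np.corrcoef(a, b)[0, 1]) for a, b in pairs)

    rng = np.random.default_rng(3)
    T = 500
    e = rng.standard_normal((T, 2))
    x, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        x[t] = 0.5 * x[t - 1] + e[t, 0]                      # ground truth: X -> Y
        y[t] = 0.8 * y[t - 1] + 0.7 * np.tanh(x[t - 1]) + 0.3 * e[t, 1]

    print(dep_score(fit_residuals(y, x), x),                 # small: residuals ~ independent
          dep_score(fit_residuals(x, y), y))                 # larger: wrong direction rejected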

2 0.79595935 17 nips-2013-A multi-agent control framework for co-adaptation in brain-computer interfaces

Author: Josh S. Merel, Roy Fox, Tony Jebara, Liam Paninski

Abstract: In a closed-loop brain-computer interface (BCI), adaptive decoders are used to learn parameters suited to decoding the user’s neural response. Feedback to the user provides information which permits the neural tuning to also adapt. We present an approach to model this process of co-adaptation between the encoding model of the neural signal and the decoding algorithm as a multi-agent formulation of the linear quadratic Gaussian (LQG) control problem. In simulation we characterize how decoding performance improves as the neural encoding and adaptive decoder optimize, qualitatively resembling experimentally demonstrated closed-loop improvement. We then propose a novel, modified decoder update rule which is aware of the fact that the encoder is also changing and show it can improve simulated co-adaptation dynamics. Our modeling approach offers promise for gaining insights into co-adaptation as well as improving user learning of BCI control in practical settings.
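
A scalar sketch of the LQG building block named above: a Riccati-derived LQR gain plus a steady-state Kalman filter, run in closed loop. The paper splits such a loop between two adapting agents (the neural encoder and the decoder); this single-agent version, with illustrative constants, only shows the machinery being co-adapted.

    import numpy as np

    rng = np.random.default_rng(4)
    A, B, C = 1.0, 0.5, 1.0          # scalar dynamics and observation (illustrative)
    Q, R = 1.0, 0.1                  # state and control costs
    W, V = 0.01, 0.1                 # process and observation noise variances

    P = Q                            # control Riccati iteration -> LQR gain
    for _ in range(500):
        P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
    L = (B * P * A) / (R + B * P * B)

    S = W                            # filter Riccati iteration -> Kalman gain
    for _ in range(500):
        S = A * S * A + W - (A * S * C) ** 2 / (C * S * C + V)
    K = (S * C) / (C * S * C + V)

    x, xhat = 1.0, 0.0
    for t in range(100):
        u = -L * xhat                                 # certainty-equivalent control
        x = A * x + B * u + rng.normal(0.0, W ** 0.5)
        y = C * x + rng.normal(0.0, V ** 0.5)
        xpred = A * xhat + B * u
        xhat = xpred + K * (y - C * xpred)            # predict, then correct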

3 0.77158815 266 nips-2013-Recurrent linear models of simultaneously-recorded neural populations

Author: Marius Pachitariu, Biljana Petreska, Maneesh Sahani

Abstract: Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever-larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. We describe a new, scalable approach to discovering low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. We formulate the Recurrent Linear Model (RLM) by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical systems to incorporate a generalised-linear observation process. We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce the cascaded generalised-linear model (CGLM) to capture low-dimensional instantaneous correlations in neural populations. The CGLM describes the cortical recordings better than either Ising or Gaussian models and, like the RLM, can be fit exactly and quickly. The CGLM can also be seen as a generalisation of a lowrank Gaussian model, in this case factor analysis. The computational tractability of the RLM and CGLM allow both to scale to very high-dimensional neural data. 1

4 0.68691909 39 nips-2013-Approximate Gaussian process inference for the drift function in stochastic differential equations

Author: Andreas Ruttor, Philipp Batz, Manfred Opper

Abstract: We introduce a nonparametric approach for estimating drift functions in systems of stochastic differential equations from sparse observations of the state vector. Using a Gaussian process prior over the drift as a function of the state vector, we develop an approximate EM algorithm to deal with the unobserved, latent dynamics between observations. The posterior over states is approximated by a piecewise linearized process of the Ornstein-Uhlenbeck type and the MAP estimation of the drift is facilitated by a sparse Gaussian process regression. 1
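
A sketch of the estimation target under a strong simplifying assumption: if the path is observed densely (so the latent dynamics between observations, which the paper's EM algorithm handles, can be ignored), the drift can be read off by GP regression of scaled increments on states, with observation-noise variance sigma^2/dt coming from the diffusion term. The kernel, length-scale, and double-well drift are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    dt, T, sigma = 0.01, 500, 0.5
    drift = lambda z: -4.0 * z * (z ** 2 - 1.0)         # double-well ground truth
    x = np.zeros(T)
    for t in range(T - 1):                              # Euler-Maruyama simulation
        x[t + 1] = x[t] + drift(x[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

    X, Y = x[:-1], (x[1:] - x[:-1]) / dt                # increment-based drift targets
    grid = np.linspace(-1.5, 1.5, 50)

    def k(a, b, ell=0.3):                               # squared-exponential kernel
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    Kxx = k(X, X) + (sigma ** 2 / dt) * np.eye(len(X))  # noise variance sigma^2/dt
    f_hat = k(grid, X) @ np.linalg.solve(Kxx, Y)        # GP posterior mean of the drift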

5 0.66502351 41 nips-2013-Approximate inference in latent Gaussian-Markov models from continuous time observations

Author: Botond Cseke, Manfred Opper, Guido Sanguinetti

Abstract: We propose an approximate inference algorithm for continuous time Gaussian Markov process models with both discrete and continuous time likelihoods. We show that the continuous time limit of the expectation propagation algorithm exists and results in a hybrid fixed point iteration consisting of (1) expectation propagation updates for discrete time terms and (2) variational updates for the continuous time term. We introduce post-inference correction methods that improve on the marginals of the approximation. This approach extends the classical Kalman-Bucy smoothing procedure to non-Gaussian observations, enabling continuous-time inference in a variety of models, including spiking neuronal models (state-space models with point process observations) and box likelihood models. Experimental results on real and simulated data demonstrate high distributional accuracy and significant computational savings compared to discrete-time approaches in a neural application. 1
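
The core operation behind the expectation propagation updates mentioned above is Gaussian moment matching, and for the box likelihoods the abstract mentions it reduces to truncated-Gaussian moments. A sketch of that single step (the full hybrid fixed-point iteration and the continuous-time limit are not shown):

    from scipy.stats import norm

    def moment_match_box(mu, var, lo, hi):
        # project N(mu, var) * 1[lo <= x <= hi] back onto a Gaussian via its first two moments
        s = var ** 0.5
        a, b = (lo - mu) / s, (hi - mu) / s
        Z = norm.cdf(b) - norm.cdf(a)                    # mass inside the box
        r = (norm.pdf(a) - norm.pdf(b)) / Z
        mean = mu + s * r                                # truncated-Gaussian mean
        v = var * (1.0 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z) - (s * r) ** 2
        return mean, v                                   # truncated-Gaussian variance

    print(moment_match_box(0.0, 1.0, -0.5, 2.0))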

6 0.61562043 269 nips-2013-Regression-tree Tuning in a Streaming Setting

7 0.55350929 127 nips-2013-Generalized Denoising Auto-Encoders as Generative Models

8 0.55197746 89 nips-2013-Dimension-Free Exponentiated Gradient

9 0.54770035 48 nips-2013-Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC

10 0.53304374 337 nips-2013-Transportability from Multiple Environments with Limited Experiments

11 0.51932311 24 nips-2013-Actor-Critic Algorithms for Risk-Sensitive MDPs

12 0.47811022 348 nips-2013-Variational Policy Search via Trajectory Optimization

13 0.47286454 240 nips-2013-Optimization, Learning, and Games with Predictable Sequences

14 0.45495176 45 nips-2013-BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables

15 0.44806021 211 nips-2013-Non-Linear Domain Adaptation with Boosting

16 0.41364869 298 nips-2013-Small-Variance Asymptotics for Hidden Markov Models

17 0.41255492 314 nips-2013-Stochastic Optimization of PCA with Capped MSG

18 0.40927878 146 nips-2013-Large Scale Distributed Sparse Precision Estimation

19 0.38827106 247 nips-2013-Phase Retrieval using Alternating Minimization

20 0.38587973 154 nips-2013-Learning Gaussian Graphical Models with Observed or Latent FVSs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.026), (16, 0.032), (33, 0.089), (34, 0.087), (41, 0.364), (49, 0.035), (56, 0.118), (70, 0.024), (85, 0.036), (89, 0.042), (93, 0.02), (95, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.9214232 257 nips-2013-Projected Natural Actor-Critic

Author: Philip S. Thomas, William C. Dabney, Stephen Giguere, Sridhar Mahadevan

Abstract: Natural actor-critics form a popular class of policy search algorithms for finding locally optimal policies for Markov decision processes. In this paper we address a drawback of natural actor-critics that limits their real-world applicability: their lack of safety guarantees. We present a principled algorithm for performing natural gradient descent over a constrained domain. In the context of reinforcement learning, this allows for natural actor-critic algorithms that are guaranteed to remain within a known safe region of policy space. While deriving our class of constrained natural actor-critic algorithms, which we call Projected Natural Actor-Critics (PNACs), we also elucidate the relationship between natural gradient descent and mirror descent. 1
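
A sketch of the basic step: precondition the gradient with the inverse Fisher matrix, move, then project back into the safe region. The clip below is a plain Euclidean projection onto a box, whereas the paper's point is to perform the projection consistently with the natural-gradient geometry (via its mirror-descent connection); treat the box, damping term, and step size as illustrative assumptions.

    import numpy as np

    def projected_natural_step(theta, grad, fisher, eta, lo, hi):
        # natural-gradient direction: solve F d = g instead of forming F^{-1}
        d = np.linalg.solve(fisher + 1e-6 * np.eye(len(theta)), grad)
        return np.clip(theta - eta * d, lo, hi)   # stay inside the known-safe box

    theta = np.zeros(3)
    fisher = np.diag([4.0, 1.0, 0.25])            # stand-in Fisher information
    grad = np.array([1.0, 1.0, 1.0])
    theta = projected_natural_step(theta, grad, fisher, eta=0.5, lo=-0.3, hi=0.3)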

2 0.87204158 189 nips-2013-Message Passing Inference with Chemical Reaction Networks

Author: Nils E. Napp, Ryan P. Adams

Abstract: Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady-state concentrations of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs.
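
A toy illustration of "concentrations represent probabilities": the mass-action ODE for the reactions 0 -> P_i at rate w_i and P_i + P_j -> P_j at rate 1 drives each concentration to a value proportional to w_i, so normalized steady-state concentrations recover a distribution. This is only a normalization gadget under assumed rates, not the paper's compilation of sum-product message passing.

    import numpy as np

    w = np.array([2.0, 1.0, 1.0])          # unnormalized scores for three outcomes (assumed)
    p = np.ones_like(w)                    # species concentrations
    dt = 0.01
    for _ in range(5000):
        p += dt * (w - p * p.sum())        # mass-action kinetics of the two reaction types
    print(p / p.sum())                     # ~ [0.5, 0.25, 0.25]: steady state encodes the marginals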

3 0.83340037 193 nips-2013-Mixed Optimization for Smooth Functions

Author: Mehrdad Mahdavi, Lijun Zhang, Rong Jin

Abstract: It is well known that the optimal convergence rate for stochastic optimization of smooth functions is O(1/√T), which is the same as for stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T²). In this work, we consider a new setup for optimizing smooth functions, termed Mixed Optimization, which allows access to both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by allowing an additional small number of accesses to the full gradient oracle. We show that, with O(ln T) calls to the full gradient oracle and O(T) calls to the stochastic oracle, the proposed mixed optimization algorithm achieves an optimization error of O(1/T). 1
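
A sketch in the same oracle model: a variance-reduced loop (in the spirit of SVRG) that takes one full gradient per epoch, so O(ln T) full-gradient calls, plus O(T) cheap stochastic calls; it illustrates how mixing the two oracles can beat the purely stochastic rate. This is not the authors' procedure, and the least-squares problem, step size, and epoch count are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    n, dim = 200, 5
    Xd = rng.standard_normal((n, dim))
    yd = Xd @ rng.standard_normal(dim) + 0.1 * rng.standard_normal(n)

    def full_grad(w):                                 # full gradient oracle
        return Xd.T @ (Xd @ w - yd) / n

    def stoch_grad(w, i):                             # stochastic oracle (one sample)
        return Xd[i] * (Xd[i] @ w - yd[i])

    w, eta = np.zeros(dim), 0.05
    for epoch in range(20):                           # one full-gradient call per epoch
        g_ref, w_ref = full_grad(w), w.copy()
        for _ in range(n):                            # many cheap stochastic calls
            i = rng.integers(n)
            w -= eta * (stoch_grad(w, i) - stoch_grad(w_ref, i) + g_ref)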

same-paper 4 0.80015653 62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models

Author: Jonas Peters, Dominik Janzing, Bernhard Schölkopf

Abstract: Causal inference uses observational data to infer the causal structure of the data generating system. We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo). These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains two main contributions: (1) Theoretical: By restricting the model class (e.g. to additive noise) we provide general identifiability results. They cover lagged and instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous feedbacks between the time series. (2) Practical: If there are no feedback loops between time series, we propose an algorithm based on non-linear independence tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial and real data and code is provided. 1

5 0.79880953 258 nips-2013-Projecting Ising Model Parameters for Fast Mixing

Author: Justin Domke, Xianghang Liu

Abstract: Inference in general Ising models is difficult, due to high treewidth making tree-based algorithms intractable. Moreover, when interactions are strong, Gibbs sampling may take exponential time to converge to the stationary distribution. We present an algorithm to project Ising model parameters onto a parameter set that is guaranteed to be fast mixing, under several divergences. We find that Gibbs sampling using the projected parameters is more accurate than with the original parameters when interaction strengths are strong and when limited time is available for sampling.
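
For reference, plain Gibbs sampling on a small Ising model, which is the sampler whose slow mixing at strong interactions motivates the projection; the projection step itself is not sketched here, and the couplings and fields below are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 5
    J = 0.4 * np.triu(rng.standard_normal((n, n)), 1)   # pairwise interaction strengths
    J = J + J.T                                          # symmetric, zero diagonal
    h = 0.1 * rng.standard_normal(n)

    x = rng.choice([-1, 1], size=n)
    samples = []
    for sweep in range(2000):
        for i in range(n):
            field = h[i] + J[i] @ x                      # local field (J_ii = 0)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_i = +1 | rest)
            x[i] = 1 if rng.random() < p_plus else -1
        samples.append(x.copy())
    marginals = np.mean(samples, axis=0)                 # mixes slowly when |J| grows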

6 0.7436412 180 nips-2013-Low-rank matrix reconstruction and clustering via approximate message passing

7 0.74070698 175 nips-2013-Linear Convergence with Condition Number Independent Access of Full Gradients

8 0.7033329 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

9 0.68993157 11 nips-2013-A New Convex Relaxation for Tensor Completion

10 0.58873659 311 nips-2013-Stochastic Convex Optimization with Multiple Objectives

11 0.58345789 184 nips-2013-Marginals-to-Models Reducibility

12 0.58206356 239 nips-2013-Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result

13 0.58127654 20 nips-2013-Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

14 0.55938953 54 nips-2013-Bayesian optimization explains human active search

15 0.55465257 240 nips-2013-Optimization, Learning, and Games with Predictable Sequences

16 0.55432814 250 nips-2013-Policy Shaping: Integrating Human Feedback with Reinforcement Learning

17 0.55046695 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching

18 0.54977769 113 nips-2013-Exact and Stable Recovery of Pairwise Interaction Tensors

19 0.5481512 318 nips-2013-Structured Learning via Logistic Regression

20 0.54668874 79 nips-2013-DESPOT: Online POMDP Planning with Regularization