andrew_gelman_stats-2012-1418: knowledge graph by maker-knowledge-mining

1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings


meta info for this blog

Source: html

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. [sent-6, score-0.558]

2 As it turns out, to get causal conclusions, we need causal assumptions (“no causes in-no causes out”, see Cartwright), because causality is not some entity outside the realm of mathematics. [sent-8, score-1.093]

3 It is true that the language of (causal) DAGs provides a nice way to encode causal assumptions, but it does not mean that they are not mathematical-compatible, or that mathematics cannot be in tune with intuition and the way we think about causality. [sent-16, score-0.543]

4 (*) In regard to the backdoor criterion, and other graphical methods to remove *confounding* bias, we usually assume *local qualitative* knowledge about the causal mechanisms, and then we ask the question of whether a causal query Q can be estimated from the assumptions A together with data D. [sent-18, score-1.208]

5 , given a set of assumptions A and a causal query Q, there exists a procedure that is capable of removing this bias if (and only if) it is possible to remove this bias with the assumptions A. [sent-26, score-1.531]

6 Interestingly, even though you could express the causal assumptions in the language of causal DAGs, so far, we did not have a sound theory on how to use this language to produce coherent results for the problem of external validity. [sent-39, score-1.431]

7 A quick example is the case of the front-door criterion (Pearl Chapter 3, I am in a coffee shop without the book here, probably around page 90, but not sure), in which there is NOT an ignorable adjustment but we DO have a way to get a unbiased estimate of the causal effects. [sent-55, score-0.511]

8 We know that any causal inference in observational studies requires some untested causal assumptions. [sent-79, score-0.716]

9 How does one express causal assumptions mathematically, say that “seatbelt usage” is correlated with, but does not affect choice of treatment? [sent-80, score-0.737]

10 How those assumptions mix with the bayesian hierarchical modeling framework? [sent-81, score-0.618]

11 Putting in a simple way, the idea is that you can formally decide whether a given causal effect is “generalizable” among settings in a principle way; and when those effects are indeed generalizable, we are able to pinpoint what is the mapping between the source and the target settings. [sent-101, score-0.565]

12 On assumptions You say that “in the Bayesian framework the assumptions go into the model of the joint distribution of the potential outcomes”. [sent-126, score-0.889]

13 On testability of assumptions You write that “The testability of the assumptions depend on the data. [sent-135, score-0.896]

14 When I say, “the testability of the assumptions depends on the data,” I mean that any given dataset or data structure will allow some assumptions to be tested but not others. [sent-170, score-0.857]

15 For example, if you have two-level hierarchical data you can directly test various assumptions at the two levels but you won’t be able to say much about the third level. [sent-171, score-0.577]

16 Bareinboim replies: On tolerating bias in the Bayesian framework: Pearl (Causality, 2009, pages 279-280) provides a simple illustration of how Bayesian posteriors behave when the causal effect is not identified. [sent-188, score-0.623]

17 In (Pearl and Bareinboim 2011) we analyze three toy examples, and vividly demonstrate how mathematical routines can tell us whether and how experimental results from one population can be used to estimate causal effects in another population, potentially different from the first. [sent-193, score-0.869]

18 Bareinboim writes above that “mathematical routines can tell us whether and how experimental results from one population can be used to estimate causal effects in another population, potentially different from the first. [sent-201, score-0.793]

19 ” From (my) Bayesian perspective, experimental results from one population can always be used to estimate a causal effect in another population (assuming there is some connection; obviously we would not be doing this for unrelated topics). [sent-202, score-0.73]

20 For another sort of example, we used hierarchical prior distributions to make causal inference in toxicology, combining data from different sources; see here . [sent-206, score-0.628]
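The [sent-i, score-x] pairs above come from scoring each sentence of the post with the tfidf model. The exact weighting scheme the miner uses is not documented here, so the following is only a minimal sketch under an assumed convention: smoothed log-idf per term, with a sentence scored as the mean idf weight of its terms.

```python
import math
from collections import Counter

def tfidf_weights(sentences):
    """Per-term idf weights over a list of tokenized sentences.
    (Assumed scheme: smoothed log idf; the actual weighting used by
    the summarizer above may differ.)"""
    n = len(sentences)
    df = Counter()
    for sent in sentences:
        df.update(set(sent))  # document frequency: count each term once per sentence
    return {t: math.log(n / df[t]) + 1.0 for t in df}

def score_sentence(sentence, idf):
    """Score a sentence as the mean idf weight of its terms, one plausible
    way to produce the [sent-i, score-x] pairs shown above."""
    if not sentence:
        return 0.0
    return sum(idf.get(t, 0.0) for t in sentence) / len(sentence)
```

Under this convention, sentences dominated by corpus-rare terms ("transportability", "backdoor") score higher than sentences made of common terms, which matches the ranking pattern in the list above.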


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('assumptions', 0.338), ('causal', 0.33), ('bareinboim', 0.27), ('bias', 0.234), ('transportability', 0.164), ('pearl', 0.155), ('framework', 0.153), ('dags', 0.15), ('ignorability', 0.11), ('testability', 0.11), ('hierarchical', 0.107), ('bayesian', 0.104), ('causality', 0.095), ('language', 0.095), ('results', 0.09), ('entailed', 0.082), ('guarantees', 0.08), ('sound', 0.077), ('theory', 0.076), ('toy', 0.076), ('backdoor', 0.075), ('data', 0.071), ('estimate', 0.07), ('modeling', 0.069), ('one', 0.069), ('exchangeable', 0.065), ('whether', 0.064), ('different', 0.064), ('variables', 0.064), ('able', 0.061), ('judea', 0.06), ('potential', 0.06), ('effect', 0.059), ('confounding', 0.059), ('way', 0.059), ('set', 0.057), ('population', 0.056), ('models', 0.056), ('concept', 0.056), ('inference', 0.056), ('ajs', 0.055), ('confounder', 0.055), ('guarantee', 0.053), ('somehow', 0.053), ('criterion', 0.052), ('measure', 0.051), ('qualitative', 0.051), ('decide', 0.051), ('wrote', 0.05), ('routines', 0.05)]
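The (wordName, wordTfidf) pairs above rank this post's vocabulary by tf-idf against the rest of the blog corpus. A minimal sketch of how such a top-N list could be produced (assuming the classic tf × log-idf weighting; the miner's actual formula and normalization are not documented here):

```python
import math
from collections import Counter

def top_tfidf_terms(doc, corpus, n=5):
    """Rank terms of `doc` by tf-idf against a background corpus.
    tf = term count / doc length; idf = log(N / number of docs containing term).
    Hypothetical scheme chosen for illustration only."""
    N = len(corpus)
    df = Counter()
    for d in corpus:
        df.update(set(d))
    counts = Counter(doc)
    scores = {t: (c / len(doc)) * math.log(N / df[t]) for t, c in counts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Terms that occur in every document (stopword-like words such as "the") get idf = log(1) = 0 and drop out, which is why the list above is dominated by post-specific vocabulary like 'assumptions' and 'bareinboim'.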

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings


2 0.39779475 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings

Introduction: In a link to our back-and-forth on causal inference and the use of hierarchical models to bridge between different inferential settings, Elias Bareinboim (a computer scientist who is working with Judea Pearl) writes : In the past week, I have been engaged in a discussion with Andrew Gelman and his blog readers regarding causal inference, selection bias, confounding, and generalizability. I was trying to understand how his method which he calls “hierarchical modeling” would handle these issues and what guarantees it provides. . . . If anyone understands how “hierarchical modeling” can solve a simple toy problem (e.g., M-bias, control of confounding, mediation, generalizability), please share with us. In his post, Bareinboim raises a direct question about hierarchical modeling and also indirectly brings up larger questions about what is convincing evidence when evaluating a statistical method. As I wrote earlier, Bareinboim believes that “The only way investigators can decide w

3 0.29348874 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

Introduction: This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . . . Assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites . . . causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. As regular readers know (for example, search this blog for “Pearl”), I have not got much out of the causal-diagrams approach myself, but in general I think that when there are multiple, mathematically equivalent methods of getting the same answer, we tend to go with the framework we are used

4 0.27094066 1469 andrew gelman stats-2012-08-25-Ways of knowing

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed p

5 0.2643643 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

Introduction: Consider two broad classes of inferential questions : 1. Forward causal inference . What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth? 2. Reverse causal inference . What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse? When statisticians and econometricians write about causal inference, they focus on forward causal questions. Rubin always told us: Never ask Why? Only ask What if? And, from the econ perspective, causation is typically framed in terms of manipulations: if x had changed by 1, how much would y be expected to change, holding all else constant? But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions.

6 0.23596078 879 andrew gelman stats-2011-08-29-New journal on causal inference

7 0.23501794 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

8 0.21694781 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

9 0.21382543 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

10 0.21047103 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

11 0.20543863 1492 andrew gelman stats-2012-09-11-Using the “instrumental variables” or “potential outcomes” approach to clarify causal thinking

12 0.20529087 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

13 0.20380171 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

14 0.18524803 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

15 0.18436339 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

16 0.18386543 1383 andrew gelman stats-2012-06-18-Hierarchical modeling as a framework for extrapolation

17 0.18049397 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

18 0.17562692 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

19 0.1702368 602 andrew gelman stats-2011-03-06-Assumptions vs. conditions

20 0.16955841 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.362), (1, 0.153), (2, -0.018), (3, -0.077), (4, -0.038), (5, 0.026), (6, -0.067), (7, 0.025), (8, 0.119), (9, 0.059), (10, -0.073), (11, 0.01), (12, 0.029), (13, -0.015), (14, 0.026), (15, 0.056), (16, -0.035), (17, 0.006), (18, -0.058), (19, 0.087), (20, -0.044), (21, -0.098), (22, 0.103), (23, 0.08), (24, 0.11), (25, 0.209), (26, 0.066), (27, -0.035), (28, -0.019), (29, 0.097), (30, 0.025), (31, -0.065), (32, -0.037), (33, 0.017), (34, -0.089), (35, -0.002), (36, -0.028), (37, -0.059), (38, -0.002), (39, 0.062), (40, -0.024), (41, -0.024), (42, -0.007), (43, -0.028), (44, -0.07), (45, 0.015), (46, 0.035), (47, 0.048), (48, -0.022), (49, -0.026)]
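The simValue numbers in the list below are presumably cosine similarities between dense topic-weight vectors like the one above (self-similarity near 1.0, as the same-blog row shows). A minimal sketch of that computation, assuming plain cosine similarity is what the miner uses:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length topic-weight vectors,
    the usual way a simIndex/simValue table is produced."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)
```

Note the same-blog simValue of 0.96509701 rather than exactly 1.0: after dimensionality reduction the stored vector need not reproduce the query vector bit-for-bit, and floating-point rounding adds further noise (compare the 1.0000001 in the tfidf list above).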

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96509701 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings


2 0.87828857 1492 andrew gelman stats-2012-09-11-Using the “instrumental variables” or “potential outcomes” approach to clarify causal thinking

Introduction: As I’ve written here many times, my experiences in social science and public health research have left me skeptical of statistical methods that hypothesize or try to detect zero relationships between observational data (see, for example, the discussion starting at the bottom of page 960 in my review of causal inference in the American Journal of Sociology). In short, I have a taste for continuous rather than discrete models. As discussed in the above-linked article (with respect to the writings of cognitive scientist Steven Sloman), I think that common-sense thinking about causal inference can often mislead. In many cases, I have found that that the theoretical frameworks of instrumental variables and potential outcomes (for a review see, for example, chapters 9 and 10 of my book with Jennifer) help clarify my thinking. Here is an example that came up in a recent blog discussion. Computer science student Elias Bareinboim gave the following example: “suppose we know nothing a

3 0.87723082 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

Introduction: Consider two broad classes of inferential questions : 1. Forward causal inference . What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth? 2. Reverse causal inference . What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse? When statisticians and econometricians write about causal inference, they focus on forward causal questions. Rubin always told us: Never ask Why? Only ask What if? And, from the econ perspective, causation is typically framed in terms of manipulations: if x had changed by 1, how much would y be expected to change, holding all else constant? But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions.

4 0.87184799 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

Introduction: Martin Lindquist and Michael Sobel published a fun little article in Neuroimage on models and assumptions for causal inference with intermediate outcomes. As their subtitle indicates (“A response to the comments on our comment”), this is a topic of some controversy. Lindquist and Sobel write: Our original comment (Lindquist and Sobel, 2011) made explicit the types of assumptions neuroimaging researchers are making when directed graphical models (DGMs), which include certain types of structural equation models (SEMs), are used to estimate causal effects. When these assumptions, which many researchers are not aware of, are not met, parameters of these models should not be interpreted as effects. . . . [Judea] Pearl does not disagree with anything we stated. However, he takes exception to our use of potential outcomes notation, which is the standard notation used in the statistical literature on causal inference, and his comment is devoted to promoting his alternative conventions. [C

5 0.86455101 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

Introduction: Jeff Walker writes: Your blog has skirted around the value of observational studies and chided folks for using causal language when they only have associations but I sense that you ultimately find value in these associations. I would love for you to expand this thought in a blog. Specifically: Does a measured association “suggest” a causal relationship? Are measured associations a good and efficient way to narrow the field of things that should be studied? Of all the things we should pursue, should we start with the stuff that has some largish measured association? Certainly many associations are not directly causal but due to joint association. Similarly, there must be many variables that are directly causally associated ( A -> B) but the effect, measured as an association, is masked by confounders. So if we took the “measured associations are worthwhile” approach, we’d never or rarely find the masked effects. But I’d also like to know if one is more likely to find a large causal

6 0.85622805 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

7 0.852081 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

8 0.84576416 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

9 0.84575629 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

10 0.82429492 393 andrew gelman stats-2010-11-04-Estimating the effect of A on B, and also the effect of B on A

11 0.82008857 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

12 0.79580855 550 andrew gelman stats-2011-02-02-An IV won’t save your life if the line is tangled

13 0.79261702 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

14 0.78351301 2097 andrew gelman stats-2013-11-11-Why ask why? Forward causal inference and reverse causal questions

15 0.7781074 1802 andrew gelman stats-2013-04-14-Detecting predictability in complex ecosystems

16 0.76271129 807 andrew gelman stats-2011-07-17-Macro causality

17 0.75226605 2274 andrew gelman stats-2014-03-30-Adjudicating between alternative interpretations of a statistical interaction?

18 0.75005734 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings

19 0.74955189 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

20 0.74812227 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.012), (15, 0.051), (16, 0.079), (21, 0.029), (24, 0.156), (53, 0.016), (76, 0.013), (84, 0.017), (85, 0.061), (86, 0.027), (89, 0.015), (95, 0.012), (99, 0.337)]
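Unlike the dense LSI vector, the LDA weights above are sparse (topicId, topicWeight) pairs: topics below some threshold are simply omitted. A sketch of cosine similarity adapted to that sparse format, treating missing topic ids as weight 0 (an assumption about how the miner handles the omitted entries):

```python
import math

def sparse_cosine(u, v):
    """Cosine similarity for sparse {topicId: weight} vectors, matching the
    (id, weight) pair format of the LDA list above. Absent ids count as 0."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)
```

Iterating only over the shorter dict's entries for the dot product keeps the cost proportional to the number of retained topics rather than the full topic count, which is the point of storing the vector sparsely.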

similar blogs list:

simIndex simValue blogId blogTitle

1 0.99019921 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

Introduction: Becky Passonneau and colleagues at the Center for Computational Learning Systems (CCLS) at Columbia have been working on a project for ConEd (New York’s major electric utility) to rank structures based on vulnerability to secondary events (e.g., transformer explosions, cable meltdowns, electrical fires). They’ve been using the R implementation BayesTree of Chipman, George and McCulloch’s Bayesian Additive Regression Trees (BART). BART is a Bayesian non-parametric method that is non-identifiable in two ways. Firstly, it is an additive tree model with a fixed number of trees, the indexes of which aren’t identified (you get the same predictions in a model swapping the order of the trees). This is the same kind of non-identifiability you get with any mixture model (additive or interpolated) with an exchangeable prior on the mixture components. Secondly, the trees themselves have varying structure over samples in terms of number of nodes and their topology (depth, branching, etc

2 0.98656285 167 andrew gelman stats-2010-07-27-Why don’t more medical discoveries become cures?

Introduction: Interesting article by Sharon Begley and Mary Carmichael. They discuss how there is tons of federal support for basic research but that there’s a big gap between research findings and medical applications–a gap that, according to them, arises not just from the inevitable problem that not all research hypotheses pan out, but because actual promising potential cures don’t get researched because of the cost. I have two thoughts on this. First, in my experience, research at any level requires a continuing forward momentum, a push from somebody to keep it going. I’ve worked on some great projects (some of which had Federal research funding) that ground to a halt because the original motivation died. I expect this is true with medical research also. One of the projects that I’m thinking of, which I’ve made almost no progress on for several years, I’m sure would make a useful contribution. I pretty much know it would work–it just takes work to make it work, and it’s hard to do this

3 0.98405439 540 andrew gelman stats-2011-01-26-Teaching evaluations, instructor effectiveness, the Journal of Political Economy, and the Holy Roman Empire

Introduction: Joan Nix writes: Your comments on this paper by Scott Carrell and James West would be most appreciated. I’m afraid the conclusions of this paper are too strong given the data set and other plausible explanations. But given where it is published, this paper is receiving and will continue to receive lots of attention. It will be used to draw deeper conclusions regarding effective teaching and experience. Nix also links to this discussion by Jeff Ely. I don’t completely follow Ely’s criticism, which seems to me to be too clever by half, but I agree with Nix that the findings in the research article don’t seem to fit together very well. For example, Carrell and West estimate that the effects of instructors on performance in the follow-on class is as large as the effects on the class they’re teaching. This seems hard to believe, and it seems central enough to their story that I don’t know what to think about everything else in the paper. My other thought about teaching eva

same-blog 4 0.983944 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings


5 0.98286963 2300 andrew gelman stats-2014-04-21-Ticket to Baaaath

Introduction: Ooooooh, I never ever thought I’d have a legitimate excuse to tell this story, and now I do! The story took place many years ago, but first I have to tell you what made me think of it: Rasmus Bååth posted the following comment last month: On airplane tickets a Swedish “å” is written as “aa” resulting in Rasmus Baaaath. Once I bought a ticket online and five minutes later a guy from Lufthansa calls me and asks if I misspelled my name… OK, now here’s my story (which is not nearly as good). A long time ago (but when I was already an adult), I was in England for some reason, and I thought I’d take a day trip from London to Bath. So here I am on line, trying to think of what to say at the ticket counter. I remember that in England, they call Bath, Bahth. So, should I ask for “a ticket to Bahth”? I’m not sure, I’m afraid that it will sound silly, like I’m trying to fake an English accent. So, when I get to the front of the line, I say, hesitantly, “I’d like a ticket to Bath?

6 0.98258024 584 andrew gelman stats-2011-02-22-“Are Wisconsin Public Employees Underpaid?”

7 0.98085511 375 andrew gelman stats-2010-10-28-Matching for preprocessing data for causal inference

8 0.98071986 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

9 0.97951102 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

10 0.97908533 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

11 0.97894681 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

12 0.97891349 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

13 0.9785701 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

14 0.97824889 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

15 0.97800821 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

16 0.97757447 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

17 0.97751069 2120 andrew gelman stats-2013-12-02-Does a professor’s intervention in online discussions have the effect of prolonging discussion or cutting it off?

18 0.97744256 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

19 0.97743571 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

20 0.97722822 796 andrew gelman stats-2011-07-10-Matching and regression: two great tastes etc etc