andrew_gelman_stats andrew_gelman_stats-2014 andrew_gelman_stats-2014-2170 knowledge-graph by maker-knowledge-mining

2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions


meta info for this blog

Source: html

Introduction: This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . . . Assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites . . . causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. As regular readers know (for example, search this blog for “Pearl”), I have not got much out of the causal-diagrams approach myself, but in general I think that when there are multiple, mathematically equivalent methods of getting the same answer, we tend to go with the framework we are used


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This material should be familiar to many of you but could be helpful to newcomers. [sent-1, score-0.079]

2 Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. [sent-2, score-0.555]

3 To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . [sent-6, score-0.154]

4 The more explicit the assumption, the more criticism it invites . [sent-10, score-0.259]

5 causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. [sent-13, score-0.815]

6 Thus, my unfamiliarity or discomfort with Pearl’s causal diagrams does not represent an anti-endorsement but rather just an open statement about my own experiences. [sent-15, score-0.322]

7 (I do have disagreements with some explicators of Pearl, for example I think Steven Sloman made fundamental misconceptions in his book that I reviewed a few years ago, but that’s another story. [sent-16, score-0.14]

8 I can’t fault a method because it can lead to errors if used without full understanding, any more than I would slam Bayesian inference in general just because it can give bad results when people inappropriately assume flat priors. [sent-17, score-0.232]

9 In any case, though, I resonate with Pearl’s general point that making strong assumptions can be good: strong assumptions give many handles for model checking (see chapter 6 of BDA) and ultimately for model improvement. [sent-18, score-0.795]

10 At the conclusion of his post, Pearl comments on the difficulties of working with the categories from Rubin’s 1976 paper on inference and missing data, with the three categories being Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). [sent-21, score-0.55]

11 Many years ago I asked Rubin about this point, that these particular definitions seem like an odd way to divide the world into three parts. [sent-23, score-0.206]

12 Rubin’s reply, when I asked him this, was that he used this awkward partition with these awkward names to be consistent with the existing statistical literature. [sent-26, score-0.522]

13 What was happening was that researchers were already using “missing at random” and similar terms but in a sloppy way, without any mathematical definition or clear statistical justification. [sent-27, score-0.148]

14 So his 1976 paper was, to a large extent, an effort at rationalizing existing terms and practices, taking what people were already doing and uncovering the implicit models underlying these methods. [sent-28, score-0.655]

15 So, although Pearl seems to think of work based on Rubin’s 1976 paper as somewhat unscientific, I think he should consider the history behind this: Rubin’s definitions clarify the assumptions underlying the methods that people were already happy to use. [sent-32, score-0.652]

16 I think this sort of activity—taking a proposed or existing method and considering what underlying model it corresponds to—to be an excellent thing to do, and very much in the Bayesian tradition. [sent-33, score-0.402]

17 Here are two early examples of my own such efforts, from 1990 on a paper by Silverman et al. [sent-34, score-0.178]

18 In both cases I don’t think the authors of the original papers really saw the point of my Bayesian reinterpretations, but I found it very helpful to take a method and consider what it meant as a model. [sent-36, score-0.172]
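The sentScore values in the summary above presumably come from weighting each sentence by the tfidf of its terms, but the mining pipeline itself is not reproduced on this page. The following is only a minimal sketch under that assumption, using scikit-learn's TfidfVectorizer and a simple sum-of-weights scoring rule; the sentence list and the scoring rule are illustrative, not the original code.

```python
# Minimal sketch (assumption): score each sentence by the total tfidf weight of
# its terms and rank sentences to form an extractive summary. This is NOT the
# maker-knowledge-mining code, just one plausible reconstruction.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "This material should be familiar to many of you but could be helpful to newcomers.",
    "Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions.",
    # ... remaining sentences of the post ...
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)        # shape: (n_sentences, n_terms)
scores = np.asarray(X.sum(axis=1)).ravel()     # one tfidf-mass score per sentence

# Print sentences from highest to lowest score, mirroring the table above.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}  [sent-{idx}, score={scores[idx]:.3f}]  {sentences[idx][:60]}")
```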


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('pearl', 0.409), ('rubin', 0.268), ('missing', 0.25), ('random', 0.225), ('assumptions', 0.222), ('diagrams', 0.148), ('missingness', 0.143), ('definitions', 0.129), ('existing', 0.128), ('awkward', 0.117), ('explicit', 0.116), ('underlying', 0.114), ('paper', 0.106), ('causal', 0.102), ('categories', 0.097), ('method', 0.093), ('names', 0.088), ('nonexperimental', 0.085), ('feldman', 0.085), ('harshest', 0.085), ('rationalizing', 0.085), ('already', 0.081), ('realization', 0.08), ('untested', 0.08), ('unscientific', 0.08), ('observed', 0.079), ('helpful', 0.079), ('world', 0.077), ('sloman', 0.076), ('handles', 0.076), ('resonate', 0.076), ('judgmental', 0.076), ('criticism', 0.075), ('inappropriately', 0.074), ('uncovering', 0.074), ('et', 0.072), ('partition', 0.072), ('compliance', 0.072), ('discomfort', 0.072), ('misconceptions', 0.07), ('investigators', 0.07), ('disagreements', 0.07), ('invites', 0.068), ('model', 0.067), ('transparent', 0.067), ('jasa', 0.067), ('terms', 0.067), ('bayesian', 0.066), ('efron', 0.066), ('general', 0.065)]
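The simValue column in the list below behaves like a cosine similarity between whole-post tfidf vectors (the post scores nearly 1.0 against itself). As a hedged illustration only, such a ranking could be computed as follows; the posts dict, its keys, and the use of scikit-learn are assumptions rather than details given on this page.

```python
# Sketch (assumption): rank other posts by cosine similarity of their tfidf
# vectors against post 2170. Corpus contents here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "2170": "Judea Pearl overview on causal inference ... (full post text)",
    "1418": "Long discussion about causal inference ... (full post text)",
    "1888": "New Judea Pearl journal of causal inference ... (full post text)",
    # ... every other post in the blog corpus ...
}

ids = list(posts)
X = TfidfVectorizer(stop_words="english").fit_transform(posts.values())

# Similarity of every post to post 2170; the same-blog entry scores ~1.0.
sims = cosine_similarity(X[ids.index("2170")], X).ravel()
for post_id, sim in sorted(zip(ids, sims), key=lambda t: t[1], reverse=True):
    print(f"{sim:.8f}  {post_id}")
```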

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

Introduction: This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . . . Assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites . . . causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. As regular readers know (for example, search this blog for “Pearl”), I have not got much out of the causal-diagrams approach myself, but in general I think that when there are multiple, mathematically equivalent methods of getting the same answer, we tend to go with the framework we are used

2 0.29348874 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

3 0.28778827 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

Introduction: Pearl reports that his Journal of Causal Inference has just posted its first issue , which contains a mix of theoretical and applied papers. Pearl writes that they welcome submissions on all aspects of causal inference.

4 0.19898522 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

Introduction: In an article published in 2001, Pearl wrote: I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases. Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. He elaborates: The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptom

5 0.18753982 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

Introduction: Martin Lindquist and Michael Sobel published a fun little article in Neuroimage on models and assumptions for causal inference with intermediate outcomes. As their subtitle indicates (“A response to the comments on our comment”), this is a topic of some controversy. Lindquist and Sobel write: Our original comment (Lindquist and Sobel, 2011) made explicit the types of assumptions neuroimaging researchers are making when directed graphical models (DGMs), which include certain types of structural equation models (SEMs), are used to estimate causal effects. When these assumptions, which many researchers are not aware of, are not met, parameters of these models should not be interpreted as effects. . . . [Judea] Pearl does not disagree with anything we stated. However, he takes exception to our use of potential outcomes notation, which is the standard notation used in the statistical literature on causal inference, and his comment is devoted to promoting his alternative conventions. [C

6 0.1723381 1624 andrew gelman stats-2012-12-15-New prize on causality in statistics education

7 0.16300943 879 andrew gelman stats-2011-08-29-New journal on causal inference

8 0.15808341 2359 andrew gelman stats-2014-06-04-All the Assumptions That Are My Life

9 0.1456693 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings

10 0.14212926 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

11 0.14192502 1469 andrew gelman stats-2012-08-25-Ways of knowing

12 0.14055943 1527 andrew gelman stats-2012-10-10-Another reason why you can get good inferences from a bad model

13 0.13307168 109 andrew gelman stats-2010-06-25-Classics of statistics

14 0.1268708 1962 andrew gelman stats-2013-07-30-The Roy causal model?

15 0.12163621 2245 andrew gelman stats-2014-03-12-More on publishing in journals

16 0.12077025 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

17 0.12052523 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

18 0.11712325 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

19 0.11700012 45 andrew gelman stats-2010-05-20-Domain specificity: Does being really really smart or really really rich qualify you to make economic policy?

20 0.11539583 1628 andrew gelman stats-2012-12-17-Statistics in a world where nothing is random


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.239), (1, 0.09), (2, -0.033), (3, -0.046), (4, -0.05), (5, -0.009), (6, -0.046), (7, 0.005), (8, 0.124), (9, 0.0), (10, -0.001), (11, 0.008), (12, -0.03), (13, 0.01), (14, 0.038), (15, 0.046), (16, -0.005), (17, 0.01), (18, -0.049), (19, 0.053), (20, -0.015), (21, -0.06), (22, 0.065), (23, 0.051), (24, 0.075), (25, 0.145), (26, 0.005), (27, 0.018), (28, 0.036), (29, 0.095), (30, 0.002), (31, -0.006), (32, -0.026), (33, 0.046), (34, -0.076), (35, -0.005), (36, -0.001), (37, 0.013), (38, -0.037), (39, 0.027), (40, -0.041), (41, -0.011), (42, 0.036), (43, -0.013), (44, -0.071), (45, 0.008), (46, -0.018), (47, 0.014), (48, -0.018), (49, 0.025)]
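The 50-element (topicId, topicWeight) vector above looks like a document projected into an LSI (latent semantic indexing) space; the page does not say which library produced it. The gensim-based sketch below is an assumption about how such a vector and the similarity ranking that follows might be generated (50 topics is inferred only from the topic ids 0-49).

```python
# Sketch (assumption): project tfidf-weighted bag-of-words vectors into a
# 50-topic LSI space with gensim and rank posts by similarity in that space.
from gensim import corpora, models, similarities

docs = [
    "pearl causal inference assumptions rubin missing data ...",    # post 2170 (placeholder)
    "bareinboim selection bias hierarchical models framework ...",  # post 1418 (placeholder)
    # ... tokenized text of every other post ...
]
texts = [d.lower().split() for d in docs]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

tfidf = models.TfidfModel(bow)
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=50)

# lsi[...] yields (topicId, topicWeight) pairs like the vector printed above.
query = lsi[tfidf[bow[0]]]
index = similarities.MatrixSimilarity(lsi[tfidf[bow]])
for doc_idx, sim in sorted(enumerate(index[query]), key=lambda t: t[1], reverse=True)[:5]:
    print(f"{sim:.8f}  doc {doc_idx}")
```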

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95262647 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

Introduction: This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to work is of no lesser scientific value than seeking evidence for how the world works . . . Assumptions are self-destructive in their honesty. The more explicit the assumption, the more criticism it invites . . . causal diagrams invite the harshest criticism because they make assumptions more explicit and more transparent than other representation schemes. As regular readers know (for example, search this blog for “Pearl”), I have not got much out of the causal-diagrams approach myself, but in general I think that when there are multiple, mathematically equivalent methods of getting the same answer, we tend to go with the framework we are used

2 0.87113464 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

3 0.82899725 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

Introduction: Martin Lindquist and Michael Sobel published a fun little article in Neuroimage on models and assumptions for causal inference with intermediate outcomes. As their subtitle indicates (“A response to the comments on our comment”), this is a topic of some controversy. Lindquist and Sobel write: Our original comment (Lindquist and Sobel, 2011) made explicit the types of assumptions neuroimaging researchers are making when directed graphical models (DGMs), which include certain types of structural equation models (SEMs), are used to estimate causal effects. When these assumptions, which many researchers are not aware of, are not met, parameters of these models should not be interpreted as effects. . . . [Judea] Pearl does not disagree with anything we stated. However, he takes exception to our use of potential outcomes notation, which is the standard notation used in the statistical literature on causal inference, and his comment is devoted to promoting his alternative conventions. [C

4 0.80953419 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

Introduction: Consider two broad classes of inferential questions : 1. Forward causal inference . What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth? 2. Reverse causal inference . What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse? When statisticians and econometricians write about causal inference, they focus on forward causal questions. Rubin always told us: Never ask Why? Only ask What if? And, from the econ perspective, causation is typically framed in terms of manipulations: if x had changed by 1, how much would y be expected to change, holding all else constant? But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions.

5 0.80453938 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

Introduction: In an article published in 2001, Pearl wrote: I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases. Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. He elaborates: The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptom

6 0.80313414 1492 andrew gelman stats-2012-09-11-Using the “instrumental variables” or “potential outcomes” approach to clarify causal thinking

7 0.79384959 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

8 0.78414273 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

9 0.775859 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

10 0.76666141 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

11 0.75992787 393 andrew gelman stats-2010-11-04-Estimating the effect of A on B, and also the effect of B on A

12 0.73724008 550 andrew gelman stats-2011-02-02-An IV won’t save your life if the line is tangled

13 0.72509247 1962 andrew gelman stats-2013-07-30-The Roy causal model?

14 0.72273821 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings

15 0.71595287 2097 andrew gelman stats-2013-11-11-Why ask why? Forward causal inference and reverse causal questions

16 0.71440995 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

17 0.70681083 1802 andrew gelman stats-2013-04-14-Detecting predictability in complex ecosystems

18 0.70364213 1624 andrew gelman stats-2012-12-15-New prize on causality in statistics education

19 0.69867897 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

20 0.68982059 1645 andrew gelman stats-2012-12-31-Statistical modeling, causal inference, and social science


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(0, 0.017), (8, 0.017), (15, 0.084), (16, 0.064), (21, 0.058), (22, 0.015), (24, 0.156), (41, 0.012), (53, 0.02), (73, 0.012), (76, 0.041), (86, 0.028), (99, 0.332)]
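The sparse (topicId, topicWeight) list above is what an LDA topic model typically reports for a single document once low-probability topics are dropped; topic ids up to 99 suggest roughly 100 topics, though that is a guess. A minimal gensim sketch under those assumptions:

```python
# Sketch (assumption): fit a ~100-topic LDA model with gensim and read off the
# per-document topic mixture; only topics above a small cutoff are returned,
# which is why the vector above is sparse.
from gensim import corpora, models

docs = [
    "pearl causal inference assumptions rubin missing data ...",    # post 2170 (placeholder)
    "peer review statistical graphics significance magazine ...",   # post 2013 (placeholder)
    # ... tokenized text of every other post ...
]
texts = [d.lower().split() for d in docs]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=100, passes=10, random_state=0)

# (topicId, topicWeight) pairs for post 2170, like the list printed above.
print(lda.get_document_topics(bow[0], minimum_probability=0.01))
```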

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98114866 2013 andrew gelman stats-2013-09-08-What we need here is some peer review for statistical graphics

Introduction: Under the heading, “Bad graph candidate,” Kevin Wright points to this article [link fixed], writing: Some of the figures use the same line type for two different series. More egregious are the confidence intervals that are constant width instead of increasing in width into the future. Indeed. What’s even more embarrassing is that these graphs appeared in an article in the magazine Significance, sponsored by the American Statistical Association and the Royal Statistical Society. Perhaps every scientific journal could have a graphics editor whose job is to point out really horrible problems and require authors to make improvements. The difficulty, as always, is that scientists write these articles for free and as a public service (publishing in Significance doesn’t pay, nor does it count as a publication in an academic record), so it might be difficult to get authors to fix their graphs. On the other hand, if an article is worth writing at all, it’s worth trying to conv

2 0.98073852 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

Introduction: Over the years I’ve written a dozen or so journal articles that have appeared with discussions, and I’ve participated in many published discussions of others’ articles as well. I get a lot out of these article-discussion-rejoinder packages, in all three of my roles as reader, writer, and discussant. Part 1: The story of an unsuccessful discussion The first time I had a discussion article was the result of an unfortunate circumstance. I had a research idea that resulted in an article with Don Rubin on monitoring the mixing of Markov chain simulations. I knew the idea was great, but back then we worked pretty slowly so it was a while before we had a final version to submit to a journal. (In retrospect I wish I’d just submitted the draft version as it was.) In the meantime I presented the paper at a conference. Our idea was very well received (I had a sheet of paper so people could write their names and addresses to get preprints, and we got either 50 or 150 (I can’t remembe

3 0.98039061 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

Introduction: In my comments on academic cheating , I briefly discussed the question of how some of these papers could’ve been published in the first place, given that they tend to be of low quality. (It’s rare that people plagiarize the good stuff, and, when they do—for example when a senior scholar takes credit for a junior researcher’s contributions without giving proper credit—there’s not always a paper trail, and there can be legitimate differences of opinion about the relative contributions of the participants.) Anyway, to get back to the cases at hand: how did these rulebreakers get published in the first place? The question here is not how did they get away with cheating but how is it that top journals were publishing mediocre research? In the case of the profs who falsified data (Diederik Stapel) or did not follow scientific protocol (Mark Hauser), the answer is clear: By cheating, they were able to get the sort of too-good-to-be-true results which, if they were true, would be

4 0.97872198 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

5 0.97812027 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

same-blog 6 0.9775818 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

7 0.97756654 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

8 0.97670352 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

9 0.97555608 431 andrew gelman stats-2010-11-26-One fun thing about physicists . . .

10 0.97548997 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

11 0.97536731 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

12 0.97506553 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

13 0.97463858 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

14 0.97386956 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

15 0.97346121 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

16 0.97304851 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

17 0.9727 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

18 0.97245365 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

19 0.97240031 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

20 0.97237736 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model