andrew_gelman_stats-2011-785: knowledge-graph by maker-knowledge-mining

785 andrew gelman stats-2011-07-02-Experimental reasoning in social science


meta info for this blog

Source: html

Introduction: As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when you change something, it is necessary to change it.” At the same time, in my capacity as a social scientist, I’ve published many applied research papers, almost none of which have used experimental data. In the present article, I’ll address the following questions: 1. Why do I agree with the consensus characterization of randomized experimentation as a gold standard? 2. Given point 1 above, why does almost all my research use observational data? In confronting these issues, we must consider some general issues in the strategy of social science research. We also take from the psychology methods literature a more nuanced perspective that considers several different aspects of research design and goes beyond the simple division into randomized experiments, observational studies, and formal theory.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText [sentNum, sentScore]

1 At the same time, in my capacity as a social scientist, I’ve published many applied research papers, almost none of which have used experimental data. [sent-2, score-0.701]

2 In the present article, I’ll address the following questions: 1. [sent-3, score-0.217]

3 Why do I agree with the consensus characterization of randomized experimentation as a gold standard? [sent-4, score-0.979]

4 Given point 1 above, why does almost all my research use observational data? [sent-6, score-0.393]

5 In confronting these issues, we must consider some general issues in the strategy of social science research. [sent-7, score-0.606]

6 We also take from the psychology methods literature a more nuanced perspective that considers several different aspects of research design and goes beyond the simple division into randomized experiments, observational studies, and formal theory. [sent-8, score-1.177]

7 Here’s the full article, which is appearing in a volume, Field Experiments and Their Critics, edited by Dawn Teele. [sent-9, score-0.343]

8 It was fun to write a whole article on causal inference in social science without duplicating the article that I’d recently written for the American Journal of Sociology. [sent-10, score-0.749]

9 Actually, it contains the material for several blog entries had I chosen to present it that way. [sent-12, score-0.518]

10 In any case, I think points 1 and 2 are central to any consideration of causal inference in applied statistics. [sent-13, score-0.463]
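The scoring behind an extractive summary like the one above is straightforward: weight each word by tf-idf, then score each sentence by the total weight of the words it contains and keep the top-ranked sentences. Here is a minimal Python sketch of that idea; the toy sentences, the scikit-learn pipeline, and the exact scoring rule are illustrative assumptions, not the code that actually generated this page.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; in practice these would be the sentences of the blog post.
sentences = [
    "At the same time, I've published many applied research papers.",
    "Why do I agree with the consensus characterization of randomized experimentation as a gold standard?",
    "In confronting these issues, we must consider some general issues in the strategy of social science research.",
]

# Weight each term by tf-idf, then score a sentence by the sum of its term weights.
vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(sentences)   # shape: (n_sentences, n_terms)
scores = weights.sum(axis=1).A1                 # one score per sentence

# Print sentences highest-scoring first, as in the summary table above.
for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[i]), 3), sentences[i])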


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('hunter', 0.283), ('randomized', 0.277), ('experimentation', 0.235), ('gold', 0.232), ('social', 0.186), ('observational', 0.184), ('pithily', 0.162), ('confronting', 0.146), ('experiments', 0.144), ('characterization', 0.134), ('nuanced', 0.134), ('present', 0.133), ('causal', 0.13), ('article', 0.126), ('consideration', 0.121), ('capacity', 0.118), ('issues', 0.116), ('considers', 0.115), ('change', 0.113), ('almost', 0.113), ('edited', 0.111), ('contrary', 0.109), ('applied', 0.109), ('trained', 0.106), ('appearing', 0.106), ('representing', 0.105), ('inference', 0.103), ('standard', 0.102), ('several', 0.102), ('occasional', 0.101), ('consensus', 0.101), ('critics', 0.099), ('division', 0.099), ('volume', 0.097), ('box', 0.096), ('research', 0.096), ('chosen', 0.095), ('entries', 0.094), ('contains', 0.094), ('formal', 0.091), ('expressed', 0.089), ('address', 0.084), ('despite', 0.081), ('hold', 0.081), ('strategy', 0.08), ('aspects', 0.079), ('none', 0.079), ('sciences', 0.079), ('science', 0.078), ('arguments', 0.077)]
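A word list like the one above can be read directly off a document's tf-idf vector: take the nonzero entries and keep the N largest. A minimal sketch under the same scikit-learn assumption; the corpus and the top-N cutoff are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "randomized experimentation is the gold standard",
    "observational data in social science research",
    "causal inference in applied statistics",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()

# Top-N (wordName, wordTfidf) pairs for the first document.
row = matrix[0].toarray().ravel()
top = row.argsort()[::-1][:5]
print([(terms[i], round(float(row[i]), 3)) for i in top if row[i] > 0])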

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

Introduction: same as the introduction at the top of this page.

2 0.25557163 1778 andrew gelman stats-2013-03-27-My talk at the University of Michigan today 4pm

Introduction: Causality and Statistical Learning. Andrew Gelman, Statistics and Political Science, Columbia University. Wed 27 Mar, 4pm, Betty Ford Auditorium, Ford School of Public Policy. Causal inference is central to the social and biomedical sciences. There are unresolved debates about the meaning of causality and the methods that should be used to measure it. As a statistician, I am trained to say that randomized experiments are a gold standard, yet I have spent almost all my applied career analyzing observational data. In this talk we shall consider various approaches to causal reasoning from the perspective of an applied statistician who recognizes the importance of causal identification yet must learn from available information. Two relevant papers are here and here.

3 0.23629567 1721 andrew gelman stats-2013-02-13-A must-read paper on statistical analysis of experimental data

Introduction: Russ Lyons points to an excellent article on statistical experimentation by Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, Ya Xu, a group of software engineers (I presume) at Microsoft. Kohavi et al. write: Online controlled experiments are often utilized to make data-driven decisions at Amazon, Microsoft . . . deployment and mining of online controlled experiments at scale—thousands of experiments now—has taught us many lessons. The paper is well written and has excellent examples (unfortunately the substantive topics are unexciting things like clicks and revenue per user, but the general principles remain important). The ideas will be familiar to anyone with experience in practical statistics but don’t always make it into textbooks or courses, so I think many people could learn a lot from this article. I was disappointed that they didn’t cite much of the statistics literature— not even the classic Box, Hunter, and Hunter book on industrial experimentation.

4 0.19292387 2207 andrew gelman stats-2014-02-11-My talks in Bristol this Wed and London this Thurs

Introduction: 1. Causality and statistical learning (Wed 12 Feb 2014, 16:00, at University of Bristol): Causal inference is central to the social and biomedical sciences. There are unresolved debates about the meaning of causality and the methods that should be used to measure it. As a statistician, I am trained to say that randomized experiments are a gold standard, yet I have spent almost all my applied career analyzing observational data. In this talk we shall consider various approaches to causal reasoning from the perspective of an applied statistician who recognizes the importance of causal identification, yet must learn from available information. This is a good one. They laughed their asses off when I did it in Ann Arbor. But it has serious stuff too. As George Carlin (or, for that matter, John or Brad) might say, it’s funny because it’s true. Here are some old slides, but I plan to mix in a bit of new material. 2. Theoretical Statistics is the Theory of Applied Statistics

5 0.18938828 186 andrew gelman stats-2010-08-04-“To find out what happens when you change something, it is necessary to change it.”

Introduction: From the classic Box, Hunter, and Hunter book. The point of the saying is pretty clear, I think: There are things you learn from perturbing a system that you’ll never find out from any amount of passive observation. This is not always true–sometimes “nature” does the experiment for you–but I think it represents an important insight. I’m currently writing (yet another) review article on causal inference and am planning to use this quote. P.S. I find it helpful to write these reviews for a similar reason that I like to blog on certain topics over and over, each time going a bit further (I hope) than the time before. Beyond the benefit of communicating my recommendations to new audiences, writing these sorts of reviews gives me an excuse to explore my thoughts with more rigor. P.P.S. In the original version of this blog entry, I correctly attributed the quote to Box but I incorrectly remembered it as “No understanding without manipulation.” Karl Broman (see comment below) gave me

6 0.17818524 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

7 0.15956904 2245 andrew gelman stats-2014-03-12-More on publishing in journals

8 0.13957703 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies

9 0.13890958 1952 andrew gelman stats-2013-07-23-Christakis response to my comment on his comments on social science (or just skip to the P.P.P.S. at the end)

10 0.13805784 879 andrew gelman stats-2011-08-29-New journal on causal inference

11 0.13314138 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

12 0.13088198 32 andrew gelman stats-2010-05-14-Causal inference in economics

13 0.13033617 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

14 0.12286326 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

15 0.11717212 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

16 0.11577796 1268 andrew gelman stats-2012-04-18-Experimenting on your intro stat course, as a way of teaching experimentation in your intro stat course (and also to improve the course itself)

17 0.11233872 109 andrew gelman stats-2010-06-25-Classics of statistics

18 0.11097016 1630 andrew gelman stats-2012-12-18-Postdoc positions at Microsoft Research – NYC

19 0.10937482 854 andrew gelman stats-2011-08-15-A silly paper that tries to make fun of multilevel models

20 0.10785042 1117 andrew gelman stats-2012-01-13-What are the important issues in ethics and statistics? I’m looking for your input!
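The simValue scores in such a list are, in a setup like this one, typically cosine similarities between tf-idf document vectors, which is why the same-blog entry scores essentially 1. A minimal sketch of that computation; the placeholder post texts and ids are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    785: "randomized experimentation gold standard observational data",
    1778: "causality statistical learning randomized experiments",
    186: "to find out what happens when you change something",
}

ids = list(posts)
matrix = TfidfVectorizer().fit_transform(posts.values())

# Similarity of post 785 (row 0) to every post, ranked highest first.
sims = cosine_similarity(matrix[0], matrix).ravel()
for i in sims.argsort()[::-1]:
    print(f"{sims[i]:.8f}  {ids[i]}")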


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.198), (1, -0.029), (2, -0.093), (3, -0.104), (4, -0.077), (5, 0.023), (6, -0.092), (7, -0.015), (8, -0.016), (9, 0.065), (10, 0.004), (11, -0.009), (12, 0.002), (13, 0.016), (14, -0.01), (15, -0.004), (16, -0.039), (17, 0.019), (18, -0.035), (19, 0.033), (20, 0.001), (21, -0.096), (22, 0.034), (23, 0.072), (24, 0.094), (25, 0.112), (26, 0.053), (27, -0.039), (28, -0.015), (29, 0.026), (30, 0.048), (31, -0.061), (32, -0.023), (33, -0.037), (34, -0.028), (35, -0.007), (36, 0.024), (37, -0.026), (38, 0.023), (39, 0.029), (40, -0.052), (41, 0.016), (42, 0.027), (43, 0.018), (44, -0.017), (45, 0.007), (46, -0.03), (47, 0.034), (48, -0.007), (49, -0.012)]
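A dense, signed topic vector like this is characteristic of LSI: each document is projected onto the top singular vectors of the term-document matrix, so weights can be negative. A minimal gensim-style sketch with a toy corpus and two topics (the page itself evidently uses 50); all names and parameters here are illustrative assumptions.

from gensim import corpora, models

texts = [
    ["randomized", "experimentation", "gold", "standard"],
    ["observational", "data", "social", "science"],
    ["causal", "inference", "applied", "statistics"],
]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# Truncated-SVD projection; topic weights may be negative, as above.
lsi = models.LsiModel(bow, id2word=dictionary, num_topics=2)
print(lsi[bow[0]])   # e.g. [(0, 1.41), (1, -0.22)]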

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97722208 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

Introduction: same as the introduction at the top of this page.

2 0.86738056 1802 andrew gelman stats-2013-04-14-Detecting predictability in complex ecosystems

Introduction: A couple people pointed me to a recent article , “Detecting Causality in Complex Ecosystems,” by fisheries researchers George Sugihara, Robert May, Hao Ye, Chih-hao Hsieh, Ethan Deyle, Michael Fogarty, and Stephan Munch. I don’t know anything about ecology research but I could imagine this method being useful in that field. I can’t see the approach doing much in political science, where I think their stated goal of “identifying causal networks” is typically irrelevant. That said, if you replace the word “causality” by “predictability” everywhere in the paper, it starts to make a lot more sense. As they write, they are working within “a framework that uses predictability as opposed to correlation to identify causation between time-series variables.” Setting causation aside, predictability is an important topic in itself. The search for patterns of predictability in complex structures may motivate causal hypotheses that can be studied more directly, using more traditional statistical methods.

3 0.8601371 879 andrew gelman stats-2011-08-29-New journal on causal inference

Introduction: Judea Pearl is starting an (online) Journal of Causal Inference. The first issue is planned for Fall 2011 and the website is now open for submissions. Here’s the background (from Pearl): Existing discipline-specific journals tend to bury causal analysis in the language and methods of traditional statistical methodologies, creating the inaccurate impression that causal questions can be handled by routine methods of regression, simultaneous equations or logical implications, and glossing over the special ingredients needed for causal analysis. In contrast, Journal of Causal Inference highlights both the uniqueness and interdisciplinary nature of causal research. In addition to significant original research articles, Journal of Causal Inference also welcomes: 1) Submissions that synthesize and assess cross-disciplinary methodological research 2) Submissions that discuss the history of the causal inference field and its philosophical underpinnings 3) Unsolicited short communications

4 0.79831636 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

Introduction: Joshua Vogelstein pointed me to this post by Michael Nielsen on how to teach Simpson’s paradox. I don’t know if Nielsen (and others) are aware that people have developed some snappy graphical methods for displaying Simpson’s paradox (and, more generally, aggregation issues). We do some of this in our Red State Blue State book, but before that was the BK plot, named by Howard Wainer after a 2001 paper by Stuart Baker and Barnett Kramer, although it apparently appeared earlier in a 1987 paper by Jeon, Chung, and Bae, and doubtless was made by various other people before then. Here’s Wainer’s graphical explication from 2002 (adapted from Baker and Kramer’s 2001 paper): Here’s the version from our 2007 article (with Boris Shor, Joe Bafumi, and David Park): But I recommend Wainer’s article (linked to above) as the first thing to read on the topic of presenting aggregation paradoxes in a clear and grabby way. P.S. Robert Long writes in: I noticed your post ab

5 0.79743433 1778 andrew gelman stats-2013-03-27-My talk at the University of Michigan today 4pm

Introduction: same as the corresponding entry in the tfidf similar-blogs list above.

6 0.76120341 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

7 0.75741935 1624 andrew gelman stats-2012-12-15-New prize on causality in statistics education

8 0.75581825 550 andrew gelman stats-2011-02-02-An IV won’t save your life if the line is tangled

9 0.74814147 1645 andrew gelman stats-2012-12-31-Statistical modeling, causal inference, and social science

10 0.7458865 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies

11 0.72723073 1492 andrew gelman stats-2012-09-11-Using the “instrumental variables” or “potential outcomes” approach to clarify causal thinking

12 0.72425151 32 andrew gelman stats-2010-05-14-Causal inference in economics

13 0.72338229 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

14 0.71793622 1555 andrew gelman stats-2012-10-31-Social scientists who use medical analogies to explain causal inference are, I think, implicitly trying to borrow some of the scientific and cultural authority of that field for our own purposes

15 0.70567799 2207 andrew gelman stats-2014-02-11-My talks in Bristol this Wed and London this Thurs

16 0.70259362 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

17 0.69267792 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

18 0.69136411 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

19 0.68450332 120 andrew gelman stats-2010-06-30-You can’t put Pandora back in the box

20 0.67573035 789 andrew gelman stats-2011-07-07-Descriptive statistics, causal inference, and story time
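The lsi (and lda) similarity lists are computed the same way as the tfidf one, except the cosine is taken in topic space rather than term space. A minimal gensim-style sketch, reusing the toy LSI model from above; corpus and parameters remain illustrative assumptions.

from gensim import corpora, models, similarities

texts = [
    ["randomized", "experimentation", "gold", "standard"],
    ["observational", "data", "social", "science"],
    ["causal", "inference", "applied", "statistics"],
]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]
lsi = models.LsiModel(bow, id2word=dictionary, num_topics=2)

# Cosine similarity in the low-dimensional topic space.
index = similarities.MatrixSimilarity(lsi[bow])
sims = index[lsi[bow[0]]]   # similarity of document 0 to every document
print(sorted(enumerate(sims), key=lambda x: -x[1]))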


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(6, 0.013), (15, 0.038), (16, 0.082), (21, 0.048), (24, 0.142), (31, 0.011), (33, 0.05), (48, 0.014), (49, 0.014), (59, 0.011), (73, 0.01), (78, 0.01), (79, 0.012), (82, 0.011), (84, 0.039), (86, 0.03), (95, 0.054), (99, 0.328)]
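Unlike the LSI vector, the LDA vector above is sparse and non-negative: LDA gives each document a probability distribution over topics, and topics below a probability threshold are simply omitted. A minimal gensim-style sketch under the same toy-corpus assumptions.

from gensim import corpora, models

texts = [
    ["randomized", "experimentation", "gold", "standard"],
    ["observational", "data", "social", "science"],
    ["causal", "inference", "applied", "statistics"],
]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=4, random_state=0)

# Weights are probabilities (non-negative, summing to 1);
# minimum_probability drops near-zero topics, giving a sparse list.
print(lda.get_document_topics(bow[0], minimum_probability=0.01))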

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98219526 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

Introduction: same as the introduction at the top of this page.

2 0.9745211 670 andrew gelman stats-2011-04-20-Attractive but hard-to-read graph could be made much much better

Introduction: Matthew Yglesias shares this graph from the Economist: I hate this graph. OK, sure, I don’t hate hate hate hate it: it’s not a 3-d exploding pie chart or anything. It’s not misleading, it’s just extremely difficult to read. Basically, you have to go back and forth between the colors and the labels and the countries and read it like a table. OK, so here’s the table:

Average Hours Per Day Spent in Each Activity

Country   Work,   Unpaid   Eating,    Personal   Leisure   Other
          study   work     sleeping   care
France    4       3        11         1          2         2
Germany   4       3        10         1          3         3
Japan     6       2        10         1          2         2
Britain   4       3        10         1          3         3
USA       5       3        10         1          3         2
Turkey    4       3        11         1          3         2

Hmm, that didn’t work too well. Let’s try subtracting the average from each column (for these six countries,

3 0.97323406 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

Introduction: Ole Rogeberg points me to a discussion of a discussion of a paper: Did pre-release of my [Rogeberg's] PNAS paper on methodological problems with Meier et al’s 2012 paper on cannabis and IQ reduce the chances that it will have its intended effect? In my case, serious methodological issues related to causal inference from non-random observational data became framed as a conflict over conclusions, forcing the original research team to respond rapidly and insufficiently to my concerns, and prompting them to defend their conclusions and original paper in a way that makes a later, more comprehensive reanalysis of their data less likely. This fits with a recurring theme on this blog: the defensiveness of researchers who don’t want to admit they were wrong. Setting aside cases of outright fraud and plagiarism, I think the worst case remains that of psychologists Neil Anderson and Deniz Ones, who denied any problems even in the presence of a smoking gun of a graph revealing their data

4 0.97097224 284 andrew gelman stats-2010-09-18-Continuing efforts to justify false “death panels” claim

Introduction: Brendan Nyhan gives the story . Here’s Sarah Palin’s statement introducing the now-notorious phrase: The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s “death panel” so his bureaucrats can decide, based on a subjective judgment of their “level of productivity in society,” whether they are worthy of health care. And now Brendan: Palin’s language suggests that a “death panel” would determine whether individual patients receive care based on their “level of productivity in society.” This was — and remains — false. Denying coverage at a system level for specific treatments or drugs is not equivalent to “decid[ing], based on a subjective judgment of their ‘level of productivity in society.’” Seems like an open-and-shut case to me. The “bureaucrats” (I think Palin is referring to “government employees”) are making decisions based on studies of the drug’s effectiveness: An FDA advisory committee

5 0.97091776 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed.

6 0.9707247 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

7 0.97040439 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?

8 0.96948314 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

9 0.96922612 186 andrew gelman stats-2010-08-04-“To find out what happens when you change something, it is necessary to change it.”

10 0.96808159 1070 andrew gelman stats-2011-12-19-The scope for snooping

11 0.96757972 2341 andrew gelman stats-2014-05-20-plus ça change, plus c’est la même chose

12 0.96711856 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

13 0.96695411 2154 andrew gelman stats-2013-12-30-Bill Gates’s favorite graph of the year

14 0.96638465 277 andrew gelman stats-2010-09-14-In an introductory course, when does learning occur?

15 0.96613574 532 andrew gelman stats-2011-01-23-My Wall Street Journal story

16 0.96595919 262 andrew gelman stats-2010-09-08-Here’s how rumors get started: Lineplots, dotplots, and nonfunctional modernist architecture

17 0.96563661 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

18 0.96543062 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

19 0.96531749 2137 andrew gelman stats-2013-12-17-Replication backlash

20 0.96520859 1117 andrew gelman stats-2012-01-13-What are the important issues in ethics and statistics? I’m looking for your input!