knowledge-graph by maker-knowledge-mining

1605 andrew gelman stats-2012-12-04-Write This Book


meta info for this blog

Source: html

Introduction: This post is by Phil Price. I’ve been preparing a review of a new statistics textbook aimed at students and practitioners in the “physical sciences,” as distinct from the social sciences and also distinct from people who intend to take more statistics courses. I figured that since it’s been years since I looked at an intro stats textbook, I should look at a few others and see how they differ from this one, so in addition to the book I’m reviewing I’ve looked at some other textbooks aimed at similar audiences: Milton and Arnold; Hines, Montgomery, Goldsman, and Borror; and a few others. I also looked at the table of contents of several more. There is a lot of overlap in the coverage of these books — they all have discussions of common discrete and continuous distributions, joint distributions, descriptive statistics, parameter estimation, hypothesis testing, linear regression, ANOVA, factorial experimental design, and a few other topics. I can see how, from a statisti


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I’ve been preparing a review of a new statistics textbook aimed at students and practitioners in the “physical sciences,” as distinct from the social sciences and also distinct from people who intend to take more statistics courses. [sent-2, score-0.599]

2 A partial list of topics I’ve worked on includes the geographical and statistical distribution of indoor radon in the U.S. [sent-9, score-0.344]

3 (keep reading below the fold) To go ahead and shoot the largest fish in the barrel, in most of these books there is far too much discussion of hypothesis tests and far too little discussion of what people ought to do instead of hypothesis testing. [sent-13, score-0.822]

4 First, no matter what caveats are put in the books, many people will incorrectly interpret “we cannot reject the hypothesis that A=B” as meaning “we can safely assume that A=B”; after all, if that’s not the point of the test then what IS the point of the test? [sent-15, score-0.501] (see sketch 1 after this list)

5 Second — and this might actually be the more important point — people who know of the existence of hypothesis tests often assume that that’s what they want, which prevents them from pondering what they really _do_ want. [sent-16, score-0.484]

6 To give one example out of many in my own experience: I have worked with a group that is trying to provide small cookstoves to desperately poor people, mostly in Africa, to decrease the amount of wood they need to gather in order to cook their food. [sent-17, score-0.512]

7 The group had cooked a standard meal several times, using the type of wood and the type of pan available to the people of north Sudan, and using cookstoves of different designs, and they wanted to see which cookstove required the least wood on average. [sent-18, score-0.983]

8 They approached me with the request that I help them do a hypothesis test to see whether all of the stoves are equivalent. [sent-19, score-0.577]

9 This is an example of a place where a hypothesis test is not what you want: the stoves couldn’t possibly perform exactly equally, so all the test will tell you is whether you have enough statistical power to convincingly demonstrate the difference. [sent-20, score-0.88]

10 In any case there is usually little point in testing a hypothesis (in this case: that all of the stoves are exactly equal) that you know to be false. [sent-22, score-0.489] (see sketch 2 after this list)

11 These textbooks (and courses) should eliminate the chapter on hypothesis testing, replacing it with a one-page description of what hypothesis testing is and why it’s less useful than it seems. [sent-24, score-0.755]

12 I’ll admit that hypothesis tests have their place and that it is a pity if students only get a one-page discussion of them, but something has to give. [sent-25, score-0.346]

13 Once the hypothesis test chapter is gone, it can be replaced by something useful. [sent-26, score-0.496]

14 I’ll again illustrate with a real example: an experiment to manipulate ventilation rates and to quantify the effect of ventilation rate on the performance of workers in a “call center.” [sent-37, score-0.808]

15 There are hundreds of workers in the building, and the experiment involved varying the amount of outdoor air they got, ranging from the typical amount to several times that much. [sent-39, score-0.492]

16 Partway through the study, the business introduced a new computer system, which led to an immediate drop in performance that was much larger than the effect of interest could possibly be, with performance gradually improving again as employees learned the system. [sent-43, score-0.387]

17 Additionally, several times large numbers of new workers were added, and again there was a short-term effect on productivity that was large compared to the effect of interest (and the available data recorded only the average call processing times, not the times for individual workers). [sent-44, score-0.766]

18 The easiest way to answer this is often to sample from the input distributions and generate the resulting output value; repeat as needed to get a statistical distribution of outputs. [sent-53, score-0.495] (see sketch 3 after this list)

19 Overall, I don’t much like the statistics book that I’m reviewing, but I can’t say it’s any worse than the typical stats book aimed at physical scientists and engineers who do not plan to take further statistics courses. [sent-59, score-0.435]

20 Are there any books out there that cover exploratory data analysis (including exploratory graphics), and dealing with common problems of real-world data, and other things that I think should be in these books? [sent-61, score-0.308]
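Sketch 1, on sentence 4: a minimal simulation of why “we cannot reject the hypothesis that A=B” does not license “we can safely assume that A=B.” This is an editorial illustration, not from the post; the effect size, noise level, and sample size are invented. With a real difference but a small sample, the test fails to reject most of the time.

import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
n, n_sims = 10, 10_000
fail_to_reject = 0
for _ in range(n_sims):
    a = rng.normal(loc=0.0, scale=1.0, size=n)  # group A
    b = rng.normal(loc=0.3, scale=1.0, size=n)  # group B: truly different mean
    _, p = stats.ttest_ind(a, b)                # two-sample t-test of A = B
    fail_to_reject += p > 0.05
# With these (invented) numbers the test fails to reject roughly 90% of the
# time, even though A and B differ by construction.
print(f"failed to reject in {fail_to_reject / n_sims:.0%} of simulations")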
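Sketch 2, on sentences 8–10: what the stove group arguably wants instead of a test of exact equality is an estimate of each stove’s mean wood use, and of the difference between stoves, with uncertainty attached. A minimal bootstrap sketch; the stove names and measurements below are made up for illustration.

import numpy as np
rng = np.random.default_rng(1)
# Hypothetical wood used (kg) per standard meal, a few trials per stove design
wood = {
    "stove_A": np.array([1.9, 2.1, 2.0, 2.3, 1.8]),
    "stove_B": np.array([1.6, 1.7, 1.9, 1.5, 1.8]),
}
def boot_mean(x, n_boot=10_000):
    # Bootstrap distribution of the sample mean: resample with replacement
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return x[idx].mean(axis=1)
for name, x in wood.items():
    bm = boot_mean(x)
    lo, hi = np.percentile(bm, [2.5, 97.5])
    print(f"{name}: mean {x.mean():.2f} kg, 95% interval ({lo:.2f}, {hi:.2f})")
# The quantity of direct interest: how much wood does B save relative to A?
diff = boot_mean(wood["stove_A"]) - boot_mean(wood["stove_B"])
lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"A - B: {np.mean(diff):.2f} kg per meal, 95% interval ({lo:.2f}, {hi:.2f})")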
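Sketch 3, on sentence 18: the Monte Carlo propagation it describes, in a few lines. Draw from each input’s distribution, push the draws through the analytical model, and summarize the distribution of outputs. The model and the input distributions here are invented placeholders, not anything from the post.

import numpy as np
rng = np.random.default_rng(2)
n_draws = 100_000
# Hypothetical inputs with stated uncertainties
flow = rng.normal(loc=10.0, scale=1.0, size=n_draws)       # e.g. airflow, m^3/h
source = rng.lognormal(mean=0.0, sigma=0.3, size=n_draws)  # e.g. emission rate
# The (placeholder) analytical model relating inputs to the output
concentration = source / flow
lo, med, hi = np.percentile(concentration, [2.5, 50, 97.5])
print(f"output median {med:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")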


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('hypothesis', 0.264), ('ventilation', 0.233), ('wood', 0.216), ('test', 0.173), ('cookstoves', 0.154), ('topics', 0.151), ('books', 0.148), ('workers', 0.146), ('stoves', 0.14), ('stove', 0.132), ('plots', 0.129), ('performance', 0.127), ('stats', 0.108), ('distributions', 0.104), ('several', 0.102), ('aimed', 0.099), ('amount', 0.085), ('testing', 0.085), ('large', 0.084), ('textbooks', 0.083), ('textbook', 0.082), ('tests', 0.082), ('exploratory', 0.08), ('type', 0.079), ('statistics', 0.078), ('looked', 0.076), ('clean', 0.076), ('times', 0.074), ('often', 0.074), ('standard', 0.073), ('physical', 0.072), ('statistical', 0.072), ('plotted', 0.071), ('analytical', 0.071), ('effect', 0.069), ('sciences', 0.068), ('buildings', 0.068), ('needed', 0.066), ('inputs', 0.065), ('distinct', 0.065), ('distribution', 0.064), ('interest', 0.064), ('people', 0.064), ('chapter', 0.059), ('reviewing', 0.059), ('output', 0.058), ('power', 0.058), ('intro', 0.058), ('input', 0.057), ('worked', 0.057)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000005 1605 andrew gelman stats-2012-12-04-Write This Book


2 0.2145519 423 andrew gelman stats-2010-11-20-How to schedule projects in an introductory statistics course?

Introduction: John Haubrick writes: Next semester I want to center my statistics class around independent projects that they will present at the end of the semester. My question is, by centering around a project and teaching for the different parts that they need at the time, should topics such as hypothesis testing be moved toward the beginning of the course? Or should I only discuss setting up a research hypothesis and discuss the actual testing later after they have the data? My reply: I’m not sure. There always is a difficulty of what can be covered in a project. My quick thought is that a project will perhaps work better if it is focused on data collection or exploratory data analysis rather than on estimation and hypothesis testing, which are topics that get covered pretty well in the course as a whole.

3 0.20791113 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?

Introduction: Someone writes: Suppose I have two groups of people, A and B, which differ on some characteristic of interest to me; and for each person I measure a single real-valued quantity X. I have a theory that group A has a higher mean value of X than group B. I test this theory by using a t-test. Am I entitled to use a *one-tailed* t-test? Or should I use a *two-tailed* one (thereby giving a p-value that is twice as large)? I know you will probably answer: Forget the t-test; you should use Bayesian methods instead. But what is the standard frequentist answer to this question? My reply: The quick answer here is that different people will do different things here. I would say the 2-tailed p-value is more standard but some people will insist on the one-tailed version, and it’s hard to make a big stand on this one, given all the other problems with p-values in practice: http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf http://www.stat.columbia.edu/~gelm

4 0.18909465 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

Introduction: Robert Bloomfield writes: Most of the people in my field (accounting, which is basically applied economics and finance, leavened with psychology and organizational behavior) use ‘positive research methods’, which are typically described as coming to the data with a predefined theory, and using hypothesis testing to accept or reject the theory’s predictions. But a substantial minority use ‘interpretive research methods’ (sometimes called qualitative methods, for those that call positive research ‘quantitative’). No one seems entirely happy with the definition of this method, but I’ve found it useful to think of it as an attempt to see the world through the eyes of your subjects, much as Jane Goodall lived with gorillas and tried to see the world through their eyes. Interpretive researchers often criticize positive researchers by noting that the latter don’t make the best use of their data, because they come to the data with a predetermined theory, and only test a narrow set of h

5 0.18250459 1289 andrew gelman stats-2012-04-29-We go to war with the data we have, not the data we want

Introduction: This post is by Phil. Psychologists perform experiments on Canadian undergraduate psychology students and draw conclusions that (they believe) apply to humans in general; they publish in Science. A drug company decides to embark on additional trials that will cost tens of millions of dollars based on the results of a careful double-blind study….whose patients are all volunteers from two hospitals. A movie studio holds 9 screenings of a new movie for volunteer viewers and, based on their survey responses, decides to spend another $8 million to re-shoot the ending. A researcher interested in the effect of ventilation on worker performance conducts a months-long study in which ventilation levels are varied and worker performance is monitored…in a single building. In almost all fields of research, most studies are based on convenience samples, or on random samples from a larger population that is itself a convenience sample. The paragraph above gives just a few examples. The benefit

6 0.17522375 1024 andrew gelman stats-2011-11-23-Of hypothesis tests and Unitarians

7 0.16410726 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

8 0.1582206 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

9 0.15813173 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

10 0.15812622 1582 andrew gelman stats-2012-11-18-How to teach methods we don’t like?

11 0.15716586 1883 andrew gelman stats-2013-06-04-Interrogating p-values

12 0.15329534 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

13 0.15322305 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

14 0.14708316 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

15 0.14553943 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

16 0.14043309 870 andrew gelman stats-2011-08-25-Why it doesn’t make sense in general to form confidence intervals by inverting hypothesis tests

17 0.14034542 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

18 0.13837452 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

19 0.13712157 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

20 0.13528271 972 andrew gelman stats-2011-10-25-How do you interpret standard errors from a regression fit to the entire population?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.338), (1, 0.027), (2, -0.022), (3, -0.066), (4, 0.059), (5, -0.002), (6, -0.013), (7, 0.092), (8, 0.029), (9, -0.065), (10, -0.077), (11, 0.007), (12, 0.044), (13, -0.144), (14, 0.011), (15, -0.05), (16, -0.041), (17, -0.054), (18, 0.051), (19, -0.071), (20, 0.055), (21, 0.001), (22, 0.003), (23, 0.036), (24, -0.037), (25, -0.016), (26, 0.023), (27, -0.018), (28, 0.054), (29, 0.021), (30, 0.02), (31, -0.017), (32, 0.042), (33, 0.021), (34, -0.034), (35, -0.007), (36, 0.049), (37, -0.005), (38, 0.05), (39, 0.014), (40, -0.015), (41, -0.061), (42, -0.029), (43, 0.026), (44, -0.041), (45, 0.036), (46, -0.005), (47, -0.067), (48, 0.039), (49, -0.014)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97245461 1605 andrew gelman stats-2012-12-04-Write This Book


2 0.87949467 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?


3 0.83790517 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models


4 0.822182 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

Introduction: Chris Chambers and I had an enlightening discussion the other day at the blog of Rolf Zwaan, regarding the Garden of Forking Paths (go here and scroll down through the comments). Chris sent me the following note: I’m writing a book at the moment about reforming practices in psychological research (focusing on various bad practices such as p-hacking, HARKing, low statistical power, publication bias, lack of data sharing etc. – and posing solutions such as pre-registration, Bayesian hypothesis testing, mandatory data archiving etc.) and I am arriving at a rather unsettling conclusion: that null hypothesis significance testing (NHST) simply isn’t valid for observational research. If this is true then most of the psychological literature is statistically flawed. I was wondering what your thoughts were on this, both from a statistical point of view and from your experience working in an observational field. We all know about the dangers of researcher degrees of freedom. We also know

5 0.78644061 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Introduction: Masanao sends this one in, under the heading, “another incident of misunderstood p-value”: Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students. Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you’re testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke – but still, there is always a certain probability that it was. Statistical significance testing gives you an idea of what this probability is. In science we’re always testing hypotheses. We never conduct a study to ‘see what happens’, because there’s always at least one way to make any useless set of data look important. We take

6 0.78366536 212 andrew gelman stats-2010-08-17-Futures contracts, Granger causality, and my preference for estimation to testing

7 0.78259259 351 andrew gelman stats-2010-10-18-“I was finding the test so irritating and boring that I just started to click through as fast as I could”

8 0.77642214 1883 andrew gelman stats-2013-06-04-Interrogating p-values

9 0.77268851 1195 andrew gelman stats-2012-03-04-Multiple comparisons dispute in the tabloids

10 0.77134347 2127 andrew gelman stats-2013-12-08-The never-ending (and often productive) race between theory and practice

11 0.76986563 1776 andrew gelman stats-2013-03-25-The harm done by tests of significance

12 0.76971513 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

13 0.7670424 423 andrew gelman stats-2010-11-20-How to schedule projects in an introductory statistics course?

14 0.76590979 791 andrew gelman stats-2011-07-08-Censoring on one end, “outliers” on the other, what can we do with the middle?

15 0.7611658 1024 andrew gelman stats-2011-11-23-Of hypothesis tests and Unitarians

16 0.76116121 1702 andrew gelman stats-2013-02-01-Don’t let your standard errors drive your research agenda

17 0.75927621 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

18 0.75746924 360 andrew gelman stats-2010-10-21-Forensic bioinformatics, or, Don’t believe everything you read in the (scientific) papers

19 0.7538439 938 andrew gelman stats-2011-10-03-Comparing prediction errors

20 0.75272202 401 andrew gelman stats-2010-11-08-Silly old chi-square!


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.017), (4, 0.063), (15, 0.03), (16, 0.061), (21, 0.032), (24, 0.203), (42, 0.014), (45, 0.012), (63, 0.015), (73, 0.012), (77, 0.013), (84, 0.01), (86, 0.036), (96, 0.022), (99, 0.288)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97485328 1240 andrew gelman stats-2012-04-02-Blogads update

Introduction: A few months ago I reported on someone who wanted to insert text links into the blog. I asked her how much they would pay and got no answer. Yesterday, though, I received this reply: Hello Andrew, I am sorry for the delay in getting back to you. I’d like to make a proposal for your site. Please refer below. We would like to place a simple text link ad on page http://andrewgelman.com/2011/07/super_sam_fuld/ to link to *** with the key phrase ***. We will incorporate the key phrase into a sentence so it would read well. Rest assured it won’t sound obnoxious or advertorial. We will then process the final text link code as soon as you agree to our proposal. We can offer you $200 for this with the assumption that you will keep the link “live” on that page for 12 months or longer if you prefer. Please get back to us with a quick reply on your thoughts on this and include your Paypal ID for payment process. Hoping for a positive response from you. I wrote back: Hi,

same-blog 2 0.9741919 1605 andrew gelman stats-2012-12-04-Write This Book


3 0.97306085 1792 andrew gelman stats-2013-04-07-X on JLP

Introduction: Christian Robert writes on the Jeffreys-Lindley paradox. I have nothing to add to this beyond my recent comments: To me, the Lindley paradox falls apart because of its noninformative prior distribution on the parameter of interest. If you really think there’s a high probability the parameter is nearly exactly zero, I don’t see the point of the model saying that you have no prior information at all on the parameter. In short: my criticism of so-called Bayesian hypothesis testing is that it’s insufficiently Bayesian. To clarify, I’m speaking of all the examples I’ve ever worked on in social and environmental science, where in some settings I can imagine a parameter being very close to zero and in other settings I can imagine a parameter taking on just about any value in a wide range, but where I’ve never seen an example where a parameter could be either right at zero or taking on any possible value. But such examples might occur in areas of application that I haven’t worked on.

4 0.97288024 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

Introduction: From my new article in the journal Epidemiology: Sander Greenland and Charles Poole accept that P values are here to stay but recognize that some of their most common interpretations have problems. The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings (as discussed, for example, by Greenland in 2011). The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations). A Bayesian interpretation based on a spike-and-slab model makes little sense in applied contexts in epidemiology, political science, and other fields in which true effects are typically nonzero and bounded (thus violating both the “spike” and the “slab” parts of the model). I find Greenland and Poole’s perspective t

5 0.97279179 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

Introduction: We had some questions on the Stan list regarding identification. The topic arose because people were fitting models with improper posterior distributions, the kind of model where there’s a ridge in the likelihood and the parameters are not otherwise constrained. I tried to help by writing something on Bayesian identifiability for the Stan list. Then Ben Goodrich came along and cleaned up what I wrote. I think this might be of interest to many of you so I’ll repeat the discussion here. Here’s what I wrote: Identification is actually a tricky concept and is not so clearly defined. In the broadest sense, a Bayesian model is identified if the posterior distribution is proper. Then one can do Bayesian inference and that’s that. No need to require a finite variance or even a finite mean, all that’s needed is a finite integral of the probability distribution. That said, there are some reasons why a stronger definition can be useful: 1. Weak identification. Suppose that, wit

6 0.97266376 807 andrew gelman stats-2011-07-17-Macro causality

7 0.97255129 1801 andrew gelman stats-2013-04-13-Can you write a program to determine the causal order?

8 0.97216129 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

9 0.97201514 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

10 0.971946 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

11 0.97190028 1941 andrew gelman stats-2013-07-16-Priors

12 0.97183472 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

13 0.97152674 899 andrew gelman stats-2011-09-10-The statistical significance filter

14 0.97098517 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

15 0.97087693 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

16 0.97063494 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

17 0.97057509 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

18 0.9701252 970 andrew gelman stats-2011-10-24-Bell Labs

19 0.97011971 1883 andrew gelman stats-2013-06-04-Interrogating p-values

20 0.97000027 351 andrew gelman stats-2010-10-18-“I was finding the test so irritating and boring that I just started to click through as fast as I could”