andrew_gelman_stats-2014-2235 knowledge-graph by maker-knowledge-mining

2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?


meta info for this blog

Source: html

Introduction: I had a brief email exchange with Jeff Leek regarding our recent discussions of replication, criticism, and the self-correcting process of science. Jeff writes: (1) I can see the problem with serious, evidence-based criticisms not being published in the same journal (and linked to) studies that are shown to be incorrect. I have been mostly seeing these sorts of things show up in blogs. But I’m not sure that is a bad thing. I think people read blogs more than they read the literature. I wonder if this means that blogs will eventually be a sort of “shadow literature”? (2) I think there is a ton of bad literature out there, just like there is a ton of bad stuff on Google. If we focus too much on the bad stuff we will be paralyzed. I still manage to find good papers despite all the bad papers. (3) I think one positive solution to this problem is to incentivize/publish referee reports and give people credit for a good referee report just like they get credit for a good paper. T


Summary: the most important sentences, as generated by the tfidf model

sentIndex sentText sentNum sentScore

1 (2) I think there is a ton of bad literature out there, just like there is a ton of bad stuff on Google. [sent-7, score-0.955]

2 If we focus too much on the bad stuff we will be paralyzed. [sent-8, score-0.367]

3 I still manage to find good papers despite all the bad papers. [sent-9, score-0.279]

4 (3) I think one positive solution to this problem is to incentivize/publish referee reports and give people credit for a good referee report just like they get credit for a good paper. [sent-10, score-0.542]

5 A key decision point is what to do when we encounter bad research that gets publicity. [sent-12, score-0.357]

6 Should we hype it up (the “Psychological Science” strategy), slam it (which is often what I do), ignore it (Jeff’s suggestion), or do further research to contextualize it (as Dan Kahan sometimes does)? [sent-13, score-0.415]

7 So let’s talk about the other two options: slamming bad research or ignoring it. [sent-16, score-0.61]

8 So maybe ignoring the bad stuff is the better option. [sent-18, score-0.537]

9 What if, every time someone pointed me to a bad paper, I were to just ignore it and instead post on something good? [sent-20, score-0.324]

10 The good news blog, just like the happy newspaper that only prints stories of firemen who rescue cats stuck in trees and cures for cancer. [sent-22, score-0.255]

11 Still and all, maybe it would be best for me, Ivan Oransky, Uri Simonsohn, and all the rest of us to just turn the other cheek, ignore the bad stuff and just resolutely focus on good news. [sent-29, score-0.695]

12 Why, then, do I spend time criticizing research mistakes and misconduct, given that it could even be counterproductive by drawing attention to sorry efforts that otherwise might be more quickly forgotten? [sent-32, score-0.321]

13 Beyond this, exploring errors can be a useful research direction. [sent-35, score-0.308]

14 For example, our criticism in 2007 of the notorious beauty-and-sex-ratio study led in 2009 to a more general exploration of the issue of statistical significance, which in turn led to a currently-in-the-revise-and-resubmit-stage article on a new approach to design analysis. [sent-36, score-0.372]

15 Similarly, the anti-plagiarism rants of Thomas Basbøll and myself led to a paper on the connection between plagiarism and ideas of statistical evidence, and another paper on storytelling as model checking. [sent-37, score-0.290]

16 So, for me, criticism can open doors to new research. [sent-38, score-0.254]

17 Recall that the paper was published in 2009, its errors came to light in 2013, but as early as 2010, Dean Baker was publicly asking for the data. [sent-45, score-0.308]

18 Or maybe I should chuck it all and do direct services with poor people, or get a million-dollar job, make a ton of money, and then give it all away. [sent-67, score-0.274]

19 “Bumblers and pointers” A few months ago after I published an article criticizing some low-quality published research, I received the following email: There are two kinds of people in science: bumblers and pointers. [sent-71, score-0.703]

20 I like to do both, indeed at the same time: When I do research (“bumble”), I aim criticism at myself, poking holes in everything I do (“pointing”). [sent-78, score-0.385]
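The per-sentence scores above suggest a standard extractive tf-idf summarizer: score each sentence by the tf-idf weight of its words, then keep the top-ranked ones. A minimal sketch of that idea — the actual tokenization, normalization, and corpus used to produce the scores shown here are not specified, so the function and the example sentences are illustrative only:

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the summed tf-idf weight of its words,
    length-normalized (a common extractive-summary heuristic)."""
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = sum(tf[w] * math.log(n / df[w]) for w in tf)
        scores.append(score / max(len(toks), 1))  # normalize by length
    return scores

sents = [
    "there is a ton of bad literature out there",
    "I still manage to find good papers despite all the bad papers",
    "criticism can open doors to new research",
]
print(tfidf_sentence_scores(sents))
```

Words that appear in every sentence get an idf of zero, so distinctive vocabulary dominates the ranking, which is why sentences with unusual words ("bumblers", "cures") score high in the list above.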


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bad', 0.214), ('bumblers', 0.198), ('ton', 0.187), ('criticism', 0.176), ('stuff', 0.153), ('research', 0.143), ('published', 0.133), ('cures', 0.124), ('stereotypes', 0.115), ('pointers', 0.115), ('ignore', 0.11), ('mistakes', 0.103), ('jeff', 0.102), ('led', 0.098), ('paper', 0.096), ('slamming', 0.096), ('hype', 0.096), ('referee', 0.094), ('newspapers', 0.091), ('people', 0.09), ('maybe', 0.087), ('exploring', 0.086), ('ignoring', 0.083), ('ridiculous', 0.081), ('email', 0.081), ('sad', 0.08), ('errors', 0.079), ('open', 0.078), ('cancer', 0.076), ('criticizing', 0.075), ('two', 0.074), ('blogs', 0.074), ('pointing', 0.073), ('items', 0.072), ('criticisms', 0.071), ('choices', 0.068), ('credit', 0.067), ('truth', 0.067), ('scientific', 0.066), ('cheek', 0.066), ('poking', 0.066), ('resolutely', 0.066), ('prints', 0.066), ('kristina', 0.066), ('vladas', 0.066), ('sidelines', 0.066), ('ivan', 0.066), ('oransky', 0.066), ('contextualize', 0.066), ('good', 0.065)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?


2 0.22932467 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

3 0.21356823 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

Introduction: I received two emails yesterday on related topics. First, Stephen Olivier pointed me to this post by Daniel Lakens, who wrote the following open call to statisticians: You would think that if you are passionate about statistics, then you want to help people to calculate them correctly in any way you can. . . . you’d think some statisticians would be interested in helping a poor mathematically challenged psychologist out by offering some practical advice. I’m the right person to ask this question, since I actually have written a lot of material that helps psychologists (and others) with their data analysis. But there clearly are communication difficulties, in that my work and that of other statisticians hasn’t reached Lakens. Sometimes the contributions of statisticians are made indirectly. For example, I wrote Bayesian Data Analysis, and then Kruschke wrote Doing Bayesian Data Analysis. Our statistics book made it possible for Kruschke to write his excellent book for psycholo

4 0.20882131 1844 andrew gelman stats-2013-05-06-Against optimism about social science

Introduction: Social science research has been getting pretty bad press recently, what with the Excel buccaneers who didn’t know how to handle data with different numbers of observations per country, and the psychologist who published dozens of papers based on fabricated data, and the Evilicious guy who wouldn’t let people review his data tapes, etc etc. And that’s not even considering Dr. Anil Potti. On the other hand, the revelation of all these problems can be taken as evidence that things are getting better. Psychology researcher Gary Marcus writes : There is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. . . . Now, happily, the scientific culture has changed. . . . The Reproducibility Project, from the Center for Open Science is now underway . . . And sociologist Fabio Rojas

5 0.20609057 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

6 0.20087627 2279 andrew gelman stats-2014-04-02-Am I too negative?

7 0.19243652 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

8 0.18136103 2232 andrew gelman stats-2014-03-03-What is the appropriate time scale for blogging—the day or the week?

9 0.18119757 2236 andrew gelman stats-2014-03-07-Selection bias in the reporting of shaky research

10 0.17998052 2245 andrew gelman stats-2014-03-12-More on publishing in journals

11 0.17971966 1588 andrew gelman stats-2012-11-23-No one knows what it’s like to be the bad man

12 0.17723981 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

13 0.17550536 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

14 0.16746955 2326 andrew gelman stats-2014-05-08-Discussion with Steven Pinker on research that is attached to data that are so noisy as to be essentially uninformative

15 0.16598275 6 andrew gelman stats-2010-04-27-Jelte Wicherts lays down the stats on IQ

16 0.1639677 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

17 0.15404686 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

18 0.15362373 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

19 0.15119417 1733 andrew gelman stats-2013-02-22-Krugman sets the bar too high

20 0.15005478 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?
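The simValue column above is presumably a cosine similarity between tf-idf word-weight vectors like the one printed earlier. A sketch of that computation on sparse word→weight dicts — the weights below are made up for illustration, not taken from the model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse word-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy tf-idf vectors (weights invented for the example):
blog_a = {"bad": 0.214, "criticism": 0.176, "referee": 0.094}
blog_b = {"criticism": 0.15, "replication": 0.2, "referee": 0.1}

# Self-similarity is 1 up to floating point, which is why the
# same-blog entry above can show a simValue like 1.0000001.
print(round(cosine(blog_a, blog_a), 4))
print(round(cosine(blog_a, blog_b), 4))
```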


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.331), (1, -0.131), (2, -0.07), (3, -0.091), (4, -0.056), (5, -0.057), (6, 0.029), (7, -0.106), (8, -0.019), (9, -0.034), (10, 0.046), (11, 0.005), (12, -0.037), (13, 0.021), (14, -0.029), (15, -0.019), (16, -0.011), (17, 0.009), (18, -0.006), (19, 0.01), (20, -0.003), (21, -0.078), (22, -0.011), (23, -0.002), (24, -0.049), (25, -0.006), (26, -0.019), (27, -0.011), (28, -0.031), (29, 0.04), (30, 0.002), (31, -0.004), (32, -0.002), (33, -0.034), (34, -0.003), (35, 0.025), (36, 0.029), (37, 0.001), (38, -0.052), (39, -0.047), (40, 0.038), (41, -0.0), (42, -0.015), (43, -0.104), (44, 0.083), (45, 0.013), (46, 0.044), (47, 0.013), (48, 0.012), (49, -0.051)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98045319 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?


2 0.87459528 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

3 0.86884391 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

Introduction: Andrew Anthony tells the excellent story of how Nick Brown, Alan Sokal, and Harris Friedman shot down some particularly silly work in psychology. (“According to the graph, it all came down to a specific ratio of positive emotions to negative emotions. If your ratio was greater than 2.9013 positive emotions to 1 negative emotion you were flourishing in life. If your ratio was less than that number you were languishing.” And, yes, the work they were shooting down really is that bad.) If you want to see what the fuss is about, just google “2.9013.” Here’s an example (from 2012) of an uncritical reporting of the claim, here’s another one from 2010, here’s one from 2011 . . . well, you get the idea. And here’s a quick summary posted by Rolf Zwaan after Brown et al. came out with their paper. I know Sokal and Brown and so this story was not news to me. I didn’t post anything about it on this blog because it seemed like it was getting enough coverage elsewhere. I think Ni

4 0.86581045 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

5 0.8558563 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

Introduction: Following up on my entry the other day on post-publication peer review, Dan Kahan writes: You give me credit, I think, for merely participating in what I think is a systemic effect in the practice of empirical inquiry that conduces to quality control & hence the advance of knowledge by such means (likely the title conveys that!). I’d say: (a) by far the greatest weakness in the “publication regime” in social sciences today is the systematic disregard for basic principles of valid causal inference, a deficiency either in comprehension or craft that is at the root of scholars’ resort to (and journals’ tolerance for) invalid samples, the employment of designs that don’t generate observations more consistent with a hypothesis than with myriad rival ones, and the resort to deficient statistical modes of analysis that treat detection of “statististically significant difference” rather than “practical corroboration of practical meaningful effect” as the goal of such analysis (especial

6 0.84523916 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

7 0.84127808 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

8 0.84044927 2137 andrew gelman stats-2013-12-17-Replication backlash

9 0.84019828 675 andrew gelman stats-2011-04-22-Arrow’s other theorem

10 0.83775467 1844 andrew gelman stats-2013-05-06-Against optimism about social science

11 0.83622324 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

12 0.83388287 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?

13 0.83132684 2358 andrew gelman stats-2014-06-03-Did you buy laundry detergent on their most recent trip to the store? Also comments on scientific publication and yet another suggestion to do a study that allows within-person comparisons

14 0.82288241 2361 andrew gelman stats-2014-06-06-Hurricanes vs. Himmicanes

15 0.82160103 2245 andrew gelman stats-2014-03-12-More on publishing in journals

16 0.81550425 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

17 0.81506437 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

18 0.81322169 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

19 0.81294167 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

20 0.81224704 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.017), (2, 0.011), (15, 0.033), (16, 0.116), (18, 0.013), (21, 0.028), (24, 0.136), (31, 0.012), (36, 0.017), (42, 0.02), (43, 0.054), (47, 0.023), (52, 0.01), (63, 0.026), (65, 0.022), (75, 0.021), (76, 0.02), (85, 0.012), (86, 0.024), (99, 0.305)]
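The LDA output above is a sparse list of (topicId, topicWeight) pairs; before comparing documents, such a list is typically expanded into a dense topic vector. A sketch, using the first few pairs printed above — the total number of topics is a guess, since the model size isn't stated:

```python
def dense_topic_vector(pairs, n_topics=100):
    """Expand a sparse (topicId, weight) list, like the LDA output
    above, into a dense vector of length n_topics (assumed here)."""
    v = [0.0] * n_topics
    for topic_id, weight in pairs:
        v[topic_id] = weight
    return v

# First few (topicId, weight) pairs from the printout above:
pairs = [(1, 0.017), (2, 0.011), (15, 0.033), (16, 0.116),
         (24, 0.136), (99, 0.305)]
vec = dense_topic_vector(pairs)
print(vec[99])  # 0.305
```

Unlisted topics get weight 0.0; the resulting dense vectors can then be compared with cosine (or Hellinger) similarity to produce a ranked list like the one below.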

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98396087 481 andrew gelman stats-2010-12-22-The Jumpstart financial literacy survey and the different purposes of tests

Introduction: Mark Palko comments on the (presumably) well-intentioned but silly Jumpstart test of financial literacy , which was given to 7000 high school seniors Given that, as we heard a few years back, most high school seniors can’t locate Miami on a map of the U.S., you won’t be surprised to hear that they flubbed item after item on this quiz. But, as Palko points out, the concept is better than the execution: With the complex, unstable economy, the shift away from traditional pensions and the constant flood of new financial products, financial literacy might be more important now than it has been for decades. You could even make the case for financial illiteracy being a major cause of the economic crisis. But if the supporters of financial literacy need a good measure of how well we’re doing, they’ll need to find a better instrument than the Jump$tart survey. The ‘test’ part of the survey consists of thirty-one questions. That’s not very long but that many questions should be su

same-blog 2 0.9692052 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?


3 0.96911573 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

Introduction: In our new ethics column for Chance , Eric Loken and I write about our current favorite topic: One of our ongoing themes when discussing scientific ethics is the central role of statistics in recognizing and communicating uncer- tainty. Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true. . . . We have in mind an analogy with the notorious AAA-class bonds created during the mid-2000s that led to the subprime mortgage crisis. Lower-quality mortgages—that is, mortgages with high probability of default and, thus, high uncertainty—were packaged and transformed into financial instruments that were (in retrospect, falsely) characterized as low risk. There was a tremendous interest in these securities, not just among the most unscrupulous market manipulators, but in a

4 0.96890754 2137 andrew gelman stats-2013-12-17-Replication backlash

Introduction: Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes , “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.” I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent. That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news. I’ll explain a bit in the conte

5 0.96821934 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

Introduction: Here’s an article that I believe is flat-out entertaining to read. It’s about philosophy, so it’s supposed to be entertaining, in any case. Here’s the abstract: A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but

6 0.96703875 1882 andrew gelman stats-2013-06-03-The statistical properties of smart chains (and referral chains more generally)

7 0.96654946 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

8 0.96552896 2050 andrew gelman stats-2013-10-04-Discussion with Dan Kahan on political polarization, partisan information processing. And, more generally, the role of theory in empirical social science

9 0.96464044 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

10 0.96409601 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value

11 0.96395206 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

12 0.963844 711 andrew gelman stats-2011-05-14-Steven Rhoads’s book, “The Economist’s View of the World”

13 0.9638328 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

14 0.96380413 770 andrew gelman stats-2011-06-15-Still more Mr. P in public health

15 0.96372628 1218 andrew gelman stats-2012-03-18-Check your missing-data imputations using cross-validation

16 0.96308422 1729 andrew gelman stats-2013-02-20-My beef with Brooks: the alternative to “good statistics” is not “no statistics,” it’s “bad statistics”

17 0.96293151 431 andrew gelman stats-2010-11-26-One fun thing about physicists . . .

18 0.96290362 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

19 0.96275246 1253 andrew gelman stats-2012-04-08-Technology speedup graph

20 0.96271646 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”