andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-576 knowledge-graph by maker-knowledge-mining

576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today


meta info for this blog

Source: html

Introduction: Chris Masse points me to this response by Daryl Bem and two statisticians (Jessica Utts and Wesley Johnson) to criticisms by Wagenmakers et al. of Bem’s recent ESP study. I have nothing to add but would like to repeat a couple bits of my discussions of last month, from here: Classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. I think it’s naive when people implicitly assume that the study’s claims are correct, or the study’s statistical methods are weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects and so ESP researchers use high-quality statistics. To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. The difference is that when you’re studying a large, robust phenomenon, little statistical errors won’t be so damaging as in a study of a fragile, possibly zero effect.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I have nothing to add but would like to repeat a couple bits of my discussions of last month, from here: Classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. [sent-4, score-0.5]

2 ESP is a field of small effects and so ESP researchers use high-quality statistics. [sent-8, score-0.255]

3 To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. [sent-9, score-0.237]

4 The difference is that when you’re studying a large, robust phenomenon, little statistical errors won’t be so damaging as in a study of a fragile, possibly zero effect. [sent-10, score-0.538]

5 In some ways, there’s an analogy to the difficulties of using surveys to estimate small proportions, in which case misclassification errors can loom large; a numerical sketch of this point follows this list. [sent-11, score-0.29]

6 The other key part of Bayesian inference–the more important part, I’d argue–is “shrinkage” or “partial pooling,” in which estimates get pooled toward zero (or, more generally, toward their estimates based on external information); a minimal sketch of this pooling also appears after this list. [sent-15, score-0.3]

7 Whatever filter you use–whatever rule you use to decide whether something is worth publishing–I still want to see some modeling and shrinkage (or, at least, some retrospective power analysis) to handle the overestimation problem. [sent-17, score-0.402]

8 This is something Martin and I discussed in our discussion of the “voodoo correlations” paper of Vul et al. [sent-18, score-0.32]

9 Finally, my argument for why a top psychology journal should never have published Bem’s article: I mean, how hard would it be for the experimenters to gather more data, do some sifting, find out which subjects are good at ESP, etc. [sent-19, score-0.228]

10 It’s not like a study of the democratic or capitalistic peace, where you have a fixed amount of data and you have to learn what you can. [sent-23, score-0.266]

11 As Tal Yarkoni would say, I agree with Wagenmakers et al. [sent-31, score-0.32]

12 Given the long history of ESP experiments (as noted by some of the commenters below), it seems more reasonable to me to suppose that these studies have some level of measurement error of magnitude larger than that of any ESP effects themselves. [sent-35, score-0.183]

13 As I’ve already discussed, I’m not thrilled with the discrete models used in these discussions and I am for some reason particularly annoyed by the labels “Strong,” “Substantial,” “Anecdotal” in figure 4 of Wagenmakers et al. [sent-36, score-0.385]

14 Just for example, suppose you conduct a perfect randomized experiment on a large random sample of people. [sent-38, score-0.201]

15 There’s nothing anecdotal at all about this (hypothetical) study. [sent-39, score-0.217]

16 Nonetheless, it might very well be that the effect under study is tiny, in which case a statistical analysis (Bayesian or otherwise) is likely to report no effect. [sent-41, score-0.24]

17 It could fall into the “anecdotal” category used by Wagenmakers et al. [sent-42, score-0.39]

18 That said, I think people have to use what statistical methods they’re comfortable with, so it’s sort of silly for me to fault Wagenmakers et al. [sent-44, score-0.552]

19 The key point that they and other critics have made is that the Bem et al. [sent-46, score-0.397]

20 As I note above, my take on this is that if you study very small effects, then no amount of statistical sophistication will save you. [sent-48, score-0.385]
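
Sentence 5’s analogy is quantitative, so here is a minimal numerical sketch of it. All of the rates below are hypothetical, chosen only to show how misclassification can swamp a small true proportion; this is an illustration, not a claim about any particular survey.

```python
# Minimal sketch of sentence 5's point: when the true proportion is tiny,
# even small misclassification rates dominate the survey estimate.
# All rates here are hypothetical, chosen only for illustration.

true_rate = 0.001   # true prevalence: 0.1% of respondents
false_pos = 0.01    # 1% of true negatives misclassified as positives
false_neg = 0.10    # 10% of true positives misclassified as negatives

# Expected observed proportion under this misclassification model:
observed = true_rate * (1 - false_neg) + (1 - true_rate) * false_pos
print(f"true rate: {true_rate:.4f}, expected observed rate: {observed:.4f}")
# -> observed ~ 0.0109: the estimate is dominated by false positives,
#    off by roughly a factor of ten from the true rate.
```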

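Sentence 6’s “shrinkage” idea can likewise be made concrete. The snippet below is a minimal normal-normal partial-pooling calculation, assuming a prior centered at zero with a hypothetical prior standard deviation; it is a sketch of the general technique, not the analysis that Bem or Wagenmakers et al. actually ran.

```python
# Minimal sketch of the partial pooling ("shrinkage") idea in sentence 6:
# a normal-normal model pools a noisy estimate toward a prior mean (here
# zero), with weights given by the precisions. Hypothetical numbers only.

def shrink(estimate, se, prior_mean=0.0, prior_sd=0.05):
    """Posterior mean under y ~ N(theta, se^2), theta ~ N(prior_mean, prior_sd^2)."""
    w = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / se**2)  # weight on the prior
    return w * prior_mean + (1 - w) * estimate

# A "statistically significant" but noisy estimate of a small effect...
print(shrink(estimate=0.10, se=0.04))   # -> 0.061: shrunk about 40% toward zero
# ...versus a precisely estimated large effect, which barely moves:
print(shrink(estimate=0.50, se=0.01))   # -> 0.481
```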

similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('wagenmakers', 0.359), ('esp', 0.321), ('et', 0.32), ('bem', 0.307), ('anecdotal', 0.217), ('shrinkage', 0.146), ('filter', 0.136), ('study', 0.134), ('tal', 0.124), ('yarkoni', 0.124), ('effects', 0.107), ('statistical', 0.106), ('studying', 0.1), ('gather', 0.088), ('errors', 0.083), ('small', 0.082), ('psychology', 0.08), ('key', 0.077), ('suppose', 0.076), ('whatever', 0.074), ('generally', 0.072), ('fall', 0.07), ('capitalistic', 0.069), ('sifting', 0.069), ('masse', 0.069), ('utts', 0.069), ('use', 0.066), ('misclassification', 0.065), ('discussions', 0.065), ('amount', 0.063), ('experiment', 0.063), ('voodoo', 0.062), ('err', 0.062), ('large', 0.062), ('bayes', 0.061), ('experimenters', 0.06), ('snedecor', 0.06), ('damaging', 0.06), ('loom', 0.06), ('wesley', 0.06), ('methods', 0.06), ('rush', 0.058), ('observer', 0.058), ('vul', 0.058), ('toward', 0.057), ('cochran', 0.057), ('zero', 0.055), ('pooled', 0.054), ('overestimation', 0.054), ('fragile', 0.054)]
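
The (wordName, wordTfidf) pairs above are the post’s tfidf term weights, and the simValue numbers in the lists below appear to be similarities between such weight vectors. As a hedged sketch of how scores of this kind are commonly computed (the mining pipeline’s actual code is not shown on this page), the toy example below builds tfidf vectors with scikit-learn and takes cosine similarities; the documents are invented.

```python
# Hedged sketch of how similarity scores like the simValue column below
# might be computed from tfidf vectors such as the one above. Illustration
# with toy documents, not the mining pipeline's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "ESP is a field of small effects and so ESP researchers use statistics",
    "statistical sophistication will not save you when effects are small",
    "how should journals handle replication studies",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)               # rows: documents, columns: terms

sims = cosine_similarity(X[0], X)           # similarity of doc 0 to all docs
print(dict(zip(tfidf.get_feature_names_out(), X[0].toarray()[0])))  # term weights
print(sims)                                 # doc 0 vs itself is 1.0
```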

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today


2 0.44613874 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

Introduction: John Talbott points me to this , which I briefly mocked a couple months ago. I largely agree with the critics of this research, but I want to reiterate my point from earlier that all the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero—who am I to say?—just that the comments about the high-quality statistics in the article don’t say much to me. There’s lots of discussion of the lack of science underlying ESP claims. I can’t offer anything useful on that account (not being a psychologist, I could imagine all sorts of stories about brain waves or whatever), but I would like to point out something that usually doesn’t seem to get mentioned in these discussions, which is that lots of people want to believe in ESP. After all, it would be cool to read minds. (It wouldn’t be so cool, maybe, if other people could read your mind and you couldn’t read theirs, but I suspect most people don’t think

3 0.42127976 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

4 0.31302384 1998 andrew gelman stats-2013-08-25-A new Bem theory

Introduction: The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and repeatedly failed to replicate. My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the

5 0.22317475 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports : Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox: - From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse?? - But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

6 0.20397706 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

7 0.1672706 1690 andrew gelman stats-2013-01-23-When are complicated models helpful in psychology research and when are they overkill?

8 0.15443499 758 andrew gelman stats-2011-06-11-Hey, good news! Your p-value just passed the 0.05 threshold!

9 0.1500106 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

10 0.14248896 897 andrew gelman stats-2011-09-09-The difference between significant and not significant…

11 0.1416835 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

12 0.13691889 691 andrew gelman stats-2011-05-03-Psychology researchers discuss ESP

13 0.131493 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty

14 0.13054782 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works

15 0.12779178 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

16 0.12530065 446 andrew gelman stats-2010-12-03-Is 0.05 too strict as a p-value threshold?

17 0.12523411 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

18 0.12364689 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

19 0.11862813 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

20 0.11540031 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.25), (1, 0.031), (2, 0.002), (3, -0.192), (4, -0.059), (5, -0.084), (6, -0.021), (7, -0.024), (8, -0.015), (9, -0.045), (10, 0.004), (11, 0.002), (12, 0.026), (13, -0.058), (14, 0.061), (15, -0.008), (16, -0.039), (17, 0.015), (18, -0.037), (19, 0.007), (20, -0.034), (21, 0.004), (22, -0.031), (23, 0.015), (24, -0.05), (25, -0.023), (26, -0.035), (27, 0.057), (28, 0.011), (29, -0.044), (30, 0.016), (31, 0.043), (32, 0.024), (33, -0.049), (34, 0.002), (35, -0.005), (36, -0.034), (37, -0.018), (38, -0.084), (39, 0.007), (40, 0.003), (41, 0.121), (42, 0.012), (43, 0.08), (44, 0.0), (45, -0.034), (46, -0.027), (47, 0.034), (48, -0.015), (49, 0.049)]
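
The (topicId, topicWeight) pairs above are the post’s coordinates in a latent semantic (LSI) space. LSI is conventionally a truncated SVD of the tfidf matrix; the sketch below shows one standard way such signed topic weights can arise, again on invented toy documents rather than the actual corpus behind these numbers.

```python
# Hedged sketch of where LSI topic weights like those above could come from:
# latent semantic indexing is typically a truncated SVD of the tfidf matrix.
# Toy data again; the actual model behind these numbers is not shown.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "ESP is a field of small effects and so ESP researchers use statistics",
    "statistical sophistication will not save you when effects are small",
    "how should journals handle replication studies",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

lsi = TruncatedSVD(n_components=2, random_state=0)  # 2 latent topics
Z = lsi.fit_transform(X)        # rows: per-document topic weights
print(Z)                        # signed weights, like the (topicId, topicWeight) pairs
print(cosine_similarity(Z))     # similarities in the latent space
```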

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96460539 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today


2 0.89330256 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

3 0.83449596 897 andrew gelman stats-2011-09-09-The difference between significant and not significant…

Introduction: E. J. Wagenmakers writes: You may be interested in a recent article [by Nieuwenhuis, Forstmann, and Wagenmakers] showing how often researchers draw conclusions by comparing p-values. As you and Hal Stern have pointed out, this is potentially misleading because the difference between significant and not significant is not necessarily significant. We were really surprised to see how often researchers in the neurosciences make this mistake. In the paper we speculate a little bit on the cause of the error. From their paper: In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscien

4 0.81320179 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

Introduction: John Talbott points me to this , which I briefly mocked a couple months ago. I largely agree with the critics of this research, but I want to reiterate my point from earlier that all the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero—who am I to say?—just that the comments about the high-quality statistics in the article don’t say much to me. There’s lots of discussion of the lack of science underlying ESP claims. I can’t offer anything useful on that account (not being a psychologist, I could imagine all sorts of stories about brain waves or whatever), but I would like to point out something that usually doesn’t seem to get mentioned in these discussions, which is that lots of people want to believe in ESP. After all, it would be cool to read minds. (It wouldn’t be so cool, maybe, if other people could read your mind and you couldn’t read theirs, but I suspect most people don’t think

5 0.81196296 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

Introduction: Jeff Leek just posted the discussions of his paper (with Leah Jager), “An estimate of the science-wise false discovery rate and application to the top medical literature,” along with some further comments of his own. Here are my original thoughts on an earlier version of their article. Keith O’Rourke and I expanded these thoughts into a formal comment for the journal. We’re pretty much in agreement with John Ioannidis (you can find his discussion in the top link above). In quick summary, I agree with Jager and Leek that this is an important topic. I think there are two key places where Keith and I disagree with them: 1. They take published p-values at face value whereas we consider them as the result of a complicated process of selection. This is something I didn’t used to think much about, but now I’ve become increasingly convinced that the problems with published p-values are not a simple file-drawer effect or the case of a few p=0.051 values nudged toward p=0.049, bu

6 0.80379462 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

7 0.79789788 1998 andrew gelman stats-2013-08-25-A new Bem theory

8 0.77741385 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

9 0.77699751 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

10 0.76908082 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

11 0.76585311 2093 andrew gelman stats-2013-11-07-I’m negative on the expression “false positives”

12 0.76197743 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies

13 0.76088959 1883 andrew gelman stats-2013-06-04-Interrogating p-values

14 0.7544148 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?

15 0.75284117 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

16 0.75103909 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

17 0.74892342 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

18 0.7483061 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

19 0.74816769 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

20 0.74191695 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses
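
Item 3 in this list turns on a specific statistical point: the difference between “significant” and “not significant” is not itself necessarily significant. A minimal numerical sketch of that point, with purely hypothetical estimates and standard errors:

```python
# Hedged sketch of the point in item 3 above: one estimate can be
# "significant" and another not, while their difference is itself not
# close to significant. Hypothetical numbers for illustration only.
from scipy import stats

est_a, se_a = 0.25, 0.10   # z = 2.5 -> p ~ 0.012 ("significant")
est_b, se_b = 0.10, 0.10   # z = 1.0 -> p ~ 0.317 ("not significant")

z_diff = (est_a - est_b) / (se_a**2 + se_b**2) ** 0.5
p_diff = 2 * (1 - stats.norm.cdf(abs(z_diff)))
print(f"z for the difference: {z_diff:.2f}, p = {p_diff:.2f}")
# -> z ~ 1.06, p ~ 0.29: the difference between "significant" and
#    "not significant" is not itself statistically significant.
```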


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.014), (15, 0.211), (16, 0.055), (19, 0.013), (21, 0.036), (24, 0.221), (30, 0.024), (95, 0.022), (96, 0.013), (99, 0.269)]
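
The lda weights above differ in kind from the lsi ones: LDA assigns each document a probability distribution over topics, and it is normally fit on raw word counts rather than tfidf. The sketch below uses scikit-learn’s LatentDirichletAllocation on toy documents; whether this page’s pipeline used that implementation is an assumption.

```python
# Hedged sketch of LDA topic weights like those above: each document gets
# a probability distribution over topics, inferred from word counts (LDA
# is normally fit on raw counts, not tfidf). Toy corpus, illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "ESP is a field of small effects and so ESP researchers use statistics",
    "statistical sophistication will not save you when effects are small",
    "how should journals handle replication studies",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # rows sum to 1: per-document topic weights
print(theta)                        # analogous to the (topicId, topicWeight) pairs
```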

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98251152 1081 andrew gelman stats-2011-12-24-Statistical ethics violation

Introduction: A colleague writes: When I was in NYC I went to this party by a group of Japanese bio-scientists. There, one guy told me about how the biggest pharmaceutical company in Japan did their statistics. They ran 100 different tests and reported the most significant one. (This was in 2006 and he said they stopped doing this a few years back, so they were doing this until pretty recently…) I’m not sure if this was 100 multiple comparisons or 100 different kinds of tests, but I’m sure they wouldn’t want to disclose their data… Ouch! (A simulation of this “best of 100” practice appears at the end of this list.)

2 0.97722983 1800 andrew gelman stats-2013-04-12-Too tired to mock

Introduction: Someone sent me an email with the subject line “A terrible infographic,” and it went on from there: “Given some of your recent writing on infovis, I thought you might get a kick out of this . . . I’m certainly sympathetic to their motivations, but some of these plots do not aid understanding… To pick on a few in particular, the first plot attached, cropped from the infographic, is a strange alternative to a bar plot. For the second attachment, I still don’t understand what they’ve plotted. . . .” I agree with everything he wrote, but at this point I think I’m getting too exhausted to laugh at graphs unless there is an obvious political bias to point to.

3 0.9746722 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

Introduction: Blogger Deep Climate looks at another paper by the 2002 recipient of the American Statistical Association’s Founders award. This time it’s not funny, it’s just sad. Here’s Wikipedia on simulated annealing: By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random “nearby” solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly “downhill” as T goes to zero. The allowance for “uphill” moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. And here’s Wegman: During each step of the algorithm, the variable that will eventually represent the minimum is replaced by a random solution that is chosen according to a temperature

4 0.95910263 1541 andrew gelman stats-2012-10-19-Statistical discrimination again

Introduction: Mark Johnstone writes: I’ve recently been investigating a new European Court of Justice ruling on insurance calculations (on behalf of MoneySuperMarket) and I found something related to statistics that caught my attention. . . . The ruling (which comes into effect in December 2012) states that insurers in Europe can no longer provide different premiums based on gender. Despite the fact that women are statistically safer drivers, unless it’s biologically proven there is a causal relationship between being female and being a safer driver, this is now seen as an act of discrimination (more on this from the Wall Street Journal). However, where do you stop with this? What about age? What about other factors? And what does this mean for the application of statistics in general? Is it inherently unjust in this context? One proposal has been to fit ‘black boxes’ into cars so more individual data can be collected, as opposed to relying heavily on aggregates. For fans of data and s

5 0.95833302 133 andrew gelman stats-2010-07-08-Gratuitous use of “Bayesian Statistics,” a branding issue?

Introduction: I’m on an island in Maine for a few weeks (big shout out for North Haven!) This morning I picked up a copy of “Working Waterfront,” a newspaper that focuses on issues of coastal fishing communities. I came across an article about modeling “fish” populations — actually lobsters, I guess they’re considered “fish” for regulatory purposes. When I read it, I thought “wow, this article is really well-written, not dumbed down like articles in most newspapers.” I think it’s great that a small coastal newspaper carries reporting like this. (The online version has a few things that I don’t recall in the print version, too, so it’s even better). But in addition to being struck by finding such a good article in a small newspaper, I was struck by this: According to [University of Maine scientist Yong] Chen, there are four main areas where his model improved on the prior version. “We included the inshore trawl data from Maine and other state surveys, in addition to federal survey data; we h

same-blog 6 0.94866359 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

7 0.94842762 2188 andrew gelman stats-2014-01-27-“Disappointed with your results? Boost your scientific paper”

8 0.94228011 1794 andrew gelman stats-2013-04-09-My talks in DC and Baltimore this week

9 0.93839335 329 andrew gelman stats-2010-10-08-More on those dudes who will pay your professor $8000 to assign a book to your class, and related stories about small-time sleazoids

10 0.9353013 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

11 0.93437409 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

12 0.93069875 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

13 0.92996991 834 andrew gelman stats-2011-08-01-I owe it all to the haters

14 0.92080331 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

15 0.91642916 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

16 0.91584086 908 andrew gelman stats-2011-09-14-Type M errors in the lab

17 0.91264415 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

18 0.91249645 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

19 0.90818352 1624 andrew gelman stats-2012-12-15-New prize on causality in statstistics education

20 0.90625727 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association
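
Item 1 in this list (“ran 100 different tests and reported the most significant one”) invites a quick check of how misleading that practice is. The simulation below is a hedged sketch with made-up sample sizes: it runs 100 null t-tests many times over and asks how often the best of them clears p < 0.05.

```python
# Hedged simulation of the practice described in item 1 above: run 100
# tests on pure noise and report the most significant one. Illustrative
# sketch only, with made-up sample sizes; not the company's procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_tests, n_obs = 2000, 100, 50

min_ps = []
for _ in range(n_sims):
    # 100 independent one-sample t-tests on data with no true effect
    data = rng.normal(size=(n_tests, n_obs))
    ps = stats.ttest_1samp(data, 0.0, axis=1).pvalue
    min_ps.append(ps.min())

min_ps = np.array(min_ps)
print(f"P(best of 100 null tests has p < 0.05) ~ {(min_ps < 0.05).mean():.3f}")
# -> ~0.994, matching 1 - 0.95**100: the "most significant of 100" is
#    nearly guaranteed to look significant even when nothing is there.
```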