andrew_gelman_stats-2013-1774 knowledge-graph by maker-knowledge-mining

1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal


meta info for this blog post

Source: html

Introduction: Dan Kahan writes: The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too. Now I [Kahan] am aware of a set of real journals that have a similar motivation. One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to “reject” the null. Like JASNH, LR ≠1J would try to offset the “file drawer” bias and like bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05.” But it would try to do more. By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR ≠1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers’ hypotheses.
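
For context on the name: the journal's title presumably refers to the standard likelihood ratio. A study design is informative about two competing hypotheses exactly when the evidence it can produce has a likelihood ratio different from 1; the definition below is the textbook one, not something spelled out in the excerpt above.

    % Likelihood ratio of evidence E under competing hypotheses H_1 and H_2.
    % If LR(E) = 1, the evidence cannot shift the odds between the hypotheses;
    % the proposed journal would commit, ex ante, to designs whose outcomes can have LR != 1.
    \mathrm{LR}(E) = \frac{P(E \mid H_1)}{P(E \mid H_2)},
    \qquad
    \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \mathrm{LR}(E)\,\frac{P(H_1)}{P(H_2)} .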


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. [sent-4, score-1.916]

2 Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too. [sent-5, score-1.259]

3 Now I [Kahan] am aware of a set of real journals that have a similar motivation. [sent-6, score-0.208]

4 One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to “reject” the null. [sent-7, score-0.359]

5 Like JASNH, LR ≠1J would try to offset the “file drawer” bias and like bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05.” [sent-8, score-0.311]

6 By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR ≠1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers’ hypotheses. [sent-11, score-1.046]

7 Some additional journals that likewise try (very sensibly) to promote recognition of studies that report unexpected, surprising, or controversial findings include Contradicting Results in Science; Journal of Serendipitous and Unexpected Results; and Journal of Negative Results in Biomedicine. [sent-15, score-0.532]

8 These journals are very worthwhile, too, but still focus on results, not the identification of designs the validity of which would be recognized ex ante by reasonable people who disagree! [sent-16, score-0.841]

9 I am also aware of the idea to set up registries for designs for studies before they are carried out. [sent-17, score-0.81]

10 Papers describing the design and ones reporting the results will be published separately, and in sequence, to promote the success of LR≠1’s sister journal, “Put Your Money Where Your Mouth Is, Mr. [sent-25, score-0.366]

11 ‘That’s Obvious,’ ” which will conduct on-line prediction markets for “experts” & others willing to bet on the outcome of pending LR≠1 studies. [sent-27, score-0.075]

12 For comic relief, LR ≠1J will also run a feature that publishes reviews of articles submitted to other journals that LR≠1J referees agree suggest the potential operation of one of the influences identified above. [sent-31, score-0.54]

13 The journal would then (3) fund the study, and finally, (4) publish the results. [sent-33, score-0.347]

14 This procedure would generate the same benefits as “adversary collaboration” but without insisting that adversaries collaborate. [sent-34, score-0.302]

15 Rather than adding any new comments, I’ll just refer you to my two discussions (here and here) from last year of four other entries (by Brendan Nyhan, Larry Wasserman, Chris Said, and Niko Kriegeskorte) in the ever-popular genre of, Our Peer-Review System is in Trouble; How Can We Fix It? [sent-35, score-0.06]

16 And, if I could get all Dave Krantz-y for a moment, I’d suggest that this discussion could be improved on all sides (including my own) by starting with goals and going from there, rather than jumping straight into problems and potential solutions. [sent-37, score-0.063]
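
The [sent-N, score-X] annotations above are per-sentence scores from the tfidf model. The exact maker-knowledge-mining pipeline is not documented here, but a minimal Python sketch of tf-idf sentence scoring for an extractive summary, assuming scikit-learn (all function and variable names are illustrative), could look like the following.

    # Minimal sketch: score each sentence by the summed tf-idf weight of its terms,
    # then keep the highest-scoring sentences as the extractive summary.
    # Assumes scikit-learn; the real tokenization and weighting may differ.
    from sklearn.feature_extraction.text import TfidfVectorizer

    def score_sentences(sentences, corpus):
        vectorizer = TfidfVectorizer(stop_words="english")
        vectorizer.fit(corpus)                        # learn idf weights on the whole corpus
        sent_matrix = vectorizer.transform(sentences)
        scores = sent_matrix.sum(axis=1).A1           # one summed tf-idf score per sentence
        return sorted(zip(sentences, scores), key=lambda pair: pair[1], reverse=True)

    # Hypothetical usage: summary = score_sentences(post_sentences, all_posts)[:16]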


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('lr', 0.472), ('designs', 0.378), ('studies', 0.179), ('promote', 0.151), ('results', 0.145), ('journal', 0.144), ('journals', 0.136), ('generate', 0.128), ('proposing', 0.126), ('publishes', 0.11), ('would', 0.107), ('carried', 0.106), ('regardless', 0.104), ('kahan', 0.098), ('unexpected', 0.097), ('publish', 0.096), ('competing', 0.092), ('agree', 0.088), ('identification', 0.088), ('articles', 0.083), ('scholars', 0.082), ('dan', 0.08), ('pending', 0.075), ('jasnh', 0.075), ('niko', 0.075), ('contradicting', 0.075), ('registries', 0.075), ('publishing', 0.073), ('aware', 0.072), ('disagree', 0.071), ('sensibly', 0.07), ('reporting', 0.07), ('researchers', 0.068), ('favoring', 0.067), ('ex', 0.067), ('proposition', 0.067), ('conjectures', 0.067), ('insisting', 0.067), ('warrant', 0.067), ('kriegeskorte', 0.067), ('try', 0.066), ('ante', 0.065), ('offset', 0.065), ('relief', 0.065), ('deemed', 0.063), ('mouth', 0.063), ('drawer', 0.063), ('suggest', 0.063), ('genre', 0.06), ('comic', 0.06)]
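
The (word, weight) pairs above are this post's top tf-idf terms, and the simValue column in the similar-blogs list that follows is presumably a cosine similarity between tf-idf vectors. A Python sketch of both, again assuming scikit-learn and with illustrative names:

    # Sketch: top-N tf-idf terms for one post, plus cosine similarities to every post.
    # Assumes scikit-learn; vocabulary, stop-word, and normalization choices are guesses.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def top_terms_and_neighbors(docs, doc_index, top_n=50):
        vectorizer = TfidfVectorizer(stop_words="english")
        X = vectorizer.fit_transform(docs)                    # posts x terms
        row = X[doc_index].toarray().ravel()
        terms = vectorizer.get_feature_names_out()
        top_terms = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:top_n]
        sims = cosine_similarity(X[doc_index], X).ravel()     # simValue for every post
        order = sims.argsort()[::-1]                          # most similar posts first
        return top_terms, list(zip(order, sims[order]))

The same-blog row at the top of each list scores essentially 1.0 because it is just the post matched against itself.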

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal


2 0.18531604 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

Introduction: I recently became aware of two new entries in the ever-popular genre of, Our Peer-Review System is in Trouble; How Can We Fix It? Political scientist Brendan Nyhan, commenting on experimental and empirical sciences more generally, focuses on the selection problem that positive rather than negative findings tend to get published, leading via the statistical significance filter to an overestimation of effect sizes. Nyhan recommends that data-collection protocols be published ahead of time, with the commitment to publish the eventual results: In the case of experimental data, a better practice would be for journals to accept articles before the study was conducted. The article should be written up to the point of the results section, which would then be populated using a pre-specified analysis plan submitted by the author. The journal would then allow for post-hoc analysis and interpretation by the author that would be labeled as such and distinguished from the previously submit

3 0.16689214 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system

Introduction: Chris Said points us to two proposals to fix the system for reviewing scientific papers. Both the proposals are focused on biological research. Said writes : The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research. . . . Scientists have known about these problems for decades, and there have been several well-intentioned efforts to fix them. The Journal of Articles in Support of the Null Hypothesis (JASNH) is specifically dedicated to null results. . . . Simmons and colleagues (2011) have proposed lists of regulations for other journals to enforce, including minimum sample sizes and requirements for the disclosure of all variables and

4 0.15142262 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

Introduction: We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids. And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology. But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows: Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place

5 0.14673217 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

6 0.14406638 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

7 0.13671082 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

8 0.13543423 2245 andrew gelman stats-2014-03-12-More on publishing in journals

9 0.12822996 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

10 0.12756442 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

11 0.12669872 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

12 0.12052856 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

13 0.11225698 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

14 0.11034035 1209 andrew gelman stats-2012-03-12-As a Bayesian I want scientists to report their data non-Bayesianly

15 0.1103265 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

16 0.1096568 2050 andrew gelman stats-2013-10-04-Discussion with Dan Kahan on political polarization, partisan information processing. And, more generally, the role of theory in empirical social science

17 0.10722576 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

18 0.10625432 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

19 0.10199299 1844 andrew gelman stats-2013-05-06-Against optimism about social science

20 0.10155039 838 andrew gelman stats-2011-08-04-Retraction Watch


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.169), (1, -0.042), (2, -0.037), (3, -0.147), (4, -0.068), (5, -0.074), (6, 0.007), (7, -0.075), (8, -0.044), (9, -0.007), (10, 0.073), (11, -0.002), (12, -0.018), (13, -0.01), (14, 0.005), (15, -0.035), (16, 0.01), (17, 0.019), (18, -0.032), (19, 0.006), (20, 0.006), (21, 0.032), (22, -0.024), (23, 0.025), (24, 0.002), (25, -0.004), (26, 0.019), (27, -0.013), (28, 0.007), (29, -0.001), (30, -0.045), (31, -0.069), (32, 0.045), (33, 0.025), (34, -0.04), (35, 0.019), (36, -0.02), (37, 0.014), (38, 0.029), (39, 0.043), (40, 0.008), (41, -0.036), (42, 0.01), (43, 0.048), (44, -0.01), (45, 0.014), (46, 0.055), (47, 0.058), (48, -0.01), (49, 0.022)]
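
The 50 numbers above are this post's weights on 50 latent topics. LSI is typically a truncated SVD of the tf-idf matrix, and the simValue list that follows would then be cosine similarity in the reduced topic space; a minimal Python sketch under those assumptions (component count and preprocessing are guesses):

    # Sketch: LSI as truncated SVD of the tf-idf matrix. One row of the reduced
    # matrix gives the (topicId, topicWeight) vector above; cosine similarity
    # between rows gives the simValue ranking below. Names are illustrative.
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def lsi_topics_and_similarities(docs, doc_index, n_topics=50):
        X = TfidfVectorizer(stop_words="english").fit_transform(docs)
        Z = TruncatedSVD(n_components=n_topics, random_state=0).fit_transform(X)
        topic_weights = list(enumerate(Z[doc_index].round(3)))   # [(topicId, weight), ...]
        sims = cosine_similarity(Z[doc_index:doc_index + 1], Z).ravel()
        return topic_weights, sims.argsort()[::-1]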

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97477973 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal


2 0.88460761 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

Introduction: Via Yalda Afshar, a 2005 paper by Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt: Publication bias is a well known phenomenon in clinical literature, in which positive results have a better chance of being published, are published earlier, and are published in journals with higher impact factors. Conclusions exclusively based on published studies, therefore, can be misleading. Selective under-reporting of research might be more widespread and more likely to have adverse consequences for patients than publication of deliberately falsified data. We investigated whether there is preferential publication of positive papers on publication bias. They conclude, “We found no evidence of publication bias in reports on publication bias.” But of course that’s the sort of finding regarding publication bias of findings on publication bias that you’d expect would get published. What we really need is a careful meta-analysis to estimate the level of publication bias in studies of publi

3 0.86075342 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

Introduction: Dylan Small writes: I am starting an observational studies journal that aims to publish papers on all aspects of observational studies, including study protocols for observational studies, methodologies for observational studies, descriptions of data sets for observational studies, software for observational studies and analyses of observational studies. One of the goals of the journal is to promote the planning of observational studies and to publish study plans for observational studies, like study plans are published for major clinical trials. Regular readers will know my suggestion that scientific journals move away from the idea of being unique publishers of new material and move toward a “newsletter” approach, recommending papers from Arxiv, SSRN, etc. So, instead of going through exhausting review and revision processes, the journal editors would read and review recent preprints on observational studies and then, each month or quarter or whatever, produce a list of pap

4 0.83790666 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions that Richard Lucas and M. Brent Donnellan have on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

5 0.83561963 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

Introduction: Responding to a proposal to move the journal Political Analysis from double-blind to single-blind reviewing (that is, authors would not know who is reviewing their papers but reviewers would know the authors’ names), Tom Palfrey writes: I agree with the editors’ recommendation. I have served on quite a few editorial boards of journals with different blinding policies, and have seen no evidence that double blind procedures are a useful way to improve the quality of articles published in a journal. Aside from the obvious administrative nuisance and the fact that authorship anonymity is a thing of the past in our discipline, the theoretical and empirical arguments in both directions lead to an ambiguous conclusion. Also keep in mind that the editors know the identity of the authors (they need to know for practical reasons), their identity is not hidden from authors, and ultimately it is they who make the accept/reject decision, and also lobby their friends and colleagues to submit “the

6 0.83460498 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

7 0.82951158 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

8 0.82406497 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

9 0.82310784 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

10 0.81253898 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

11 0.81136644 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

12 0.81109041 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports

13 0.80558771 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

14 0.79227597 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system

15 0.78900301 2137 andrew gelman stats-2013-12-17-Replication backlash

16 0.78496474 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

17 0.78353804 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

18 0.78317982 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

19 0.7828989 1055 andrew gelman stats-2011-12-13-Data sharing update

20 0.78256589 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.01), (5, 0.034), (9, 0.021), (15, 0.079), (16, 0.076), (21, 0.039), (24, 0.122), (30, 0.02), (36, 0.014), (37, 0.072), (43, 0.01), (63, 0.017), (72, 0.028), (82, 0.01), (84, 0.057), (86, 0.028), (95, 0.031), (99, 0.219)]
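
The sparse (topicId, topicWeight) pairs above are this post's LDA topic distribution, with near-zero topics omitted. A minimal Python sketch, assuming scikit-learn's LatentDirichletAllocation (the true topic count, priors, and cutoff are not documented here):

    # Sketch: LDA topic distribution for one post, thresholded to the sparse
    # (topicId, topicWeight) pairs shown above. Assumes scikit-learn; the real
    # pipeline's topic count and hyperparameters are unknown.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    def lda_topic_pairs(docs, doc_index, n_topics=100, min_weight=0.01):
        counts = CountVectorizer(stop_words="english").fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        theta = lda.fit_transform(counts)             # posts x topics; each row sums to ~1
        return [(t, round(w, 3)) for t, w in enumerate(theta[doc_index]) if w >= min_weight]

The similarities in the list that follows would then be computed between these topic distributions, for example by cosine similarity.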

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95657831 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal


2 0.92901301 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works

Introduction: In an ideal world, research articles would be open to criticism and discussion in the same place where they are published, in a sort of non-corrupt version of Yelp. What is happening now is that the occasional paper or research area gets lots of press coverage, and this inspires reactions on science-focused blogs. The trouble here is that it’s easier to give off-the-cuff comments than detailed criticisms. Here’s an example. It starts a couple years ago with this article by Ryota Kanai, Tom Feilden, Colin Firth, and Geraint Rees, on brain size and political orientation: In a large sample of young adults, we related self-reported political attitudes to gray matter volume using structural MRI. We found that greater liberalism was associated with increased gray matter volume in the anterior cingulate cortex, whereas greater conservatism was associated with increased volume of the right amygdala. These results were replicated in an independent sample of additional participants. Ou

3 0.92890036 1035 andrew gelman stats-2011-11-29-“Tobin’s analysis here is methodologically old-fashioned in the sense that no attempt is made to provide microfoundations for the postulated adjustment processes”

Introduction: Rajiv Sethi writes the above in a discussion of a misunderstanding of the economics of Keynes. The discussion is interesting. According to Sethi, Keynes wrote that, in a depression, nominal wages might be sticky but in any case a decline in wages would not do the trick to increase hiring. But many modern economics writers have missed this. For example, Gary Becker writes, “Keynes and many earlier economists emphasized that unemployment rises during recessions because nominal wage rates tend to be inflexible in the downward direction. . . . A fall in price stimulates demand and reduces supply until they are brought back to rough equality.” Whether Becker is empirically correct is another story, but in any case he is misinterpreting Keynes. But the actual reason I’m posting here is in reaction to Sethi’s remark quoted in the title above, in which he endorses a 1975 paper by James Tobin on wages and employment but remarks that Tobin’s paper did not include the individual-level de

4 0.92196345 5 andrew gelman stats-2010-04-27-Ethical and data-integrity problems in a study of mortality in Iraq

Introduction: Michael Spagat notifies me that his article criticizing the 2006 study of Burnham, Lafta, Doocy and Roberts has just been published. The Burnham et al. paper (also called, to my irritation (see the last item here), “the Lancet survey”) used a cluster sample to estimate the number of deaths in Iraq in the three years following the 2003 invasion. In his newly-published paper, Spagat writes: [The Spagat article] presents some evidence suggesting ethical violations to the survey’s respondents including endangerment, privacy breaches and violations in obtaining informed consent. Breaches of minimal disclosure standards examined include non-disclosure of the survey’s questionnaire, data-entry form, data matching anonymised interviewer identifications with households and sample design. The paper also presents some evidence relating to data fabrication and falsification, which falls into nine broad categories. This evidence suggests that this survey cannot be considered a reliable or

5 0.91577864 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

Introduction: Mike Spagat points to this interview, which, he writes, covers themes that are discussed on the blog such as wrong ideas that don’t die, peer review and the statistics of conflict deaths. I agree. It’s good stuff. Here are some of the things that Spagat says (he’s being interviewed by Joel Wing): In fact, the standard excess-deaths concept leads to an interesting conundrum when combined with an interesting fact exposed in the next-to-latest Human Security Report; in most countries child mortality rates decline during armed conflict (chapter 6). So if you believe the usual excess-death causality story then you’re forced to conclude that many conflicts actually save the lives of many children. Of course, the idea of wars saving lives is pretty hard to swallow. A much more sensible understanding is that there are a variety of factors that determine child deaths and that in many cases the factors that save the lives of children are stronger than the negative effects that confli

6 0.91230774 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

7 0.91145372 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

8 0.9105444 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

9 0.91043925 42 andrew gelman stats-2010-05-19-Updated solutions to Bayesian Data Analysis homeworks

10 0.90994513 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

11 0.90739083 1883 andrew gelman stats-2013-06-04-Interrogating p-values

12 0.90702003 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

13 0.90629244 1645 andrew gelman stats-2012-12-31-Statistical modeling, causal inference, and social science

14 0.90580332 586 andrew gelman stats-2011-02-23-A statistical version of Arrow’s paradox

15 0.90543067 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

16 0.90472019 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

17 0.90470219 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

18 0.90437388 481 andrew gelman stats-2010-12-22-The Jumpstart financial literacy survey and the different purposes of tests

19 0.9035517 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

20 0.90346247 1713 andrew gelman stats-2013-02-08-P-values and statistical practice