
1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system


meta info for this blog

Source: html

Introduction: Chris Said points us to two proposals to fix the system for reviewing scientific papers. Both the proposals are focused on biological research. Said writes: The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research. . . . Scientists have known about these problems for decades, and there have been several well-intentioned efforts to fix them. The Journal of Articles in Support of the Null Hypothesis (JASNH) is specifically dedicated to null results. . . . Simmons and colleagues (2011) have proposed lists of regulations for other journals to enforce, including minimum sample sizes and requirements for the disclosure of all variables and analyses.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Chris Said points us to two proposals to fix the system for reviewing scientific papers. [sent-1, score-0.387]

2 Both the proposals are focused on biological research. [sent-2, score-0.229]

3 Said writes: The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. [sent-3, score-0.67]

4 Scientists have known about these problems for decades, and there have been several well-intentioned efforts to fix them. [sent-9, score-0.259]

5 The Journal of Articles in Support of the Null Hypothesis (JASNH) is specifically dedicated to null results. [sent-10, score-0.347]

6 Simmons and colleagues (2011) have proposed lists of regulations for other journals to enforce, including minimum sample sizes and requirements for the disclosure of all variables and analyses. [sent-14, score-0.37]

7 I think the whole null hypothesis thing is a bad idea. [sent-17, score-0.414]

8 I understand the appeal of seeing whether a pattern can be explained merely by chance, but let’s not go overboard here: in lots of examples of biological and social sciences, the null hypothesis can’t be true. [sent-18, score-0.606]

9 The issue isn’t “support of the null hypothesis” so much as inferential uncertainty. [sent-19, score-0.28]

10 Said continues: Granting agencies should reward scientists who publish in journals that have acceptance criteria that are aligned with good science. [sent-20, score-0.387]

11 In particular, the agencies should favor journals that devote special sections to replications, including failures to replicate. [sent-21, score-0.452]

12 I would like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. [sent-25, score-0.457]

13 This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes. [sent-26, score-0.697]

14 The recommendations seem reasonable but I disagree with the claim that, if implemented, they would “eliminate the temptation to manipulate data towards desired outcomes.” [sent-27, score-0.532]

15 Of course there is a temptation to find what we want to find. [sent-29, score-0.195]

16 I don’t quite see how it works, but it’s probably a good idea, sort of like my modification of Larry Wasserman’s idea mentioned in our previous post, where I note that if all our publication shifts to Arxiv-like repositories, the defunct journals can retool as lists of recommended reading. [sent-32, score-0.595]

17 Said also writes that, “in the current system, the only signal of a paper’s quality is the journal’s impact factor.” [sent-35, score-0.368]

18 For example, mediocre biology journals have higher impact factors than top statistics journals. [sent-40, score-0.421]

19 Citations are no guarantee of quality either, but they are another signal, not the same as the journal’s impact factor. [sent-43, score-0.269]

20 That’s a lot of signals right there (see also the many different ratings collated here), and I’m probably forgetting a few more. [sent-47, score-0.272]
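Each extracted sentence above carries a tf-idf score. The pipeline that actually built this page is not documented, so the following is only a minimal sketch of this style of extractive scoring, assuming scikit-learn and a mean-of-term-weights scoring rule (both assumptions, not the dataset's confirmed method):

```python
# Sketch of tf-idf sentence scoring for extractive summaries.
# The scoring rule (mean tf-idf weight of a sentence's terms) is an
# assumption; the dataset's actual pipeline is undocumented.
from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(sentences):
    """Score each sentence by the mean tf-idf weight of its terms."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(sentences)  # (n_sentences, n_terms)
    # Mean over the nonzero term weights in each sentence's row.
    scores = matrix.sum(axis=1).A1 / (matrix.getnnz(axis=1) + 1e-9)
    return sorted(zip(scores, sentences), reverse=True)

sentences = [
    "Chris Said points us to two proposals to fix the system for reviewing scientific papers.",
    "Both the proposals are focused on biological research.",
]
for score, sentence in score_sentences(sentences):
    print(f"{score:.3f}  {sentence}")
```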


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('null', 0.28), ('journals', 0.263), ('said', 0.212), ('temptation', 0.195), ('eliminates', 0.165), ('impact', 0.158), ('kriegeskorte', 0.149), ('hypothesis', 0.134), ('journal', 0.133), ('signals', 0.128), ('manipulate', 0.126), ('agencies', 0.124), ('biological', 0.12), ('quality', 0.111), ('citations', 0.11), ('proposals', 0.109), ('lists', 0.107), ('towards', 0.106), ('desired', 0.105), ('fix', 0.101), ('signal', 0.099), ('system', 0.094), ('support', 0.093), ('problems', 0.091), ('scientific', 0.083), ('buckley', 0.083), ('caveats', 0.083), ('jasnh', 0.083), ('niko', 0.083), ('temptations', 0.083), ('probably', 0.079), ('funders', 0.078), ('cherry', 0.078), ('interject', 0.078), ('repositories', 0.075), ('basics', 0.075), ('publication', 0.073), ('previous', 0.073), ('granting', 0.072), ('weaknesses', 0.072), ('tabloids', 0.072), ('overboard', 0.072), ('rug', 0.072), ('education', 0.07), ('enforce', 0.07), ('paper', 0.069), ('known', 0.067), ('dedicated', 0.067), ('devote', 0.065), ('forgetting', 0.065)]
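The (wordName, wordTfidf) pairs above are the top-weighted terms in this post's tf-idf vector. A sketch of how such a list can be produced, assuming scikit-learn (1.0+ for get_feature_names_out) and an illustrative stand-in corpus; the dataset's real corpus and vectorizer settings are not documented:

```python
# Sketch: extract the top-weighted tf-idf terms for one document, as in
# the (wordName, wordTfidf) list above. Corpus is a placeholder.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "text of post 1272 on peer review and journals ...",
    "text of another post ...",
]  # illustrative stand-ins for the full blog corpus
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

doc_idx = 0                                      # the post of interest
row = tfidf[doc_idx].toarray().ravel()
terms = np.array(vectorizer.get_feature_names_out())
top = np.argsort(row)[::-1][:50]                 # top 50 terms, as listed above
print(list(zip(terms[top], np.round(row[top], 3))))
```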

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system


2 0.2056751 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

Introduction: We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids. And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology. But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows: Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place

3 0.20532383 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

Introduction: Tyler Cowen links to a paper by Bruno Frey on the lack of space for articles in economics journals. Frey writes: To further their careers, [academic economists] are required to publish in A-journals, but for the vast majority this is impossible because there are few slots open in such journals. Such academic competition maybe useful to generate hard work, however, there may be serious negative consequences: the wrong output may be produced in an inefficient way, the wrong people may be selected, and losers may react in a harmful way. According to Frey, the consensus is that there are only five top economics journals–and one of those five is Econometrica, which is so specialized that I’d say that, for most academic economists, there are only four top places they can publish. The difficulty is that demand for these slots outpaces supply: for example, in 2007 there were only 275 articles in all these journals combined (or 224 if you exclude Econometrica), while “a rough estim

4 0.17353815 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

Introduction: Erin Jonaitis points us to this article by Christopher Ferguson and Moritz Heene, who write: Publication bias remains a controversial issue in psychological science. . . . that the field often constructs arguments to block the publication and interpretation of null results and that null results may be further extinguished through questionable researcher practices. Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact. They mention the infamous Daryl Bem article. It is pretty much only because Bem’s claims are (presumably) false that they got published in a major research journal. Had the claims been true—that is, had Bem run identical experiments, analyzed his data more carefully and objectively, and reported that the r

5 0.17291561 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Introduction: Masanao sends this one in, under the heading, “another incident of misunderstood p-value”: Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students. Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you’re testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke – but still, there is always a certain probability that it was. Statistical significance testing gives you an idea of what this probability is. In science we’re always testing hypotheses. We never conduct a study to ‘see what happens’, because there’s always at least one way to make any useless set of data look important. We take

6 0.17215662 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

7 0.16689214 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

8 0.16652854 1429 andrew gelman stats-2012-07-26-Our broken scholarly publishing system

9 0.16553238 2245 andrew gelman stats-2014-03-12-More on publishing in journals

10 0.16500193 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

11 0.16189064 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

12 0.16081759 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

13 0.1606248 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?

14 0.15582083 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

15 0.151223 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

16 0.1459364 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

17 0.14500092 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

18 0.14455472 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

19 0.14199632 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

20 0.14035688 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations
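Ranked lists like the one above are typically produced by cosine similarity between tf-idf document vectors. A minimal sketch under that assumption, with an illustrative toy corpus:

```python
# Sketch: rank posts by cosine similarity of tf-idf vectors, producing a
# (simIndex, simValue) list like the one above. Corpus is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "text of post 1272 on peer review ...",
    "text of post 1928 on low-grade journals ...",
    "text of post 371 on econ journals ...",
]  # illustrative stand-ins for the full blog corpus
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

doc_idx = 0                                    # the post of interest
sims = cosine_similarity(tfidf[doc_idx], tfidf).ravel()
for rank, j in enumerate(sims.argsort()[::-1], start=1):
    print(rank, round(float(sims[j]), 7), corpus[j][:40])
```

The same-blog entry's simValue of 1.0000001 is consistent with this: a document's cosine similarity with itself is exactly 1, and the tiny excess is floating-point rounding.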


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.235), (1, -0.059), (2, -0.055), (3, -0.152), (4, -0.085), (5, -0.06), (6, -0.001), (7, -0.066), (8, -0.029), (9, -0.041), (10, 0.061), (11, 0.004), (12, -0.066), (13, -0.041), (14, -0.003), (15, -0.06), (16, 0.001), (17, -0.005), (18, -0.035), (19, -0.04), (20, 0.041), (21, 0.058), (22, -0.013), (23, 0.039), (24, -0.067), (25, -0.041), (26, 0.004), (27, 0.005), (28, 0.008), (29, 0.027), (30, 0.011), (31, -0.047), (32, 0.071), (33, 0.006), (34, -0.085), (35, -0.033), (36, 0.041), (37, -0.003), (38, 0.015), (39, 0.072), (40, -0.056), (41, -0.02), (42, -0.02), (43, 0.044), (44, -0.025), (45, 0.093), (46, 0.066), (47, -0.048), (48, 0.02), (49, 0.014)]
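The signed topic weights above are characteristic of an LSI projection, i.e. a truncated SVD of the tf-idf matrix. A sketch under that assumption; the actual component count (apparently around 50) and the corpus here are stand-ins:

```python
# Sketch: an LSI (latent semantic indexing) projection via truncated SVD
# over the tf-idf matrix. Signed weights, as above, are expected.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "text of post 1272 on peer review ...",
    "text of a post on journals ...",
    "text of a post on replication ...",
]  # illustrative stand-ins for the full blog corpus
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# The page's weights suggest a ~50-dimensional LSI space; 2 components
# keep this toy corpus workable.
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsi.fit_transform(tfidf)          # entries may be negative
print([(k, round(float(w), 3)) for k, w in enumerate(doc_topics[0])])
```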

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96559507 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system


2 0.82505596 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports: Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox: - From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse?? - But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

3 0.81081283 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

Introduction: I discussed two problems: 1. An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. I’d prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper, and then if the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. 2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless

4 0.80968523 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

Introduction: Dan Kahan writes: The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too. Now I [Kahan] am aware of a set of real journals that have a similar motivation. One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to “reject” the null. Like JASNH, LR ≠1J would try to offset the “file drawer” bias and like bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05.” But it would try to do more. By publishing studies that are deemed to have valid designs an

5 0.79219699 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

6 0.78178751 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

7 0.78014988 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

8 0.77104527 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

9 0.76994467 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

10 0.76935971 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

11 0.76700592 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

12 0.76575911 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

13 0.7654165 1998 andrew gelman stats-2013-08-25-A new Bem theory

14 0.76251465 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

15 0.7585507 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

16 0.7542153 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

17 0.75381953 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey

18 0.75039995 2137 andrew gelman stats-2013-12-17-Replication backlash

19 0.7485972 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

20 0.74717212 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.022), (6, 0.013), (9, 0.017), (15, 0.071), (16, 0.059), (17, 0.101), (21, 0.012), (24, 0.114), (30, 0.024), (47, 0.031), (52, 0.011), (55, 0.033), (65, 0.024), (84, 0.012), (95, 0.018), (99, 0.317)]
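The sparse (topicId, topicWeight) pairs above look like per-document LDA topic proportions, which are nonnegative and sum to 1 across topics. A sketch assuming scikit-learn's LDA over raw term counts (LDA models counts, not tf-idf weights); the topic count and corpus here are illustrative:

```python
# Sketch: per-document LDA topic weights, as in the sparse
# (topicId, topicWeight) list above. Corpus is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "text of post 1272 on peer review ...",
    "text of another post ...",
]  # illustrative stand-ins for the full blog corpus
counts = CountVectorizer(stop_words="english").fit_transform(corpus)

# The page's weights suggest ~100 topics; a small number keeps this
# toy example fast.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)          # each row sums to 1
print([(k, round(float(w), 3)) for k, w in enumerate(doc_topics[0]) if w > 0.01])
```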

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9691031 309 andrew gelman stats-2010-10-01-Why Development Economics Needs Theory?

Introduction: Robert Neumann writes: in the JEP 24(3), page 18, Daron Acemoglu states: Why Development Economics Needs Theory There is no general agreement on how much we should rely on economic theory in motivating empirical work and whether we should try to formulate and estimate “structural parameters.” I (Acemoglu) argue that the answer is largely “yes” because otherwise econometric estimates would lack external validity, in which case they can neither inform us about whether a particular model or theory is a useful approximation to reality, nor would they be useful in providing us guidance on what the effects of similar shocks and policies would be in different circumstances or if implemented in different scales. I therefore define “structural parameters” as those that provide external validity and would thus be useful in testing theories or in policy analysis beyond the specific environment and sample from which they are derived. External validity becomes a particularly challenging t

2 0.96743774 1616 andrew gelman stats-2012-12-10-John McAfee is a Heinlein hero

Introduction: “A small group of mathematicians” Jenny Davidson points to this article by Krugman on Asimov’s Foundation Trilogy. Given the silliness of the topic, Krugman’s piece is disappointingly serious (“Maybe the first thing to say about Foundation is that it’s not exactly science fiction – not really. Yes, it’s set in the future, there’s interstellar travel, people shoot each other with blasters instead of pistols and so on. But these are superficial details . . . the story can sound arid and didactic. . . . you’ll also be disappointed if you’re looking for shoot-em-up action scenes, in which Han Solo and Luke Skywalker destroy the Death Star in the nick of time. . . .”). What really jumped out at me from Krugman’s piece, though, was this line: In Foundation, we learn that a small group of mathematicians have developed “psychohistory”, the aforementioned rigorous science of society. Like Davidson (and Krugman), I read the Foundation books as a child. I remember the “psychohisto

3 0.96339852 1557 andrew gelman stats-2012-11-01-‘Researcher Degrees of Freedom’

Introduction: False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant [I]t is unacceptably easy to publish “statistically significant” evidence consistent with any hypothesis. The culprit is a construct we refer to as researcher degrees of freedom. In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both? It is rare, and sometimes impractical, for researchers to make all these decisions beforehand. Rather, it is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields “statistical significance,” and to then report only what “worked.” The problem, of course, is that the likelihood of at leas

4 0.9632805 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

Introduction: Martin Lindquist and Michael Sobel published a fun little article in Neuroimage on models and assumptions for causal inference with intermediate outcomes. As their subtitle indicates (“A response to the comments on our comment”), this is a topic of some controversy. Lindquist and Sobel write: Our original comment (Lindquist and Sobel, 2011) made explicit the types of assumptions neuroimaging researchers are making when directed graphical models (DGMs), which include certain types of structural equation models (SEMs), are used to estimate causal effects. When these assumptions, which many researchers are not aware of, are not met, parameters of these models should not be interpreted as effects. . . . [Judea] Pearl does not disagree with anything we stated. However, he takes exception to our use of potential outcomes notation, which is the standard notation used in the statistical literature on causal inference, and his comment is devoted to promoting his alternative conventions. [C

same-blog 5 0.96236587 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system


6 0.95896214 397 andrew gelman stats-2010-11-06-Multilevel quantile regression

7 0.95675015 1362 andrew gelman stats-2012-06-03-Question 24 of my final exam for Design and Analysis of Sample Surveys

8 0.95621783 2314 andrew gelman stats-2014-05-01-Heller, Heller, and Gorfine on univariate and multivariate information measures

9 0.95257759 1422 andrew gelman stats-2012-07-20-Likelihood thresholds and decisions

10 0.95057058 2125 andrew gelman stats-2013-12-05-What predicts whether a school district will participate in a large-scale evaluation?

11 0.94922841 1076 andrew gelman stats-2011-12-21-Derman, Rodrik and the nature of statistical models

12 0.9480648 1230 andrew gelman stats-2012-03-26-Further thoughts on nonparametric correlation measures

13 0.94803953 1591 andrew gelman stats-2012-11-26-Politics as an escape hatch

14 0.94601047 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

15 0.94324946 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

16 0.9424473 1383 andrew gelman stats-2012-06-18-Hierarchical modeling as a framework for extrapolation

17 0.94204551 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

18 0.94170338 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

19 0.94146907 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

20 0.94096464 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”