andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-1779 knowledge-graph by maker-knowledge-mining

1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”


meta info for this blog

Source: html

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write: We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detail …


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. [sent-1, score-0.178]

2 What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. [sent-2, score-0.768]

3 The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. [sent-4, score-0.528]

4 We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. [sent-5, score-0.703]

5 I have not read their paper in detail but I think I pretty much agree with their criticism of classical or strong Bayesian philosophies of the objective or subjective variety. [sent-7, score-0.415]

6 In particular, I agree with them that (a) the traditional Bayesian philosophy (which culminates in the posterior probability of a model being true) is not a good model for the evaluation and replacement of scientific theories, but (b) a fuller, falsificationist Bayesian philosophy can do the job. [sent-8, score-1.122]

7 As always, I find it misleading to focus on the prior distribution as the locus of subjective uncertainty. [sent-10, score-0.315]

8 In some problems, there is more reasonable agreement on the population model; in others, there is more agreement on the data model. [sent-13, score-0.194]

9 It’s just that, for historical reason, “likelihood methods” have been grandfathered in as classical methods and thus don’t suffer the Bayesian taint. [sent-14, score-0.192]

10 All sorts of goofy stories that happened to have been placed in time before 100 BC become canonical; whereas everything that happened after is evaluated in the category of “history” rather than “religion.” [sent-16, score-0.174]

11 This cuts both ways: in one direction, you have people who believe anything that happens to be in the official collection of biblical stories; on the other, historical stories get the benefit of being revisable by evidence. [sent-17, score-0.254]

12 Something similar happens in many statistical problems, when all sorts of critical thinking gets applied to the prior distribution, whereas conventional likelihoods just get accepted. [sent-18, score-0.243]

13 Before going on to the next issue, let me qualify the above by recognizing Deborah Mayo’s point that, in typical cases, the data model differs from the prior distribution by being more accessible to checking. [sent-19, score-0.35]

14 In practice, though, statisticians (including those of the classical or Bayesian variety who complain or rejoice about the subjectivity of prior distributions) often don’t take the opportunity to check the fit of their data models. [sent-20, score-0.215]

15 In the problems I’ve worked on, it’s never seemed to make any sense to talk about the posterior probability that a continuous parameter equals zero or that a particular model is true. [sent-23, score-0.369]

16 This sort of logic seems odd from a scientific perspective (it’s sort of like evaluating the weight of an object by assessing how heavy it looks), but, in the context of the sociology of science, the evaluation of evidence is clearly an important thing that we do. [sent-28, score-0.339]

17 I like this definition, and it gives a clue as to the scientific relevance of statements such as, “I don’t think this finding is actually statistically significant,” even in completely Bayesian settings. [sent-30, score-0.227]

18 In nontrivial settings, our inferences won’t be coherent—if they were, we could just skip Bayesian inference, posterior integration, Stan, etc. [sent-35, score-0.243]

19 …and simply look at data and write down our subjective posterior distributions. [sent-36, score-0.257]

20 Thus, I think coherence is a valuable concept, but not because Bayesian inferences are coherent (they’re not) but because Bayesian inference provides a mechanism for finding and resolving incoherences. [sent-38, score-0.664]
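
Sentence 15 above turns on a simple probability fact: a continuous posterior distribution assigns probability zero to any single point, so “the posterior probability that a continuous parameter equals zero” is not a meaningful quantity, whereas interval and directional probabilities are. A minimal sketch of this, assuming an arbitrary Normal posterior purely for illustration:

```python
# Minimal sketch (hypothetical posterior, for illustration only): under any
# continuous posterior, the probability of a single point such as theta = 0
# is exactly zero, so only interval or directional statements carry information.
from scipy import stats

posterior = stats.norm(loc=1.0, scale=0.5)  # assumed posterior for theta

# P(theta = 0 | y) is the limit of P(|theta| < eps | y) as eps -> 0, and it vanishes:
for eps in (0.1, 0.01, 0.001):
    print(eps, posterior.cdf(eps) - posterior.cdf(-eps))

# By contrast, a directional probability such as P(theta > 0 | y) is well defined:
print(1 - posterior.cdf(0.0))
```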


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bayesian', 0.305), ('philosophy', 0.207), ('objective', 0.178), ('bandyopadhyay', 0.172), ('brittan', 0.172), ('scientific', 0.157), ('inference', 0.149), ('coherent', 0.14), ('logical', 0.14), ('posterior', 0.129), ('subjective', 0.128), ('coherence', 0.117), ('inferences', 0.114), ('bayesianism', 0.113), ('fails', 0.113), ('classical', 0.109), ('prior', 0.106), ('rejoinder', 0.105), ('stories', 0.104), ('cosma', 0.102), ('shalizi', 0.097), ('agreement', 0.097), ('evaluating', 0.096), ('practice', 0.093), ('model', 0.092), ('bayesians', 0.088), ('evaluation', 0.086), ('historical', 0.083), ('distribution', 0.081), ('falsificationist', 0.078), ('leonard', 0.078), ('account', 0.077), ('probability', 0.074), ('dogmatic', 0.074), ('bc', 0.074), ('equals', 0.074), ('resolving', 0.074), ('ascribe', 0.074), ('plato', 0.071), ('suitably', 0.071), ('urge', 0.071), ('qualify', 0.071), ('whereas', 0.07), ('finding', 0.07), ('gordon', 0.068), ('finetti', 0.068), ('occasions', 0.068), ('agents', 0.068), ('background', 0.068), ('happens', 0.067)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9999997 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”


2 0.31817618 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

Introduction: My article with Cosma Shalizi has appeared in the British Journal of Mathematical and Statistical Psychology. I’m so glad this paper has come out. I’d been thinking about writing such a paper for almost 20 years. What got me to actually do it was an invitation a few years ago to write a chapter on Bayesian statistics for a volume on the philosophy of social sciences. Once I started doing that, I realized I had enough for a journal article. I contacted Cosma because he, unlike me, was familiar with the post-1970 philosophy literature (my knowledge went only up to Popper, Kuhn, and Lakatos). We submitted it to a couple statistics journals that didn’t want it (for reasons that weren’t always clear), but ultimately I think it ended up in the right place, as psychologists have been as serious as anyone in thinking about statistical foundations in recent years. Here’s the issue of the journal, which also includes an introduction, several discussions, and a rejoinder: Prior app

3 0.31584677 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

Introduction: I’ll answer the above question after first sharing some background and history on the philosophy of Bayesian statistics, which appeared at the end of our rejoinder to the discussion to which I linked the other day: When we were beginning our statistical educations, the word ‘Bayesian’ conveyed membership in an obscure cult. Statisticians who were outside the charmed circle could ignore the Bayesian subfield, while Bayesians themselves tended to be either apologetic or brazenly defiant. These two extremes manifested themselves in ever more elaborate proposals for non-informative priors, on the one hand, and declarations of the purity of subjective probability, on the other. Much has changed in the past 30 years. ‘Bayesian’ is now often used in casual scientific parlance as a synonym for ‘rational’, the anti-Bayesians have mostly disappeared, and non-Bayesian statisticians feel the need to keep up with developments in Bayesian modelling and computation. Bayesians themselves

4 0.29118949 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

Introduction: Konrad Scheffler writes: I was interested by your paper “Induction and deduction in Bayesian data analysis” and was wondering if you would entertain a few questions: – Under the banner of objective Bayesianism, I would posit something like this as a description of Bayesian inference: “Objective Bayesian probability is not a degree of belief (which would necessarily be subjective) but a measure of the plausibility of a hypothesis, conditional on a formally specified information state. One way of specifying a formal information state is to specify a model, which involves specifying both a prior distribution (typically for a set of unobserved variables) and a likelihood function (typically for a set of observed variables, conditioned on the values of the unobserved variables). Bayesian inference involves calculating the objective degree of plausibility of a hypothesis (typically the truth value of the hypothesis is a function of the variables mentioned above) given such a

5 0.28767252 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, …, y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about θ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewness …

6 0.28237343 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

7 0.27709764 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

8 0.27265516 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

9 0.26548913 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

10 0.25118399 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

11 0.25068158 811 andrew gelman stats-2011-07-20-Kind of Bayesian

12 0.24753197 746 andrew gelman stats-2011-06-05-An unexpected benefit of Arrow’s other theorem

13 0.24333078 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

14 0.23582618 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

15 0.22790435 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

16 0.22319467 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

17 0.22219865 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

18 0.21927162 2034 andrew gelman stats-2013-09-23-My talk Tues 24 Sept at 12h30 at Université de Technologie de Compiègne

19 0.21279022 1469 andrew gelman stats-2012-08-25-Ways of knowing

20 0.20878749 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation
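
The excerpt for entry 5 above (blog 781) sets up a concrete model: y_1, …, y_10 ~ N(θ, σ²) with prior θ ~ N(0, 10²) and σ known and roughly equal to 10. A minimal sketch of what inference and model checking look like for that model, with made-up data values used purely to keep the example runnable (the closed-form posterior also yields the kind of “plausibility of a hypothesis” quantity described in entry 4):

```python
# Minimal sketch of the normal model quoted in entry 5 above:
# y_1, ..., y_10 ~ N(theta, sigma^2), prior theta ~ N(0, 10^2), sigma known (= 10).
# The data values are hypothetical, chosen only to make the example runnable.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([12.1, -3.4, 7.7, 0.2, 15.8, -6.1, 9.9, 4.3, -1.0, 11.5])  # made-up data
n, sigma, mu0, tau0 = len(y), 10.0, 0.0, 10.0

# Conjugate normal-normal update for theta (closed-form posterior)
post_prec = 1 / tau0**2 + n / sigma**2
post_var = 1 / post_prec
post_mean = post_var * (mu0 / tau0**2 + n * y.mean() / sigma**2)
print("posterior mean, sd:", post_mean, np.sqrt(post_var))

# Posterior draws, used for a directional probability ("plausibility of the
# hypothesis theta > 0") and for a crude posterior predictive check
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=2000)
print("P(theta > 0 | y):", np.mean(theta_draws > 0))

def skew(x):
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Replicated datasets; compare sample variance and skewness with the observed data
y_rep = rng.normal(theta_draws[:, None], sigma, size=(2000, n))
print("P(var_rep >= var_obs):", np.mean(y_rep.var(axis=1, ddof=1) >= y.var(ddof=1)))
skew_rep = np.array([skew(row) for row in y_rep])
print("P(skew_rep >= skew_obs):", np.mean(skew_rep >= skew(y)))
```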


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.302), (1, 0.253), (2, -0.162), (3, 0.084), (4, -0.252), (5, -0.043), (6, -0.037), (7, 0.106), (8, 0.013), (9, -0.106), (10, -0.015), (11, -0.063), (12, -0.031), (13, 0.056), (14, 0.033), (15, 0.056), (16, 0.064), (17, 0.005), (18, -0.016), (19, 0.073), (20, -0.049), (21, 0.031), (22, -0.045), (23, -0.057), (24, 0.009), (25, 0.008), (26, 0.033), (27, -0.025), (28, -0.051), (29, 0.012), (30, 0.064), (31, 0.0), (32, -0.013), (33, 0.003), (34, 0.047), (35, 0.044), (36, -0.025), (37, 0.004), (38, 0.018), (39, -0.03), (40, 0.058), (41, -0.033), (42, 0.019), (43, 0.01), (44, 0.006), (45, -0.006), (46, -0.005), (47, 0.028), (48, -0.03), (49, 0.009)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98336798 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”


2 0.92453575 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

Introduction: Continuing with my discussion of the articles in the special issue of the journal Rationality, Markets and Morals on the philosophy of Bayesian statistics: Stephen Senn, “You May Believe You Are a Bayesian But You Are Probably Wrong”: I agree with Senn’s comments on the impossibility of the de Finetti subjective Bayesian approach. As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior. The immense practical difficulties with any serious system of inference render it absurd to think that it would be possible to just write down a probability distribution to represent uncertainty. I wish, however, that Senn would recognize my Bayesian approach (which is also that of John Carlin, Hal Stern, Don Rubin, and, I believe, others). De Finetti is no longer around, but we are! I have to admit that my own Bayesian views and practices have changed. In particular, I resonate with …

3 0.92378551 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

Introduction: Deborah Mayo recommended that I consider coming up with a new name for the statistical methods that I used, given that the term “Bayesian” has all sorts of associations that I dislike (as discussed, for example, in section 1 of this article). I replied that I agree on Bayesian, I never liked the term and always wanted something better, but I couldn’t think of any convenient alternative. Also, I was finding that Bayesians (even the Bayesians I disagreed with) were reading my research articles, while non-Bayesians were simply ignoring them. So I thought it was best to identify with, and communicate with, those people who were willing to engage with me. More formally, I’m happy defining “Bayesian” as “using inference from the posterior distribution, p(theta|y)”. This says nothing about where the probability distributions come from (thus, no requirement to be “subjective” or “objective”) and it says nothing about the models (thus, no requirement to use the discrete models that have …

4 0.90709949 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed …

5 0.90369755 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

Introduction: I’ll answer the above question after first sharing some background and history on the philosophy of Bayesian statistics, which appeared at the end of our rejoinder to the discussion to which I linked the other day: When we were beginning our statistical educations, the word ‘Bayesian’ conveyed membership in an obscure cult. Statisticians who were outside the charmed circle could ignore the Bayesian subfield, while Bayesians themselves tended to be either apologetic or brazenly defiant. These two extremes manifested themselves in ever more elaborate proposals for non-informative priors, on the one hand, and declarations of the purity of subjective probability, on the other. Much has changed in the past 30 years. ‘Bayesian’ is now often used in casual scientific parlance as a synonym for ‘rational’, the anti-Bayesians have mostly disappeared, and non-Bayesian statisticians feel the need to keep up with developments in Bayesian modelling and computation. Bayesians themselves

6 0.90220624 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

7 0.89865673 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

8 0.89420068 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

9 0.87565494 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

10 0.87140954 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

11 0.86279148 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

12 0.859272 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

13 0.84888333 811 andrew gelman stats-2011-07-20-Kind of Bayesian

14 0.84879619 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

15 0.84865266 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

16 0.84860057 746 andrew gelman stats-2011-06-05-An unexpected benefit of Arrow’s other theorem

17 0.84858233 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

18 0.84371448 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

19 0.83693749 1529 andrew gelman stats-2012-10-11-Bayesian brains?

20 0.83361828 932 andrew gelman stats-2011-09-30-Articles on the philosophy of Bayesian statistics by Cox, Mayo, Senn, and others!
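
Entry 3 in the list above defines “Bayesian” simply as using inference from the posterior distribution p(theta|y). For reference, and in standard notation not specific to any of the posts listed here, that posterior combines the prior and the likelihood through Bayes’ rule:

```latex
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{\int p(y \mid \theta)\, p(\theta)\, d\theta} \;\propto\; p(y \mid \theta)\, p(\theta)
```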


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.227), (16, 0.083), (21, 0.02), (24, 0.15), (63, 0.012), (84, 0.032), (86, 0.019), (89, 0.011), (99, 0.3)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98524499 1541 andrew gelman stats-2012-10-19-Statistical discrimination again

Introduction: Mark Johnstone writes: I’ve recently been investigating a new European Court of Justice ruling on insurance calculations (on behalf of MoneySuperMarket) and I found something related to statistics that caught my attention. . . . The ruling (which comes into effect in December 2012) states that insurers in Europe can no longer provide different premiums based on gender. Despite the fact that women are statistically safer drivers, unless it’s biologically proven there is a causal relationship between being female and being a safer driver, this is now seen as an act of discrimination (more on this from the Wall Street Journal). However, where do you stop with this? What about age? What about other factors? And what does this mean for the application of statistics in general? Is it inherently unjust in this context? One proposal has been to fit ‘black boxes’ into cars so more individual data can be collected, as opposed to relying heavily on aggregates. For fans of data and s

2 0.98372173 1081 andrew gelman stats-2011-12-24-Statistical ethics violation

Introduction: A colleague writes: When I was in NYC I went to this party by group of Japanese bio-scientists. There, one guy told me about how the biggest pharmaceutical company in Japan did their statistics. They ran 100 different tests and reported the most significant one. (This was in 2006 and he said they stopped doing this few years back so they were doing this until pretty recently…) I’m not sure if this was 100 multiple comparison or 100 different kinds of test but I’m sure they wouldn’t want to disclose their data… Ouch!

3 0.98168612 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

Introduction: Blogger Deep Climate looks at another paper by the 2002 recipient of the American Statistical Association’s Founders award. This time it’s not funny, it’s just sad. Here’s Wikipedia on simulated annealing: By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random “nearby” solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly “downhill” as T goes to zero. The allowance for “uphill” moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. And here’s Wegman: During each step of the algorithm, the variable that will eventually represent the minimum is replaced by a random solution that is chosen according to a temperature

4 0.98037499 329 andrew gelman stats-2010-10-08-More on those dudes who will pay your professor $8000 to assign a book to your class, and related stories about small-time sleazoids

Introduction: After noticing these remarks on expensive textbooks and this comment on the company that bribes professors to use their books, Preston McAfee pointed me to this update (complete with a picture of some guy who keeps threatening to sue him but never gets around to it). The story McAfee tells is sad but also hilarious. Especially the part about “smuck.” It all looks like one more symptom of the imploding market for books. Prices for intro stat and econ books go up and up (even mediocre textbooks routinely cost $150), and the publishers put more and more effort into promotion. McAfee adds: I [McAfee] hope a publisher sues me about posting the articles I wrote. Even a takedown notice would be fun. I would be pretty happy to start posting about that, especially when some of them are charging $30 per article. Ted Bergstrom and I used state Freedom of Information acts to extract the journal price deals at state university libraries. We have about 35 of them so far. Like te

5 0.98032463 834 andrew gelman stats-2011-08-01-I owe it all to the haters

Introduction: Sometimes when I submit an article to a journal it is accepted right away or with minor alterations. But many of my favorite articles were rejected or had to go through an exhausting series of revisions. For example, this influential article had a very hostile referee and we had to seriously push the journal editor to accept it. This one was rejected by one or two journals before finally appearing with discussion. This paper was rejected by the American Political Science Review with no chance of revision and we had to publish it in the British Journal of Political Science, which was a bit odd given that the article was 100% about American politics. And when I submitted this instant classic (actually at the invitation of the editor), the referees found it to be trivial, and the editor did me the favor of publishing it but only by officially labeling it as a discussion of another article that appeared in the same issue. Some of my most influential papers were accepted right away …

6 0.9756186 133 andrew gelman stats-2010-07-08-Gratuitous use of “Bayesian Statistics,” a branding issue?

7 0.96913302 908 andrew gelman stats-2011-09-14-Type M errors in the lab

8 0.96667695 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

9 0.9627071 1794 andrew gelman stats-2013-04-09-My talks in DC and Baltimore this week

10 0.9594453 1624 andrew gelman stats-2012-12-15-New prize on causality in statstistics education

same-blog 11 0.94607055 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

12 0.9457922 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

13 0.94177318 1800 andrew gelman stats-2013-04-12-Too tired to mock

14 0.93897474 1833 andrew gelman stats-2013-04-30-“Tragedy of the science-communication commons”

15 0.9388293 1998 andrew gelman stats-2013-08-25-A new Bem theory

16 0.93622625 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

17 0.93444222 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

18 0.92947459 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

19 0.92927378 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

20 0.92736673 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?
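
The excerpt for entry 3 in the list above (blog 945) quotes Wikipedia’s description of simulated annealing. A minimal sketch of that description, using a toy one-dimensional objective chosen here only for illustration: each step proposes a random nearby solution and accepts it with a probability that depends on the change in the objective and on a temperature T that is gradually decreased, so uphill moves become rarer as T approaches zero.

```python
# Minimal simulated-annealing sketch matching the quoted description; the objective
# function and all tuning constants are illustrative assumptions, not from the source.
import math
import random

def f(x):
    # toy objective with more than one local minimum
    return x**4 - 3 * x**2 + 0.5 * x

def simulated_annealing(x0, t0=5.0, cooling=0.995, steps=5000):
    x, t = x0, t0
    for _ in range(steps):
        candidate = x + random.gauss(0.0, 0.5)      # random "nearby" solution
        delta = f(candidate) - f(x)
        # accept downhill moves always; accept uphill moves with probability
        # exp(-delta / T), which shrinks as the temperature T is decreased
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling                                 # gradually lower the temperature
    return x

print(simulated_annealing(x0=3.0))
```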