andrew_gelman_stats andrew_gelman_stats-2010 andrew_gelman_stats-2010-291 knowledge-graph by maker-knowledge-mining

291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo


meta info for this blog

Source: html

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference, which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2” to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and expanded …


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. [sent-4, score-0.897]

2 I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and expanded, and I also feel strongly (based on my own experience and on my understanding of science) that some of our most important learning comes when we refute our models (in relevant ways). [sent-5, score-0.48]

3 ) Mayo: You wrote “I’m not quite sure what a “frequentist method” is, but I will assume that the term refers to any statistical method for which a frequency evaluation has been performed. [sent-13, score-0.386]

4 My understanding of the frequentist approach is that it treats all inferential statements as functions of data and thus as random variables, with the randomness induced by the sampling distribution, that is, the probability model describing which particular data happened to arise. [sent-17, score-0.725]

5 From this perspective (associated with George Box and Donald Rubin, among others), any method is frequentist if it is evaluated in that way. [sent-18, score-0.642]

6 Granted, calculating frequentist error probabilities of procedures is a necessary part of “frequentism” but it was never intended to be sufficient. [sent-23, score-0.802]

7 As noted in section 1 above, the frequentist method, as I understand it, is an approach for evaluating inferences …. [sent-28, score-0.55]

8 So, but what would you call standard (frequentist) procedures for arriving at estimators and tests (that satisfy frequentist goals e. [sent-30, score-0.912]

9 Now, granted, Bayesian philosophers have at times interpreted Popper Bayesianly, and he inadvertently opens himself to this false reading by not being clear that the only statistical view in sync with his philosophy is frequentist error statistics. [sent-54, score-0.914]

10 We know this because he talks everywhere about his insistence that probability attach NOT to hypotheses but only to methods or procedures of testing. [sent-57, score-0.417]

11 (v) Finally, Popper denied that high posterior probability in hypotheses is at all desirable as a scientific goal. [sent-59, score-0.325]

12 If, as you seem to suggest, we are to use significance tests to check models, then it would seem you are appealing to frequentist error probabilities, e. [sent-62, score-0.82]

13 You ask: “would you call standard (frequentist) procedures for arriving at estimators and tests (that satisfy frequentist goals e. [sent-70, score-0.912]

14 The Bayesian models of 50 years ago seem hopelessly simple (except, of course, for simple problems), and I expect the Bayesian models of today will seem hopelessly simple, 50 years hence. [sent-75, score-0.556]

15 The frequentist story, as I understand it, is to advance through better procedures. [sent-79, score-0.49]

16 And, in contrast to some Bayesians, I _do_ view refutations and falsifications as Bayesian. [sent-90, score-0.33]

17 You ask: “If, as you seem to suggest, we are to use significance tests to check models, then it would seem you are appealing to frequentist error probabilities, e. [sent-103, score-0.82]

18 But these are Bayesian p-values, not frequentist p-values. [sent-107, score-0.49]

19 Yes, there are frequentist p-values, but there are Bayesian p-values too. [sent-109, score-0.49]

20 I think Mayo’s take on this is that I’m not really giving frequentist methods a fair shake. [sent-112, score-0.567]
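The sentence scores above come from a tf-idf ranking pass. The sketch below is a hypothetical reconstruction (the actual maker-knowledge-mining pipeline is not shown in this dump): each sentence is vectorized and scored by the sum of its tf-idf term weights, then sentences are ranked by that score. The sentences here are toy stand-ins, not the real corpus.

```python
# Hypothetical sketch: score sentences by summed tf-idf weight, as an
# extractive summarizer might. Sentences are toy stand-ins for the post.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "A big problem I have with mainstream Bayesianism is its inductivist view.",
    "I see models as ever-changing entities that can be patched and expanded.",
    "The frequentist story is to advance through better procedures.",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences)  # sentence-by-term sparse matrix

# Each sentence's score is the sum of its term weights; higher = more central.
scores = np.asarray(X.sum(axis=1)).ravel()
ranked = sorted(zip(scores, sentences), reverse=True)
for score, sentence in ranked:
    print(f"{score:.3f}  {sentence}")
```

Real summarizers usually normalize by sentence length and use corpus-level document frequencies; this toy version only illustrates the scoring idea behind the sentScore column.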


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('frequentist', 0.49), ('bayesian', 0.206), ('popper', 0.152), ('method', 0.152), ('models', 0.133), ('mayo', 0.125), ('falsifications', 0.125), ('procedures', 0.124), ('perplexing', 0.114), ('arriving', 0.114), ('perplexity', 0.114), ('refutations', 0.114), ('posterior', 0.109), ('hypotheses', 0.108), ('probability', 0.108), ('bayesians', 0.107), ('philosophy', 0.1), ('tests', 0.099), ('probabilities', 0.095), ('error', 0.093), ('view', 0.091), ('popperian', 0.089), ('estimators', 0.085), ('evaluation', 0.083), ('corroborated', 0.083), ('premises', 0.083), ('statistical', 0.08), ('methods', 0.077), ('inductivist', 0.076), ('entities', 0.076), ('frequentism', 0.076), ('hopelessly', 0.076), ('patched', 0.076), ('inference', 0.072), ('frequency', 0.071), ('seem', 0.069), ('model', 0.067), ('bayes', 0.067), ('smoothly', 0.066), ('granted', 0.064), ('operate', 0.064), ('concept', 0.064), ('refute', 0.062), ('inductive', 0.062), ('hypothesis', 0.061), ('approach', 0.06), ('philosophers', 0.06), ('expanded', 0.058), ('returning', 0.058), ('lakatos', 0.057)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo


2 0.3382546 247 andrew gelman stats-2010-09-01-How does Bayes do it?

Introduction: I received the following message from a statistician working in industry: I am studying your paper, A Weakly Informative Default Prior Distribution for Logistic and Other Regression Models . I am not clear why the Bayesian approaches with some priors can usually handle the issue of nonidentifiability or can get stable estimates of parameters in model fit, while the frequentist approaches cannot. My reply: 1. The term “frequentist approach” is pretty general. “Frequentist” refers to an approach for evaluating inferences, not a method for creating estimates. In particular, any Bayes estimate can be viewed as a frequentist inference if you feel like evaluating its frequency properties. In logistic regression, maximum likelihood has some big problems that are solved with penalized likelihood–equivalently, Bayesian inference. A frequentist can feel free to consider the prior as a penalty function rather than a probability distribution of parameters. 2. The reason our approa

3 0.27001715 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

Introduction: I’ll answer the above question after first sharing some background and history on the philosophy of Bayesian statistics, which appeared at the end of our rejoinder to the discussion to which I linked the other day: When we were beginning our statistical educations, the word ‘Bayesian’ conveyed membership in an obscure cult. Statisticians who were outside the charmed circle could ignore the Bayesian subfield, while Bayesians themselves tended to be either apologetic or brazenly defiant. These two extremes manifested themselves in ever more elaborate proposals for non-informative priors, on the one hand, and declarations of the purity of subjective probability, on the other. Much has changed in the past 30 years. ‘Bayesian’ is now often used in casual scientific parlance as a synonym for ‘rational’, the anti-Bayesians have mostly disappeared, and non-Bayesian statisticians feel the need to keep up with developments in Bayesian modelling and computation. Bayesians themselves

4 0.26511806 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been obse

5 0.2618759 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

Introduction: Yes, checking calibration of probability forecasts is part of Bayesian statistics. At the end of this post are three figures from Chapter 1 of Bayesian Data Analysis illustrating empirical evaluation of forecasts. But first the background. Why am I bringing this up now? It’s because of something Larry Wasserman wrote the other day: One of the striking facts about [baseball/political forecaster Nate Silver's recent] book is the emphasis that Silver places on frequency calibration. . . . Have no doubt about it: Nate Silver is a frequentist. For example, he says: One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated. I had some discussion with Larry in the comments section of h

6 0.25400171 746 andrew gelman stats-2011-06-05-An unexpected benefit of Arrow’s other theorem

7 0.25151652 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

8 0.25118399 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

9 0.24395683 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon

10 0.24195756 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

11 0.23697183 534 andrew gelman stats-2011-01-24-Bayes at the end

12 0.23231208 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

13 0.22781497 1181 andrew gelman stats-2012-02-23-Philosophy: Pointer to Salmon

14 0.22732671 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

15 0.22611372 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

16 0.21442682 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

17 0.2108355 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

18 0.20863596 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

19 0.20786113 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

20 0.20773973 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.316), (1, 0.256), (2, -0.116), (3, 0.05), (4, -0.189), (5, -0.01), (6, -0.085), (7, 0.108), (8, 0.093), (9, -0.145), (10, -0.027), (11, -0.068), (12, -0.039), (13, -0.002), (14, 0.007), (15, 0.031), (16, 0.037), (17, -0.018), (18, -0.024), (19, 0.001), (20, 0.012), (21, 0.047), (22, 0.002), (23, 0.002), (24, 0.014), (25, -0.057), (26, 0.007), (27, 0.041), (28, 0.018), (29, -0.02), (30, 0.02), (31, 0.058), (32, 0.044), (33, 0.02), (34, 0.013), (35, -0.046), (36, -0.01), (37, -0.013), (38, 0.013), (39, -0.019), (40, 0.01), (41, -0.023), (42, 0.015), (43, 0.012), (44, -0.01), (45, 0.024), (46, 0.008), (47, 0.01), (48, -0.019), (49, -0.035)]
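A minimal LSI-style sketch (not the actual pipeline that produced the topic weights above): tf-idf vectors, truncated SVD into a low-dimensional latent topic space, then cosine similarity between documents. The corpus, the number of components, and the resulting numbers are all toy placeholders.

```python
# Hypothetical LSI similarity: tf-idf -> truncated SVD -> cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "bayesian inference posterior probability models",
    "frequentist error probabilities significance tests procedures",
    "posterior probability of bayesian models and hypotheses",
]

X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)  # topicId dimensions
Z = lsi.fit_transform(X)  # each row: one document in topic space

sims = cosine_similarity(Z)  # diagonal: each doc's similarity to itself
print(sims.round(3))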

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97766691 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo


2 0.92743999 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics


3 0.89876938 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

Introduction: Deborah Mayo recommended that I consider coming up with a new name for the statistical methods that I used, given that the term “Bayesian” has all sorts of associations that I dislike (as discussed, for example, in section 1 of this article ). I replied that I agree on Bayesian, I never liked the term and always wanted something better, but I couldn’t think of any convenient alternative. Also, I was finding that Bayesians (even the Bayesians I disagreed with) were reading my research articles, while non-Bayesians were simply ignoring them. So I thought it was best to identify with, and communicate with, those people who were willing to engage with me. More formally, I’m happy defining “Bayesian” as “using inference from the posterior distribution, p(theta|y)”. This says nothing about where the probability distributions come from (thus, no requirement to be “subjective” or “objective”) and it says nothing about the models (thus, no requirement to use the discrete models that hav

4 0.88375837 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

Introduction: Ryan Ickert writes: I was wondering if you’d seen this post , by a particle physicist with some degree of influence. Dr. Dorigo works at CERN and Fermilab. The penultimate paragraph is: From the above expression, the Frequentist researcher concludes that the tracker is indeed biased, and rejects the null hypothesis H0, since there is a less-than-2% probability (P’<α) that a result as the one observed could arise by chance! A Frequentist thus draws, strongly, the opposite conclusion than a Bayesian from the same set of data. How to solve the riddle? He goes on to not solve the riddle. Perhaps you can? Surely with the large sample size they have (n=10^6), the precision on the frequentist p-value is pretty good, is it not? My reply: The first comment on the site (by Anonymous [who, just to be clear, is not me; I have no idea who wrote that comment], 22 Feb 2012, 21:27pm) pretty much nails it: In setting up the Bayesian model, Dorigo assumed a silly distribution on th

5 0.88193101 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

6 0.88090932 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

7 0.87753785 1469 andrew gelman stats-2012-08-25-Ways of knowing

8 0.86831617 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

9 0.86530739 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

10 0.85649329 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

11 0.85631806 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

12 0.85534906 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

13 0.848086 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison

14 0.84684652 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

15 0.84088933 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

16 0.83725733 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

17 0.83396 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

18 0.82707357 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

19 0.82496673 1262 andrew gelman stats-2012-04-12-“Not only defended but also applied”: The perceived absurdity of Bayesian inference

20 0.8221249 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.034), (16, 0.09), (18, 0.014), (21, 0.03), (24, 0.166), (53, 0.022), (55, 0.017), (63, 0.127), (77, 0.016), (84, 0.027), (86, 0.02), (89, 0.011), (99, 0.274)]
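A hedged sketch of producing per-document (topicId, topicWeight) pairs like those above, using scikit-learn’s LDA on bag-of-words counts. The corpus, topic count, and resulting weights are illustrative only; only topics above a weight threshold would be kept in a list like the one above.

```python
# Hypothetical LDA topic mixture per document; rows of `weights` sum to 1.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "bayesian posterior probability models inference",
    "frequentist tests error probabilities procedures",
    "philosophy of bayesian and frequentist statistics",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)  # document-by-topic distribution

for doc_id, w in enumerate(weights):
    print(doc_id, [(topic_id, round(float(x), 3)) for topic_id, x in enumerate(w)])
```

Unlike the LSI weights, which can be negative, LDA weights are proper probabilities, which is why every entry in the lda list above lies in [0, 1].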

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9827522 1484 andrew gelman stats-2012-09-05-Two exciting movie ideas: “Second Chance U” and “The New Dirty Dozen”

Introduction: I have a great idea for a movie. Actually two movies based on two variants of a similar idea. It all started when I saw this story: Dr. Anil Potti, the controversial cancer researcher whose work at Duke University led to lawsuits from patients, is now a medical oncologist at the Cancer Center of North Dakota in Grand Forks. When asked about Dr. Potti’s controversial appointment, his new boss said: If a guy can’t get a second chance here in North Dakota, where he trained, man, you can’t get a second chance anywhere. (Link from Retraction Watch, of course.) Potti’s boss is also quoted as saying, “Most, if not all, his patients have loved him.” On the other hand, the news article reports: “The North Carolina medical board’s website lists settlements against Potti of at least $75,000.” I guess there’s no reason you can’t love a guy and still want a juicy malpractice settlement. Second Chance U I don’t give two poops about Dr. Anil Potti. But seeing the above s

2 0.97964776 102 andrew gelman stats-2010-06-21-Why modern art is all in the mind

Introduction: This looks cool: Ten years ago researchers in America took two groups of three-year-olds and showed them a blob of paint on a canvas. Children who were told that the marks were the result of an accidental spillage showed little interest. The others, who had been told that the splodge of colour had been carefully created for them, started to refer to it as “a painting”. Now that experiment . . . has gone on to form part of the foundation of an influential new book that questions the way in which we respond to art. . . . The book, which is subtitled The New Science of Why We Like What We Like, is not an attack on modern or contemporary art and Bloom says fans of more traditional art are not capable of making purely aesthetic judgments either. “I don’t have a strong position about the art itself,” he said this weekend. “But I do have a strong position about why we actually like it.” This sounds fascinating. But I’m skeptical about this part: Humans are incapable of just getti

3 0.97889161 1621 andrew gelman stats-2012-12-13-Puzzles of criminal justice

Introduction: Four recent news stories about crime and punishment made me realize, yet again, how little I understand all this. 1. “HSBC to Pay $1.92 Billion to Settle Charges of Money Laundering” : State and federal authorities decided against indicting HSBC in a money-laundering case over concerns that criminal charges could jeopardize one of the world’s largest banks and ultimately destabilize the global financial system. Instead, HSBC announced on Tuesday that it had agreed to a record $1.92 billion settlement with authorities. . . . I don’t understand this idea of punishing the institution. I have the same problem when the NCAA punishes a college football program. These are individual people breaking the law (or the rules), right? So why not punish them directly? Giving 40 lashes to a bunch of HSBC executives and garnisheeing their salaries for life, say, that wouldn’t destabilize the global financial system would it? From the article: “A money-laundering indictment, or a guilt

4 0.97682279 782 andrew gelman stats-2011-06-29-Putting together multinomial discrete regressions by combining simple logits

Introduction: When predicting 0/1 data we can use logit (or probit or robit or some other robust model such as invlogit (0.01 + 0.98*X*beta)). Logit is simple enough and we can use bayesglm to regularize and avoid the problem of separation. What if there are more than 2 categories? If they’re ordered (1, 2, 3, etc), we can do ordered logit (and use bayespolr() to avoid separation). If the categories are unordered (vanilla, chocolate, strawberry), there are unordered multinomial logit and probit models out there. But it’s not so easy to fit these multinomial models in a multilevel setting (with coefficients that vary by group), especially if the computation is embedded in an iterative routine such as mi where you have real time constraints at each step. So this got me wondering whether we could kluge it with logits. Here’s the basic idea (in the ordered and unordered forms): - If you have a variable that goes 1, 2, 3, etc., set up a series of logits: 1 vs. 2,3,…; 2 vs. 3,…; and so forth

5 0.96889627 293 andrew gelman stats-2010-09-23-Lowess is great

Introduction: I came across this old blog entry that was just hilarious–but it’s from 2005 so I think most of you haven’t seen it. It’s the story of two people named Martin Voracek and Maryanne Fisher who in a published discussion criticized lowess (a justly popular nonlinear regression method). Curious, I looked up “Martin Voracek” on the web and found an article in the British Medical Journal whose title promised “trend analysis.” I was wondering what statistical methods they used–something more sophisticated than lowess, perhaps? They did have one figure, and here it is: Voracek and Fisher, the critics of lowess, fit straight lines to clearly nonlinear data! It’s most obvious in their leftmost graph. Voracek and Fisher get full credit for showing scatterplots, but hey . . . they should try lowess next time! What’s really funny in the graph are the little dotted lines indicating inferential uncertainty in the regression lines–all under the assumption of linearity,

6 0.96438622 1480 andrew gelman stats-2012-09-02-“If our product is harmful . . . we’ll stop making it.”

7 0.96409667 313 andrew gelman stats-2010-10-03-A question for psychometricians

same-blog 8 0.95466042 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

9 0.95068157 1078 andrew gelman stats-2011-12-22-Tables as graphs: The Ramanujan principle

10 0.94312274 1690 andrew gelman stats-2013-01-23-When are complicated models helpful in psychology research and when are they overkill?

11 0.9399817 1506 andrew gelman stats-2012-09-21-Building a regression model . . . with only 27 data points

12 0.9394182 286 andrew gelman stats-2010-09-20-Are the Democrats avoiding a national campaign?

13 0.93836224 2103 andrew gelman stats-2013-11-16-Objects of the class “Objects of the class”

14 0.93813241 747 andrew gelman stats-2011-06-06-Research Directions for Machine Learning and Algorithms

15 0.93538356 2148 andrew gelman stats-2013-12-25-Spam!

16 0.93535674 421 andrew gelman stats-2010-11-19-Just chaid

17 0.93504882 428 andrew gelman stats-2010-11-24-Flawed visualization of U.S. voting maybe has some good features

18 0.93449301 544 andrew gelman stats-2011-01-29-Splitting the data

19 0.9342947 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

20 0.93222833 1390 andrew gelman stats-2012-06-23-Traditionalist claims that modern art could just as well be replaced by a “paint-throwing chimp”