andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1469 knowledge-graph by maker-knowledge-mining

1469 andrew gelman stats-2012-08-25-Ways of knowing


metadata for this blog

Source: html

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed pleasant, and there is a long history of Bayesians studying it in theory and in toy problems.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. [sent-1, score-0.606]

2 In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” [sent-5, score-0.244]

3 It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. [sent-7, score-0.368]

4 This sort of guarantee is indeed pleasant, and there is a long history of Bayesians studying it in theory and in toy problems. [sent-9, score-0.649]

5 Arguably, many of the examples in Bayesian Data Analysis (for example, the 8 schools example in chapter 5) can be seen as toy problems. [sent-10, score-0.507]

6 As I wrote earlier, I don’t think theoretical proofs or toy problems are useless, I just find applied examples to be more convincing. [sent-11, score-0.673]

7 Bayesian methods have moved from plaything to practical tool. Go back in time 50 years or so and read the discussions of Bayesian inference back then. [sent-21, score-0.448]

8-9 At that time, there were some applied successes (for example, I. J. Good repeatedly referred to his successes using Bayesian methods to break codes in the second world war) but most of the arguments in favor of Bayes were theoretical. [sent-22, score-0.205] [sent-24, score-0.330]

10 The whole discussion then shifts to whether the model is true, or, better, how the methods perform under the (essentially certain) condition that the model’s assumptions are violated, which leads into the tangle of various theorems about robustness or lack thereof. [sent-26, score-0.546]

11 50 years ago one of Bayesianism’s major assets was its theoretical coherence, with various theorems demonstrating that, under the right assumptions, Bayesian inference is optimal. [sent-27, score-0.524]

12 Bayesians also spent a lot of time writing about toy problems (for example, Basu’s example of the weights of elephants). [sent-28, score-0.543]

13 To me, the key turning points occurred around 1970-1980, when statisticians such as Lindley, Novick, Smith, Dempster, and Rubin applied hierarchical Bayesian modeling to solve problems in education research that could not be easily attacked otherwise. [sent-31, score-0.378]

14 The key in any case was to use partial pooling to learn about groups for which there was only a small amount of local data. [sent-33, score-0.21]

15 ... with the next step folding this approach back into the Bayesian formalism via hierarchical modeling. [sent-37, score-0.321]

16 This is a pattern that has happened with just about every successful statistical method I can think of: an interplay between theory and practice. [sent-41, score-0.251]

17 I think that’s right—Markov chain simulation methods indeed allow us to get out of the pick-your-model-from-the-cookbook trap—but I think the hierarchical models of the 1970s (which were fit using various approximations, no MCMC) showed the way. [sent-44, score-0.46]

18 To get back to the discussion from last month: Of course Bayesian inference has “theoretical guarantees” of the sort that our correspondent Barenboim was looking for. [sent-45, score-0.213]

19 Back 50 years ago, this theoretical guarantee was almost all that Bayesian statisticians had to offer. [sent-46, score-0.376]

20 Bayesian inference seemed like a theoretical toy and was considered by many leading statisticians as somewhere between a joke and a menace, but the hardcore Bayesians persisted and got some useful methods out of it. [sent-51, score-1.035]
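Sentences 13-15 above describe the core mechanism of hierarchical modeling: partial pooling, in which each group's estimate is pulled toward the overall mean, with more shrinkage for groups that have less local data. Below is a minimal numeric sketch of that idea, using hypothetical group estimates, assumed known standard errors, and a fixed between-group scale tau; it is a simple precision-weighted shrinkage calculation, not the full hierarchical Bayesian fit described in the post.

```python
# A minimal sketch of partial pooling via precision-weighted shrinkage.
# All numbers are hypothetical, and the between-group scale tau is assumed
# fixed; this is not the full hierarchical Bayesian fit described in the post.
import numpy as np

y = np.array([28.0, 8.0, -3.0, 7.0, 18.0])        # hypothetical group estimates
sigma = np.array([15.0, 10.0, 16.0, 11.0, 10.0])  # hypothetical standard errors
tau = 5.0                                         # assumed between-group sd

# Overall mean, weighting each group by its total precision.
w = 1.0 / (sigma**2 + tau**2)
mu_hat = np.sum(w * y) / np.sum(w)

# Partial pooling: each group is pulled toward mu_hat, and groups with
# larger standard errors (less local data) are pulled harder.
shrinkage = (1.0 / tau**2) / (1.0 / tau**2 + 1.0 / sigma**2)
theta_hat = shrinkage * mu_hat + (1.0 - shrinkage) * y

for j, (raw, pooled) in enumerate(zip(y, theta_hat)):
    print(f"group {j}: raw estimate {raw:6.1f} -> partially pooled {pooled:6.1f}")
```

Estimating tau from the data, rather than fixing it, is what the hierarchical models of the 1970s added, first via approximations and later via MCMC, as sentence 17 above notes.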
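The per-sentence scores above were generated by a tf-idf model. The exact pipeline behind this page is not specified, but a common approach is to score each sentence by the summed tf-idf weight of its words; here is a minimal sketch of that idea, assuming scikit-learn's TfidfVectorizer and a few placeholder sentences.

```python
# Minimal sketch: rank sentences by the summed tf-idf weight of their words.
# Assumes scikit-learn and placeholder sentences; the actual scoring pipeline
# used to build this page is unknown.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Bayesian methods have moved from plaything to practical tool.",
    "Bayesians also spent a lot of time writing about toy problems.",
    "The key was to use partial pooling to learn about small groups.",
]

vec = TfidfVectorizer()
X = vec.fit_transform(sentences)            # rows: sentences, columns: vocabulary terms
scores = np.asarray(X.sum(axis=1)).ravel()  # one summed tf-idf score per sentence

for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")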


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('toy', 0.34), ('bayesian', 0.262), ('theory', 0.19), ('bayesians', 0.189), ('barenboim', 0.168), ('novick', 0.168), ('theoretical', 0.166), ('lindley', 0.158), ('methods', 0.146), ('pooling', 0.145), ('successes', 0.13), ('inference', 0.124), ('assumptions', 0.122), ('hierarchical', 0.12), ('guarantee', 0.119), ('example', 0.111), ('problems', 0.092), ('statisticians', 0.091), ('guarantees', 0.09), ('back', 0.089), ('theorems', 0.086), ('validation', 0.085), ('predictions', 0.081), ('success', 0.079), ('approximations', 0.079), ('demonstrating', 0.078), ('applied', 0.075), ('etc', 0.071), ('models', 0.07), ('various', 0.07), ('external', 0.07), ('model', 0.066), ('partial', 0.065), ('simulations', 0.064), ('assumed', 0.064), ('method', 0.061), ('estimates', 0.058), ('schools', 0.056), ('basu', 0.056), ('hardcore', 0.056), ('menace', 0.056), ('formalism', 0.056), ('folding', 0.056), ('persisted', 0.056), ('tangle', 0.056), ('hold', 0.056), ('fit', 0.054), ('application', 0.054), ('favor', 0.054), ('month', 0.054)]
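The word weights above are the tf-idf scores for this post, and the similarity values in the list below are consistent with cosine similarity between tf-idf document vectors. Here is a minimal sketch of that computation, assuming scikit-learn and hypothetical mini-documents standing in for the blog posts; the real corpus and preprocessing are not shown on this page.

```python
# Minimal sketch: tf-idf document vectors plus cosine similarity between posts.
# Assumes scikit-learn and hypothetical mini-documents, not the actual corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "ways-of-knowing": "hierarchical bayesian methods toy problems theory guarantees",
    "hierarchical-modeling": "hierarchical modeling causal inference toy problem guarantees",
    "nobel-prize": "statistics nobel prize fields medal awards",
}

vec = TfidfVectorizer()
X = vec.fit_transform(docs.values())   # one tf-idf row vector per post

# Similarity of the first post to every post (including itself, which scores ~1).
sims = cosine_similarity(X[0], X)[0]
for name, s in zip(docs, sims):
    print(f"{s:.3f}  {name}")
```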

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 1469 andrew gelman stats-2012-08-25-Ways of knowing


2 0.4091779 1425 andrew gelman stats-2012-07-23-Examples of the use of hierarchical modeling to generalize to new settings

Introduction: In a link to our back-and-forth on causal inference and the use of hierarchical models to bridge between different inferential settings, Elias Bareinboim (a computer scientist who is working with Judea Pearl) writes : In the past week, I have been engaged in a discussion with Andrew Gelman and his blog readers regarding causal inference, selection bias, confounding, and generalizability. I was trying to understand how his method which he calls “hierarchical modeling” would handle these issues and what guarantees it provides. . . . If anyone understands how “hierarchical modeling” can solve a simple toy problem (e.g., M-bias, control of confounding, mediation, generalizability), please share with us. In his post, Bareinboim raises a direct question about hierarchical modeling and also indirectly brings up larger questions about what is convincing evidence when evaluating a statistical method. As I wrote earlier, Bareinboim believes that “The only way investigators can decide w

3 0.27094066 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

4 0.25743663 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

5 0.23783608 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been obse

6 0.22942378 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

7 0.22446153 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

8 0.22087656 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

9 0.21279022 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

10 0.20076647 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

11 0.19890803 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

12 0.19805454 1529 andrew gelman stats-2012-10-11-Bayesian brains?

13 0.1980179 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

14 0.1912387 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

15 0.18682069 1482 andrew gelman stats-2012-09-04-Model checking and model understanding in machine learning

16 0.1837301 1859 andrew gelman stats-2013-05-16-How do we choose our default methods?

17 0.18235895 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

18 0.18033409 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

19 0.17941402 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

20 0.17674713 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.318), (1, 0.224), (2, -0.123), (3, 0.064), (4, -0.149), (5, 0.056), (6, -0.104), (7, 0.074), (8, 0.077), (9, -0.041), (10, -0.013), (11, -0.047), (12, -0.031), (13, 0.018), (14, 0.049), (15, 0.054), (16, 0.039), (17, -0.005), (18, -0.034), (19, 0.026), (20, -0.013), (21, 0.004), (22, -0.012), (23, 0.069), (24, 0.039), (25, 0.016), (26, -0.007), (27, 0.025), (28, -0.009), (29, 0.027), (30, 0.019), (31, 0.003), (32, 0.037), (33, -0.03), (34, -0.011), (35, -0.048), (36, -0.061), (37, -0.02), (38, -0.022), (39, -0.001), (40, -0.045), (41, 0.06), (42, -0.029), (43, -0.022), (44, -0.019), (45, -0.066), (46, 0.002), (47, 0.006), (48, -0.0), (49, 0.031)]
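The weights above are this post's loadings on the 50 topics of an LSI (latent semantic indexing) model, i.e., a low-rank decomposition of the tf-idf term-document matrix. Here is a minimal sketch of how such per-document topic weights might be produced, assuming scikit-learn's TruncatedSVD, a tiny hypothetical corpus, and only two components instead of 50.

```python
# Minimal sketch: LSI topic weights via truncated SVD of a tf-idf matrix.
# Assumes scikit-learn, a tiny hypothetical corpus, and 2 components
# (the model behind this page evidently uses 50).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "bayesian inference hierarchical models toy problems guarantees",
    "causal inference hierarchical modeling selection bias guarantees",
    "nobel prize statistics awards fields medal",
    "philosophy of bayesian statistics induction deduction",
]

X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(X)   # rows: documents, columns: LSI topics

for i, weights in enumerate(topic_weights):
    print(f"doc {i}:", [round(float(w), 3) for w in weights])
```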

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98604769 1469 andrew gelman stats-2012-08-25-Ways of knowing


2 0.8929351 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

3 0.89181119 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

Introduction: From 2006 : Eric Archer forwarded this document by Nick Freemantle, “The Reverend Bayes—was he really a prophet?”, in the Journal of the Royal Society of Medicine: Does [Bayes's] contribution merit the enthusiasms of his followers? Or is his legacy overhyped? . . . First, Bayesians appear to have an absolute right to disapprove of any conventional approach in statistics without offering a workable alternative—for example, a colleague recently stated at a meeting that ‘. . . it is OK to have multiple comparisons because Bayesians’ don’t believe in alpha spending’. . . . Second, Bayesians appear to build an army of straw men—everything it seems is different and better from a Bayesian perspective, although many of the concepts seem remarkably familiar. For example, a very well known Bayesian statistician recently surprised the audience with his discovery of the P value as a useful Bayesian statistic at a meeting in Birmingham. Third, Bayesians possess enormous enthusiasm fo

4 0.88281047 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been obse

5 0.88001066 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

Introduction: Xian and I respond to the four discussants of our paper, “Not only defended but also applied”: The perceived absurdity of Bayesian inference.” Here’s the abstract of our rejoinder : Over the years we have often felt frustration, both at smug Bayesians—in particular, those who object to checking of the fit of model to data, either because all Bayesian models are held to be subjective and thus unquestioned (an odd combination indeed, but that is the subject of another article)—and angry anti-Bayesians who, as we wrote in our article, strain on the gnat of the prior distribution while swallowing the camel that is the likelihood. The present article arose from our memory of a particularly intemperate anti-Bayesian statement that appeared in Feller’s beautiful and classic book on probability theory. We felt that it was worth exploring the very extremeness of Feller’s words, along with similar anti-Bayesian remarks by others, in order to better understand the background underlying contr

6 0.871387 1781 andrew gelman stats-2013-03-29-Another Feller theory

7 0.85750258 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

8 0.85208136 205 andrew gelman stats-2010-08-13-Arnold Zellner

9 0.84772247 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

10 0.84756523 1529 andrew gelman stats-2012-10-11-Bayesian brains?

11 0.84613091 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

12 0.8449176 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

13 0.8422913 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

14 0.84114093 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

15 0.83870208 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

16 0.83783329 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

17 0.82525557 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

18 0.82493669 1280 andrew gelman stats-2012-04-24-Non-Bayesian analysis of Bayesian agents?

19 0.8234334 1091 andrew gelman stats-2011-12-29-Bayes in astronomy

20 0.82342112 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.021), (15, 0.037), (16, 0.063), (21, 0.038), (24, 0.117), (39, 0.028), (63, 0.017), (77, 0.013), (86, 0.035), (99, 0.429)]
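The pairs above are (topicId, topicWeight) for an LDA (latent Dirichlet allocation) model: this post's estimated topic proportions, with most of the weight on topics 24 and 99. Here is a minimal sketch of how such proportions might be computed, assuming scikit-learn's LatentDirichletAllocation over raw term counts, a tiny hypothetical corpus, and only three topics.

```python
# Minimal sketch: per-document LDA topic proportions from raw term counts.
# Assumes scikit-learn, a tiny hypothetical corpus, and 3 topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "bayesian inference hierarchical models toy problems guarantees",
    "causal inference hierarchical modeling selection bias",
    "nobel prize statistics awards fields medal",
    "philosophy of bayesian statistics induction deduction",
]

counts = CountVectorizer().fit_transform(docs)   # LDA expects counts, not tf-idf
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)           # each row sums to 1

for i, theta in enumerate(doc_topics):
    print(f"doc {i}:", [round(float(t), 3) for t in theta])
```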

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99625033 1469 andrew gelman stats-2012-08-25-Ways of knowing


2 0.99428856 2151 andrew gelman stats-2013-12-27-Should statistics have a Nobel prize?

Introduction: Xiao-Li says yes: The most compelling reason for having highly visible awards in any field is to enhance its ability to attract future talent. Virtually all the media and public attention our profession received in recent years has been on the utility of statistics in all walks of life. We are extremely happy for and proud of this recognition—it is long overdue. However, the media and public have given much more attention to the Fields Medal than to the COPSS Award, even though the former has hardly been about direct or even indirect impact on everyday life. Why this difference? . . . these awards arouse media and public interest by featuring how ingenious the awardees are and how difficult the problems they solved, much like how conquering Everest bestows admiration not because the admirers care or even know much about Everest itself but because it represents the ultimate physical feat. In this sense, the biggest winner of the Fields Medal is mathematics itself: enticing the brig

3 0.99304628 2279 andrew gelman stats-2014-04-02-Am I too negative?

Introduction: For background, you can start by reading my recent article, Is It Possible to Be an Ethicist Without Being Mean to People? and then a blog post, Quality over Quantity , by John Cook, who writes: At one point [Ed] Tufte spoke more generally and more personally about pursuing quality over quantity. He said most papers are not worth reading and that he learned early on to concentrate on the great papers, maybe one in 500, that are worth reading and rereading rather than trying to “keep up with the literature.” He also explained how over time he has concentrated more on showcasing excellent work than on criticizing bad work. You can see this in the progression from his first book to his latest. (Criticizing bad work is important too, but you’ll have to read his early books to find more of that. He won’t spend as much time talking about it in his course.) That reminded me of Jesse Robbins’ line: “Don’t fight stupid. You are better than that. Make more awesome.” This made me stop an

4 0.99266291 2337 andrew gelman stats-2014-05-18-Never back down: The culture of poverty and the culture of journalism

Introduction: Ta-Nehisi Coates recently published a fascinating column on the “culture of poverty,” in particular focusing on the idea that behavior that is rational and adaptive in some settings is not so appropriate in others: The set of practices required for a young man to secure his safety on the streets of his troubled neighborhood are not the same as those required to place him on an honor roll . . . The way to guide him through this transition is not to insult his native language. . . . For black men like us, the feeling of having something to lose, beyond honor and face, is foreign. We grew up in communities—New York, Baltimore, Chicago—where the Code of the Streets was the first code we learned. Respect and reputation are everything there. These values are often denigrated by people who have never been punched in the face. But when you live around violence there is no opting out. A reputation for meeting violence with violence is a shield. That protection increases when you are part

5 0.9924407 757 andrew gelman stats-2011-06-10-Controversy over the Christakis-Fowler findings on the contagion of obesity

Introduction: Nicholas Christakis and James Fowler are famous for finding that obesity is contagious. Their claims, which have been received with both respect and skepticism (perhaps we need a new word for this: “respecticism”?) are based on analysis of data from the Framingham heart study, a large longitudinal public-health study that happened to have some social network data (for the odd reason that each participant was asked to provide the name of a friend who could help the researchers locate them if they were to move away during the study period. The short story is that if your close contact became obese, you were likely to become obese also. The long story is a debate about the reliability of this finding (that is, can it be explained by measurement error and sampling variability) and its causal implications. This sort of study is in my wheelhouse, as it were, but I have never looked at the Christakis-Fowler work in detail. Thus, my previous and current comments are more along the line

6 0.99238718 2245 andrew gelman stats-2014-03-12-More on publishing in journals

7 0.99238533 1832 andrew gelman stats-2013-04-29-The blogroll

8 0.99225271 750 andrew gelman stats-2011-06-07-Looking for a purpose in life: Update on that underworked and overpaid sociologist whose “main task as a university professor was self-cultivation”

9 0.9921236 692 andrew gelman stats-2011-05-03-“Rationality” reinforces, does not compete with, other models of behavior

10 0.99163461 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

11 0.99162424 2236 andrew gelman stats-2014-03-07-Selection bias in the reporting of shaky research

12 0.99160415 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

13 0.9914782 2170 andrew gelman stats-2014-01-13-Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

14 0.99136418 71 andrew gelman stats-2010-06-07-Pay for an A?

15 0.99129218 1722 andrew gelman stats-2013-02-14-Statistics for firefighters: update

16 0.9912802 2158 andrew gelman stats-2014-01-03-Booze: Been There. Done That.

17 0.99094677 793 andrew gelman stats-2011-07-09-R on the cloud

18 0.99090892 1289 andrew gelman stats-2012-04-29-We go to war with the data we have, not the data we want

19 0.99069166 1807 andrew gelman stats-2013-04-17-Data problems, coding errors…what can be done?

20 0.9906581 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research