andrew_gelman_stats andrew_gelman_stats-2010 andrew_gelman_stats-2010-317 knowledge-graph by maker-knowledge-mining

317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions


meta info for this blog

Source: html

Introduction: Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective proce
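A brief aside on that last point before the machine-generated summary: a forecaster is calibrated if, among all events assigned probability near p, the event actually occurs about a fraction p of the time. The sketch below is mine, not Kass’s or the original post’s; the forecasts and outcomes are simulated purely for illustration. It bins probability forecasts and compares the stated probability in each bin to the observed frequency, which is the kind of objective check the comment above has in mind:

```python
import numpy as np

def calibration_table(forecasts, outcomes, bins=10):
    """Compare stated probabilities to observed frequencies of the event.

    forecasts: probabilities in [0, 1] assigned to a collection of events
    outcomes:  0/1 indicators of whether each event actually occurred
    """
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (forecasts >= lo) & (forecasts < hi)
        if in_bin.any():
            rows.append((forecasts[in_bin].mean(),  # average stated probability
                         outcomes[in_bin].mean(),   # observed event frequency
                         int(in_bin.sum())))        # number of events in the bin
    return rows

# Illustrative data: outcomes drawn consistently with the stated probabilities,
# so the table should show stated and observed frequencies roughly agreeing.
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=5000)
y = rng.binomial(1, p)
for stated, observed, n in calibration_table(p, y):
    print(f"stated {stated:.2f}   observed {observed:.2f}   (n = {n})")
```

The subjective element mentioned above enters in the choice of which events share a bin, that is, the “reference sets” over which the empirical frequencies are computed.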


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. [sent-1, score-0.068]

2 I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. [sent-3, score-0.425]

3 Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. [sent-4, score-0.713]

4 I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. [sent-5, score-0.309]

5 In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. [sent-6, score-0.643]

6 I completely agree but would also add another anchoring point: calibration. [sent-7, score-0.081]

7 Calibration of probability assessments is an objective, not subjective process, although some subjectivity (or scientific judgment) is necessarily involved in the choice of events used in the calibration. [sent-8, score-0.552]

8 In that way, Bayesian probability calibration is closely connected to frequentist probability statements, in that both are conditional on “reference sets” of comparable events . [sent-9, score-0.812]

9 In a modern Bayesian approach, confidence intervals and hypothesis testing are both important but are not isomorphic; they represent two different steps of inference. [sent-12, score-0.336]

10 Confidence statements, or posterior intervals, are summaries of inference about parameters conditional on an assumed model. [sent-13, score-0.17]

11 Hypothesis testing–or, more generally, model checking–is the process of comparing observed data to replications under the model if it were true. [sent-14, score-0.186]

12 Kass discusses the role of sampling as a model for understanding statistical inference. [sent-18, score-0.199]

13 But sampling is more than a metaphor; it is crucial in many aspects of statistics. [sent-19, score-0.078]

14 The only two statements in Kass’s article that I clearly disagree with are the following two claims: “the only solid foundation for Bayesianism is subjective,” and “the most fundamental belief of any scientist is that the theoretical and real worlds are aligned.” [sent-23, score-0.233]

15 . . . Claims of the subjectivity of Bayesian inference have been much debated, and I am under no illusion that I can resolve them here. [sent-24, score-0.225]

16 . . . a person who is really worried about subjective model-building might profitably spend more effort thinking about assumptions inherent in additive models, logistic regressions, proportional hazards models, and the like. [sent-26, score-0.426]

17 Even the Wilcoxon test is based on assumptions . [sent-27, score-0.118]

18 Like Kass, I believe that philosophical debates can be a good thing, if they motivate us to think carefully about our unexamined assumptions. [sent-30, score-0.078]
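Sentences 10 and 11 above draw a distinction between posterior intervals (summaries conditional on an assumed model) and model checking (comparing observed data to replications under the model). The sketch below illustrates that second activity; it is my own minimal example, not anything from Kass’s article or the original post, and it uses a simplified plug-in fit rather than full posterior draws to keep it short:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed data: heavier-tailed than the normal model fitted below.
y_obs = rng.standard_t(df=3, size=100)

# Fit a normal model. For brevity, plug in point estimates of (mu, sigma);
# a full posterior predictive check would redraw them from the posterior
# for each replication.
mu_hat, sigma_hat = y_obs.mean(), y_obs.std(ddof=1)

# Test statistic chosen to be sensitive to tail behavior.
def T(data):
    return np.max(np.abs(data))

# Replicate datasets under the fitted model and compare the statistic on the
# replications to the same statistic on the observed data.
n_rep = 2000
T_rep = np.array([T(rng.normal(mu_hat, sigma_hat, size=y_obs.size)) for _ in range(n_rep)])
p_val = np.mean(T_rep >= T(y_obs))  # predictive p-value (plug-in version)

print(f"T(observed) = {T(y_obs):.2f}, predictive p = {p_val:.3f}")
```

A small predictive p-value here says only that the fitted model rarely replicates tails as extreme as the observed ones; it is a check on the model’s assumptions, a separate step from summarizing the posterior for mu and sigma under that model.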


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('kass', 0.486), ('calibration', 0.19), ('probability', 0.171), ('anchored', 0.169), ('pragmatism', 0.162), ('subjective', 0.147), ('subjectivity', 0.139), ('rob', 0.132), ('statements', 0.131), ('randomization', 0.13), ('statistical', 0.121), ('assumptions', 0.118), ('foundation', 0.102), ('process', 0.102), ('frequentist', 0.101), ('intervals', 0.095), ('events', 0.095), ('bayesian', 0.093), ('physical', 0.092), ('reference', 0.09), ('flips', 0.09), ('intolerant', 0.09), ('isomorphic', 0.09), ('confidence', 0.086), ('inference', 0.086), ('debated', 0.085), ('inclusive', 0.085), ('profitably', 0.085), ('conditional', 0.084), ('observed', 0.084), ('testing', 0.082), ('wilcoxon', 0.081), ('strands', 0.081), ('anchoring', 0.081), ('compatible', 0.081), ('rolls', 0.081), ('outset', 0.078), ('unexamined', 0.078), ('sampling', 0.078), ('hazards', 0.076), ('emphasizes', 0.076), ('metaphor', 0.076), ('pluralistic', 0.076), ('subfields', 0.074), ('elaboration', 0.074), ('hypothesis', 0.073), ('coin', 0.071), ('dominated', 0.069), ('controversies', 0.068), ('models', 0.067)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

Introduction: Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective proce

2 0.79483277 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

Introduction: Rob Kass’s article on statistical pragmatism is scheduled to appear in Statistical Science along with some discussions. Here are my comments. I agree with Rob Kass’s point that we can and should make use of statistical methods developed under different philosophies, and I am happy to take the opportunity to elaborate on some of his arguments. I’ll discuss the following: - Foundations of probability - Confidence intervals and hypothesis tests - Sampling - Subjectivity and belief - Different schools of statistics Foundations of probability. Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective process, although some subjectivity (or scientific judgment) is necessarily involved in the choice of events used

3 0.20450707 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

Introduction: Yes, checking calibration of probability forecasts is part of Bayesian statistics. At the end of this post are three figures from Chapter 1 of Bayesian Data Analysis illustrating empirical evaluation of forecasts. But first the background. Why am I bringing this up now? It’s because of something Larry Wasserman wrote the other day : One of the striking facts about [baseball/political forecaster Nate Silver's recent] book is the emphasis Silver places on frequency calibration. . . . Have no doubt about it: Nate Silver is a frequentist. For example, he says: One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated. I had some discussion with Larry in the comments section of h

4 0.17831041 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

5 0.16524178 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

Introduction: David Rohde writes: I have been thinking a lot lately about your Bayesian model checking approach. This is in part because I have been working on exploratory data analysis and wishing to avoid controversy and mathematical statistics we omitted model checking from our discussion. This is something that the refereeing process picked us up on and we ultimately added a critical discussion of null-hypothesis testing to our paper . The exploratory technique we discussed was essentially a 2D histogram approach, but we used Polya models as a formal model for the histogram. We are currently working on a new paper, and we are thinking through how or if we should do “confirmatory analysis” or model checking in the paper. What I find most admirable about your statistical work is that you clearly use the Bayesian approach to do useful applied statistical analysis. My own attempts at applied Bayesian analysis makes me greatly admire your applied successes. On the other hand it may be t

6 0.15845296 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

7 0.15675825 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

8 0.15415199 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

9 0.15313435 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

10 0.14873101 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

11 0.14602119 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

12 0.1400543 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

13 0.13728631 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

14 0.13672794 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

15 0.13650766 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

16 0.13601609 811 andrew gelman stats-2011-07-20-Kind of Bayesian

17 0.13323691 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

18 0.12969729 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

19 0.12694256 1469 andrew gelman stats-2012-08-25-Ways of knowing

20 0.1223692 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.197), (1, 0.166), (2, -0.071), (3, 0.013), (4, -0.132), (5, -0.016), (6, -0.077), (7, 0.095), (8, 0.065), (9, -0.109), (10, -0.063), (11, -0.001), (12, -0.026), (13, -0.018), (14, -0.029), (15, -0.001), (16, 0.001), (17, -0.017), (18, -0.005), (19, -0.039), (20, 0.014), (21, -0.017), (22, -0.013), (23, 0.021), (24, 0.024), (25, -0.014), (26, 0.018), (27, 0.029), (28, -0.038), (29, 0.034), (30, -0.027), (31, -0.021), (32, -0.022), (33, 0.014), (34, -0.059), (35, -0.002), (36, 0.015), (37, 0.044), (38, -0.001), (39, 0.027), (40, 0.012), (41, -0.05), (42, 0.089), (43, -0.032), (44, 0.009), (45, -0.014), (46, -0.052), (47, 0.025), (48, -0.054), (49, -0.057)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97121525 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

Introduction: Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective proce

2 0.92882955 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

Introduction: Rob Kass’s article on statistical pragmatism is scheduled to appear in Statistical Science along with some discussions. Here are my comments. I agree with Rob Kass’s point that we can and should make use of statistical methods developed under different philosophies, and I am happy to take the opportunity to elaborate on some of his arguments. I’ll discuss the following: - Foundations of probability - Confidence intervals and hypothesis tests - Sampling - Subjectivity and belief - Different schools of statistics Foundations of probability. Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective process, although some subjectivity (or scientific judgment) is necessarily involved in the choice of events used

3 0.79243129 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

4 0.79129934 1095 andrew gelman stats-2012-01-01-Martin and Liu: Probabilistic inference based on consistency of model with data

Introduction: What better way to start the new year than with some hard-core statistical theory? Ryan Martin and Chuanhai Liu send along a new paper on inferential models: Probability is a useful tool for describing uncertainty, so it is natural to strive for a system of statistical inference based on probabilities for or against various hypotheses. But existing probabilistic inference methods struggle to provide a meaningful interpretation of the probabilities across experiments in sufficient generality. In this paper we further develop a promising new approach based on what are called inferential models (IMs). The fundamental idea behind IMs is that there is an unobservable auxiliary variable that itself describes the inherent uncertainty about the parameter of interest, and that posterior probabilistic inference can be accomplished by predicting this unobserved quantity. We describe a simple and intuitive three-step construction of a random set of candidate parameter values, each being co

5 0.74994665 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

Introduction: Leading theoretical statistician Larry Wasserman in 2008 : Some of the greatest contributions of statistics to science involve adding additional randomness and leveraging that randomness. Examples are randomized experiments, permutation tests, cross-validation and data-splitting. These are unabashedly frequentist ideas and, while one can strain to fit them into a Bayesian framework, they don’t really have a place in Bayesian inference. The fact that Bayesian methods do not naturally accommodate such a powerful set of statistical ideas seems like a serious deficiency. To which I responded on the second-to-last paragraph of page 8 here . Larry Wasserman in 2013 : Some people say that there is no role for randomization in Bayesian inference. In other words, the randomization mechanism plays no role in Bayes’ theorem. But this is not really true. Without randomization, we can indeed derive a posterior for theta but it is highly sensitive to the prior. This is just a restat

6 0.7499451 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

7 0.72777742 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.72728986 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

9 0.72002065 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon

10 0.71872389 811 andrew gelman stats-2011-07-20-Kind of Bayesian

11 0.71106213 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

12 0.70458108 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

13 0.70349991 1469 andrew gelman stats-2012-08-25-Ways of knowing

14 0.69619721 1165 andrew gelman stats-2012-02-13-Philosophy of Bayesian statistics: my reactions to Wasserman

15 0.69301081 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

16 0.6924234 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

17 0.6920163 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

18 0.68315244 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

19 0.68206775 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

20 0.67650038 331 andrew gelman stats-2010-10-10-Bayes jumps the shark


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(8, 0.146), (9, 0.021), (11, 0.017), (16, 0.094), (17, 0.012), (18, 0.023), (21, 0.019), (23, 0.013), (24, 0.171), (35, 0.021), (47, 0.01), (53, 0.017), (65, 0.028), (84, 0.025), (86, 0.051), (99, 0.234)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94076502 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

Introduction: Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective proce

2 0.94048005 1822 andrew gelman stats-2013-04-24-Samurai sword-wielding Mormon bishop pharmaceutical statistician stops mugger

Introduction: Brett Keller points us to this feel-good story of the day: A Samurai sword-wielding Mormon bishop helped a neighbor woman escape a Tuesday morning attack by a man who had been stalking her. Kent Hendrix woke up Tuesday to his teenage son pounding on his bedroom door and telling him somebody was being mugged in front of their house. The 47-year-old father of six rushed out the door and grabbed the weapon closest to him — a 29-inch high carbon steel Samurai sword. . . . Hendrix, a pharmaceutical statistician, was one of several neighbors who came to the woman’s aid after she began yelling for help . . . Too bad the whole “statistician” thing got buried in the middle of the article. Fair enough, though: I don’t know what it takes to become a Mormon bishop, but I assume it’s more effort than what it takes to learn statistics.

3 0.93642282 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

Introduction: In an article published in 2001, Pearl wrote: I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases. Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. He elaborates: The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptom

4 0.93600011 916 andrew gelman stats-2011-09-18-Multimodality in hierarchical models

Introduction: Jim Hodges posted a note to the Bugs mailing list that I thought could be of more general interest: Is multi-modality a common experience? I [Hodges] think the answer is “nobody knows in any generality”. Here are some examples of bimodality that certainly do *not* involve the kind of labeling problems that arise in mixture models. The only systematic study of multimodality I know of is Liu J, Hodges JS (2003). Posterior bimodality in the balanced one-way random effects model. J.~Royal Stat.~Soc., Ser.~B, 65:247-255. The surprise of this paper is that in the simplest possible hierarchical model (analyzed using the standard inverse-gamma priors for the two variances), bimodality occurs quite readily, although it is much less common to have two modes that are big enough so that you’d actually get a noticeable fraction of MCMC draws from both of them. Because the restricted likelihood (= the marginal posterior for the two variances, if you’ve put flat priors on them) is

5 0.92331409 1128 andrew gelman stats-2012-01-19-Sharon Begley: Worse than Stephen Jay Gould?

Introduction: Commenter Tggp links to a criticism of science journalist Sharon Begley by science journalist Matthew Hutson. I learned of this dispute after reporting that Begley had received the American Statistical Association’s Excellence in Statistical Reporting Award, a completely undeserved honor, if Hutson is to be believed. The two journalists have somewhat similar profiles: Begley was science editor at Newsweek (she’s now at Reuters) and author of “Train Your Mind, Change Your Brain: How a New Science Reveals Our Extraordinary Potential to Transform Ourselves,” and Hutson was news editor at Psychology Today and wrote the similarly self-helpy-titled, “The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane.” Hutson writes : Psychological Science recently published a fascinating new study on jealousy. I was interested to read Newsweek’s 1300-word article covering the research by their science editor, Sharon Begley. But part-way through the article, I

6 0.91026211 994 andrew gelman stats-2011-11-06-Josh Tenenbaum presents . . . a model of folk physics!

7 0.90545154 575 andrew gelman stats-2011-02-15-What are the trickiest models to fit?

8 0.90078467 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

9 0.89806515 85 andrew gelman stats-2010-06-14-Prior distribution for design effects

10 0.89610517 1056 andrew gelman stats-2011-12-13-Drawing to Learn in Science

11 0.89517158 1378 andrew gelman stats-2012-06-13-Economists . . .

12 0.89448273 647 andrew gelman stats-2011-04-04-Irritating pseudo-populism, backed up by false statistics and implausible speculations

13 0.89401138 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

14 0.88329256 1221 andrew gelman stats-2012-03-19-Whassup with deviance having a high posterior correlation with a parameter in the model?

15 0.88194102 1019 andrew gelman stats-2011-11-19-Validation of Software for Bayesian Models Using Posterior Quantiles

16 0.8803907 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

17 0.88000089 1206 andrew gelman stats-2012-03-10-95% intervals that I don’t believe, because they’re from a flat prior I don’t believe

18 0.8795191 198 andrew gelman stats-2010-08-11-Multilevel modeling in R on a Mac

19 0.87843406 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

20 0.87807894 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work