andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1572 knowledge-graph by maker-knowledge-mining

1572 andrew gelman stats-2012-11-10-I don’t like this cartoon


meta info for this blog

Source: html

Introduction: Some people pointed me to this: I am happy to see statistical theory and methods be a topic in popular culture, and of course I’m glad that, contra Feller, the Bayesian is presented as the hero this time, but . . . . I think the lower-left panel of the cartoon unfairly misrepresents frequentist statisticians. Frequentist statisticians recognize many statistical goals. Point estimates trade off bias and variance. Interval estimates have the goal of achieving nominal coverage and the goal of being informative. Tests have the goals of calibration and power. Frequentists know that no single principle applies in all settings, and this is a setting where this particular method is clearly inappropriate. All statisticians use prior information in their statistical analysis. Non-Bayesians express their prior information not through a probability distribution on parameters but rather through their choice of methods. I think this non-Bayesian attitude is too restrictive, but in this case a small amount of reflection would reveal the inappropriateness of this procedure for this example.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Some people pointed me to this: I am happy to see statistical theory and methods be a topic in popular culture, and of course I’m glad that, contra Feller, the Bayesian is presented as the hero this time, but . [sent-1, score-0.214]

2 I think the lower-left panel of the cartoon unfairly misrepresents frequentist statisticians. [sent-5, score-0.948]

3 Interval estimates have the goal of achieving nominal coverage and the goal of being informative. [sent-8, score-0.319]
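
A nominal-coverage claim can be checked by simulation. Below is a minimal sketch, assuming a normal model with known sigma and arbitrary illustrative values for the true mean, sample size, and number of replications (none of these come from the post):

# Check the frequentist coverage of a textbook 95% interval for a normal
# mean with known sigma. Purely illustrative values.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, n, reps = 2.0, 1.0, 20, 10000
covered = 0
for _ in range(reps):
    y = rng.normal(mu_true, sigma, size=n)
    half_width = 1.96 * sigma / np.sqrt(n)
    lo, hi = y.mean() - half_width, y.mean() + half_width
    covered += (lo <= mu_true <= hi)
print(covered / reps)  # should come out close to the nominal 0.95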

4 Frequentists know that no single principle applies in all settings, and this is a setting where this particular method is clearly inappropriate. [sent-10, score-0.259]

5 All statisticians use prior information in their statistical analysis. [sent-11, score-0.347]

6 Non-Bayesians express their prior information not through a probability distribution on parameters but rather through their choice of methods. [sent-12, score-0.265]

7 I think this non-Bayesian attitude is too restrictive, but in this case a small amount of reflection would reveal the inappropriateness of this procedure for this example. [sent-13, score-0.28]

8 In this comment , Phil defends the cartoon, pointing out that the procedure it describes is equivalent to the classical hypothesis-testing approach that is indeed widely used. [sent-14, score-0.396]

9 Phil (and, by extension, the cartoonist) have a point, but I don’t think a sensible statistician would use this method to estimate such a rare probability. [sent-15, score-0.283]

10 An analogy from a Bayesian perspective would be to use the probability estimate (y+1)/(n+2) with y=0 and n=36 for an extremely unlikely event, for example estimating the rate of BSE infection in a population as 1/38 based on the data that 0 people out of a random sample of 36 are infected. [sent-16, score-0.55]
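
To make the arithmetic of that analogy explicit: (y+1)/(n+2) is the posterior mean under a uniform Beta(1,1) prior, and with y=0 and n=36 it gives 1/38. The sketch below also shows, under an assumed informative prior (the Beta(1, 10000) choice is purely illustrative, not from the post), how encoding the knowledge that the event is rare keeps the estimate near zero.

# (y+1)/(n+2) is the posterior mean of a binomial probability under a
# uniform Beta(1,1) prior. With y=0 infected out of n=36 sampled,
# it says 1/38, far too high for an extremely rare event.
y, n = 0, 36
flat_prior_estimate = (y + 1) / (n + 2)
print(flat_prior_estimate)           # 0.0263... = 1/38

# An informative prior encoding "this event is very rare", here an
# illustrative Beta(1, 10000), keeps the estimate near zero instead.
a, b = 1, 10000
informative_estimate = (y + a) / (n + a + b)
print(informative_estimate)          # about 1e-4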

11 The flat prior is inappropriate in a context where the probability is very low; similarly the test with 1/36 chance of error is inappropriate in a classical setting where the true positive rate is extremely low. [sent-17, score-1.52]

12 In the context of probability mathematics, textbooks carefully explain that p(A|B) ≠ p(B|A). [sent-19, score-0.505]
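
Sentences 11 and 12 can be made concrete with a small Bayes-rule calculation: with the cartoon's 1/36 error rate and a tiny base rate, p(event | positive result) is nowhere near p(positive result | no event). The prior probability below is an arbitrary illustrative number, not a claim about the sun:

# The cartoon's detector gives a wrong answer with probability 1/36.
# Even so, when the event being tested for is extremely unlikely,
# almost every "yes" is a false positive: p(A|B) is nothing like p(B|A).
prior = 1e-9                      # P(sun exploded): arbitrary illustrative value
p_yes_given_true = 35 / 36        # detector tells the truth
p_yes_given_false = 1 / 36        # detector lies

posterior = (prior * p_yes_given_true) / (
    prior * p_yes_given_true + (1 - prior) * p_yes_given_false
)
print(posterior)                  # roughly 3.5e-8, nothing like 35/36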

13 Still, I think the cartoon as a whole is unfair in that it compares a sensible Bayesian to a frequentist statistician who blindly follows the advice of shallow textbooks. [sent-21, score-0.941]

14 As an aside, I also think the lower-right panel is misleading. [sent-22, score-0.193]

15 A betting decision depends not just on probabilities but also on utilities. [sent-23, score-0.079]

16 Hence anyone, Bayesian or not, should be willing to bet $50 that the sun has not exploded. [sent-25, score-0.142]
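
The expected-value arithmetic behind that bet, as a sketch (the tiny probability is an illustrative placeholder):

# Expected value of accepting the $50 bet that the sun has not exploded.
# (If the sun really had exploded the money would be worthless anyway,
# which is part of the joke; here it is treated as a plain monetary bet.)
p_exploded = 1e-9                 # illustrative placeholder probability
expected_value = (1 - p_exploded) * 50 + p_exploded * (-50)
print(expected_value)             # essentially +50, so take the bet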


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('cartoon', 0.335), ('frequentist', 0.229), ('rate', 0.201), ('panel', 0.193), ('classical', 0.189), ('textbooks', 0.181), ('probability', 0.161), ('error', 0.143), ('sun', 0.142), ('inappropriate', 0.13), ('sensible', 0.129), ('statisticians', 0.125), ('low', 0.123), ('statistical', 0.118), ('test', 0.113), ('phil', 0.11), ('bayesian', 0.106), ('procedure', 0.105), ('prior', 0.104), ('inappropriateness', 0.102), ('misrepresents', 0.102), ('confine', 0.102), ('defends', 0.102), ('extremely', 0.099), ('principle', 0.096), ('contra', 0.096), ('ensuing', 0.096), ('conditional', 0.095), ('setting', 0.089), ('restrictive', 0.089), ('frequentists', 0.089), ('infection', 0.089), ('unfairly', 0.089), ('type', 0.087), ('blindly', 0.086), ('positives', 0.082), ('shallow', 0.082), ('explain', 0.082), ('context', 0.081), ('positive', 0.08), ('statistician', 0.08), ('achieving', 0.08), ('goal', 0.08), ('applicability', 0.079), ('betting', 0.079), ('nominal', 0.079), ('typically', 0.078), ('feller', 0.077), ('method', 0.074), ('reflection', 0.073)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon


2 0.24395683 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

3 0.19328116 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

Introduction: Yes, checking calibration of probability forecasts is part of Bayesian statistics. At the end of this post are three figures from Chapter 1 of Bayesian Data Analysis illustrating empirical evaluation of forecasts. But first the background. Why am I bringing this up now? It’s because of something Larry Wasserman wrote the other day: One of the striking facts about [baseball/political forecaster Nate Silver's recent] book is the emphasis Silver places on frequency calibration. . . . Have no doubt about it: Nate Silver is a frequentist. For example, he says: One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated. I had some discussion with Larry in the comments section of h
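
The calibration check Silver describes can be sketched in a few lines on synthetic forecasts (the data-generating choices and the ten-bin grouping are assumptions for illustration, not from the post):

# Frequency-calibration check on synthetic forecasts: of all the times the
# forecaster said "about a 40% chance of rain", how often did it rain?
import numpy as np

rng = np.random.default_rng(1)
forecasts = rng.uniform(0, 1, size=10000)            # stated probabilities
rain = rng.uniform(0, 1, size=10000) < forecasts     # calibrated by construction

bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (forecasts >= lo) & (forecasts < hi)
    if in_bin.any():
        print(f"said {lo:.1f}-{hi:.1f}: rained {rain[in_bin].mean():.2f} of the time")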

4 0.19229335 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

5 0.18541846 247 andrew gelman stats-2010-09-01-How does Bayes do it?

Introduction: I received the following message from a statistician working in industry: I am studying your paper, A Weakly Informative Default Prior Distribution for Logistic and Other Regression Models . I am not clear why the Bayesian approaches with some priors can usually handle the issue of nonidentifiability or can get stable estimates of parameters in model fit, while the frequentist approaches cannot. My reply: 1. The term “frequentist approach” is pretty general. “Frequentist” refers to an approach for evaluating inferences, not a method for creating estimates. In particular, any Bayes estimate can be viewed as a frequentist inference if you feel like evaluating its frequency properties. In logistic regression, maximum likelihood has some big problems that are solved with penalized likelihood–equivalently, Bayesian inference. A frequentist can feel free to consider the prior as a penalty function rather than a probability distribution of parameters. 2. The reason our approa
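
A sketch of the prior-as-penalty reading in point 1, using a toy separated dataset and a simple normal penalty. Both are illustrative assumptions; the paper under discussion proposes a specific weakly informative default (a Cauchy-type prior), and this sketch only shows the general idea that a penalty stabilizes the maximum-likelihood fit:

# With perfectly separated data, maximum likelihood pushes the logistic
# regression slope toward infinity; a prior, read as a penalty on the
# log-likelihood, gives a stable finite estimate. Toy data and the
# normal(0, 2.5^2) penalty are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])                  # perfectly separated

def neg_log_posterior(beta, penalized):
    eta = beta[0] + beta[1] * x
    log_lik = np.sum(y * eta - np.logaddexp(0.0, eta))   # stable Bernoulli log-likelihood
    log_prior = -np.sum(beta ** 2) / (2 * 2.5 ** 2) if penalized else 0.0
    return -(log_lik + log_prior)

mle = minimize(neg_log_posterior, np.zeros(2), args=(False,))
pen = minimize(neg_log_posterior, np.zeros(2), args=(True,))
print(mle.x)   # slope drifts off toward infinity, stopped only by the optimizer's tolerance
print(pen.x)   # slope stays at a moderate, finite value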

6 0.18310404 534 andrew gelman stats-2011-01-24-Bayes at the end

7 0.15512326 1941 andrew gelman stats-2013-07-16-Priors

8 0.14746873 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

9 0.14548384 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

10 0.14386505 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

11 0.14385933 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

12 0.14147198 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

13 0.13786167 56 andrew gelman stats-2010-05-28-Another argument in favor of expressing conditional probability statements using the population distribution

14 0.13412741 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

15 0.13379182 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

16 0.13289334 1605 andrew gelman stats-2012-12-04-Write This Book

17 0.13213769 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

18 0.13172781 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

19 0.13125564 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

20 0.12912661 1032 andrew gelman stats-2011-11-28-Does Avastin work on breast cancer? Should Medicare be paying for it?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.222), (1, 0.165), (2, 0.004), (3, -0.018), (4, -0.095), (5, -0.054), (6, -0.01), (7, 0.13), (8, -0.012), (9, -0.127), (10, -0.084), (11, -0.032), (12, 0.032), (13, 0.021), (14, -0.04), (15, -0.015), (16, -0.034), (17, 0.017), (18, 0.008), (19, -0.041), (20, 0.049), (21, 0.043), (22, 0.042), (23, 0.067), (24, -0.011), (25, -0.031), (26, -0.011), (27, 0.028), (28, -0.01), (29, -0.008), (30, 0.035), (31, 0.07), (32, 0.014), (33, 0.002), (34, 0.003), (35, -0.017), (36, -0.008), (37, 0.013), (38, -0.017), (39, -0.005), (40, 0.009), (41, -0.024), (42, -0.038), (43, -0.048), (44, -0.038), (45, 0.021), (46, 0.038), (47, 0.031), (48, -0.051), (49, -0.012)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98241097 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon


2 0.81970149 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

Introduction: David Hogg pointed me to this post by Larry Wasserman: 1. The Horvitz-Thompson estimator satisfies the following condition: for every   , where   — the parameter space — is the set of all functions  . (There are practical improvements to the Horvitz-Thompson estimator that we discussed in our earlier posts but we won’t revisit those here.) 2. A Bayes estimator requires a prior   for  . In general, if   is not a function of   then (1) will not hold. . . . 3. If you let   be a function of  , (1) still, in general, does not hold. 4. If you make   a function of   in just the right way, then (1) will hold. . . . There is nothing wrong with doing this, but in our opinion this is not in the spirit of Bayesian inference. . . . 7. This example is only meant to show that Bayesian estimators do not necessarily have good frequentist properties. This should not be surprising. There is no reason why we should in general expect a Bayesian method to have a frequentist property
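
For reference, a minimal sketch of the Horvitz-Thompson estimator under discussion (the sample values and inclusion probabilities are made up purely to show the formula):

# Horvitz-Thompson estimate of a population total: weight each sampled
# value by the inverse of its inclusion probability. The numbers are
# made up purely to show the formula.
import numpy as np

y_sampled = np.array([4.0, 7.0, 2.0, 9.0])   # observed values for the sampled units
pi = np.array([0.10, 0.25, 0.05, 0.50])      # inclusion probabilities of those units

ht_total = np.sum(y_sampled / pi)            # design-unbiased for the population total
print(ht_total)                              # 4/0.1 + 7/0.25 + 2/0.05 + 9/0.5 = 126.0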

3 0.76746339 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

Introduction: This post is by Phil. I’m aware that there are some people who use a Bayesian approach largely because it allows them to provide a highly informative prior distribution based on subjective judgment, but that is not the appeal of Bayesian methods for a lot of us practitioners. It’s disappointing and surprising, twenty years after my initial experiences, to still hear highly informed professional statisticians who think that what distinguishes Bayesian statistics from Frequentist statistics is “subjectivity” (as seen in a recent blog post and its comments). My first encounter with Bayesian statistics was just over 20 years ago. I was a postdoc at Lawrence Berkeley National Laboratory, with a new PhD in theoretical atomic physics but working on various problems related to the geographical and statistical distribution of indoor radon (a naturally occurring radioactive gas that can be dangerous if present at high concentrations). One of the issues I ran into right at the start was th

4 0.7589969 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

5 0.74587142 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing

Introduction: Steve Ziliak points me to this article by the always-excellent Carl Bialik, slamming hypothesis tests. I only wish Carl had talked with me before so hastily posting, though! I would’ve argued with some of the things in the article. In particular, he writes: Reese and Brad Carlin . . . suggest that Bayesian statistics are a better alternative, because they tackle the probability that the hypothesis is true head-on, and incorporate prior knowledge about the variables involved. Brad Carlin does great work in theory, methods, and applications, and I like the bit about the prior knowledge (although I might prefer the more general phrase “additional information”), but I hate that quote! My quick response is that the hypothesis of zero effect is almost never true! The problem with the significance testing framework–Bayesian or otherwise–is in the obsession with the possibility of an exact zero effect. The real concern is not with zero, it’s with claiming a positive effect whe

6 0.73584288 586 andrew gelman stats-2011-02-23-A statistical version of Arrow’s paradox

7 0.73394585 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

8 0.71564698 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

9 0.71461231 331 andrew gelman stats-2010-10-10-Bayes jumps the shark

10 0.71024019 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

11 0.70555246 341 andrew gelman stats-2010-10-14-Confusion about continuous probability densities

12 0.7031759 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

13 0.70143968 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

14 0.69921374 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

15 0.69806594 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

16 0.6935184 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

17 0.68455172 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

18 0.67929953 638 andrew gelman stats-2011-03-30-More on the correlation between statistical and political ideology

19 0.67804748 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

20 0.67792356 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.014), (9, 0.024), (16, 0.134), (24, 0.209), (36, 0.013), (55, 0.013), (63, 0.011), (86, 0.021), (89, 0.207), (99, 0.242)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97817743 1160 andrew gelman stats-2012-02-09-Familial Linkage between Neuropsychiatric Disorders and Intellectual Interests

Introduction: When I spoke at Princeton last year, I talked with neuroscientist Sam Wang, who told me about a project he did surveying incoming Princeton freshmen about mental illness in their families. He and his coauthor Benjamin Campbell found some interesting results, which they just published : A link between intellect and temperament has long been the subject of speculation. . . . Studies of the artistically inclined report linkage with familial depression, while among eminent and creative scientists, a lower incidence of affective disorders is found. In the case of developmental disorders, a heightened prevalence of autism spectrum disorders (ASDs) has been found in the families of mathematicians, physicists, and engineers. . . . We surveyed the incoming class of 2014 at Princeton University about their intended academic major, familial incidence of neuropsychiatric disorders, and demographic variables. . . . Consistent with prior findings, we noticed a relation between intended academ

2 0.97620773 1708 andrew gelman stats-2013-02-05-Wouldn’t it be cool if Glenn Hubbard were consulting for Herbalife and I were on the other side?

Introduction: I remember in 4th grade or so, the teacher would give us a list of vocabulary words each week and we’d have to show we learned them by using each in a sentence. We quickly got bored and decided to do the assignment by writing a single sentence using all ten words. (Which the teacher hated, of course.) The above headline is in that spirit, combining blog posts rather than vocabulary words. But that only uses two of the entries. To really do the job, I’d need to throw in bivariate associations, ecological fallacies, high-dimensional feature selection, statistical significance, the suddenly unpopular name Hilary, snotty reviewers, the contagion of obesity, and milk-related spam. Or we could bring in some of the all-time favorites, such as Bayesians, economists, Finland, beautiful parents and their daughters, goofy graphics, red and blue states, essentialism in children’s reasoning, chess running, and zombies. Putting 8 of these in a single sentence (along with Glenn Hubbard

3 0.95620787 833 andrew gelman stats-2011-07-31-Untunable Metropolis

Introduction: Michael Margolis writes: What are we to make of it when a Metropolis-Hastings step just won’t tune? That is, the acceptance rate is zero at expected-jump-size X, and way above 1/2 at X-exp(-16) (i.e., machine precision ). I’ve solved my practical problem by writing that I would have liked to include results from a diffuse prior, but couldn’t. But I’m bothered by the poverty of my intuition. And since everything I’ve read says this is an issue of efficiency, rather than accuracy, I wonder if I could solve it just by running massive and heavily thinned chains. My reply: I can’t see how this could happen in a well-specified problem! I suspect it’s a bug. Otherwise try rescaling your variables so that your parameters will have values on the order of magnitude of 1. To which Margolis responded: I hardly wrote any of the code, so I can’t speak to the bug question — it’s binomial kriging from the R package geoRglm. And there are no covariates to scale — just the zero and one
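
The tuning issue can be illustrated on a deliberately simple target. The sketch below uses a one-dimensional standard normal and arbitrary step sizes; it is not the geoRglm binomial-kriging model in question, just a demonstration of how the acceptance rate moves with the proposal scale:

# Random-walk Metropolis on a one-dimensional standard normal target,
# showing how the acceptance rate depends on the proposal step size.
import numpy as np

def acceptance_rate(step_size, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    log_target = lambda z: -0.5 * z ** 2      # standard normal, up to a constant
    x, accepted = 0.0, 0
    for _ in range(n_iter):
        proposal = x + rng.normal(0.0, step_size)
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x, accepted = proposal, accepted + 1
    return accepted / n_iter

for s in [0.01, 1.0, 2.4, 100.0]:
    print(s, acceptance_rate(s))   # tiny steps accept almost everything, huge steps almost nothing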

4 0.94556677 1756 andrew gelman stats-2013-03-10-He said he was sorry

Introduction: Yes, it can be done : Hereby I contact you to clarify the situation that occurred with the publication of the article entitled *** which was published in Volume 11, Issue 3 of *** and I made the mistake of declaring as an author. This chapter is a plagiarism of . . . I wish to express and acknowledge that I am solely responsible for this . . . I recognize the gravity of the offense committed, since there is no justification for so doing. Therefore, and as a sign of shame and regret I feel in this situation, I will publish this letter, in order to set an example for other researchers do not engage in a similar error. No more, and to please accept my apologies, Sincerely, *** P.S. Since we’re on Retraction Watch already, I’ll point you to this unrelated story featuring a hilarious photo of a fraudster, who in this case was a grad student in psychology who faked his data and “has agreed to submit to a three-year supervisory period for any work involving funding from the

5 0.94241041 1215 andrew gelman stats-2012-03-16-The “hot hand” and problems with hypothesis testing

Introduction: Gur Yaari writes : Anyone who has ever watched a sports competition is familiar with expressions like “on fire”, “in the zone”, “on a roll”, “momentum” and so on. But what do these expressions really mean? In 1985 when Thomas Gilovich, Robert Vallone and Amos Tversky studied this phenomenon for the first time, they defined it as: “. . . these phrases express a belief that the performance of a player during a particular period is significantly better than expected on the basis of the player’s overall record”. Their conclusion was that what people tend to perceive as a “hot hand” is essentially a cognitive illusion caused by a misperception of random sequences. Until recently there was little, if any, evidence to rule out their conclusion. Increased computing power and new data availability from various sports now provide surprising evidence of this phenomenon, thus reigniting the debate. Yaari goes on to some studies that have found time dependence in basketball, baseball, voll

same-blog 6 0.93967634 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon

7 0.93163264 407 andrew gelman stats-2010-11-11-Data Visualization vs. Statistical Graphics

8 0.92560029 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices

9 0.91463059 2243 andrew gelman stats-2014-03-11-The myth of the myth of the myth of the hot hand

10 0.91385263 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

11 0.89683735 1702 andrew gelman stats-2013-02-01-Don’t let your standard errors drive your research agenda

12 0.89661539 231 andrew gelman stats-2010-08-24-Yet another Bayesian job opportunity

13 0.89447582 593 andrew gelman stats-2011-02-27-Heat map

14 0.89134836 1320 andrew gelman stats-2012-05-14-Question 4 of my final exam for Design and Analysis of Sample Surveys

15 0.88682586 623 andrew gelman stats-2011-03-21-Baseball’s greatest fielders

16 0.88457936 459 andrew gelman stats-2010-12-09-Solve mazes by starting at the exit

17 0.88319302 1390 andrew gelman stats-2012-06-23-Traditionalist claims that modern art could just as well be replaced by a “paint-throwing chimp”

18 0.88120359 1473 andrew gelman stats-2012-08-28-Turing chess run update

19 0.87840497 1580 andrew gelman stats-2012-11-16-Stantastic!

20 0.87695378 846 andrew gelman stats-2011-08-09-Default priors update?